
Data Engineer, MIDAS, Digital Acceleration

Global technology company specializing in e-commerce, cloud computing, digital streaming, and artificial intelligence.
Data
Entry-Level Software Engineer
In-Person
5,000+ Employees
1+ year of experience
Enterprise SaaS · AI

Job Description

Join Amazon's Digital Acceleration (DA) organization as a Data Engineer on the MIDAS team, where you'll be at the forefront of the digital media revolution. This role focuses on developing foundational analytical datasets spanning orders, subscriptions, discovery, promotions, pricing, and royalties.

The MIDAS team operates within Amazon's Digital Analytics engineering organization, building analytics and data engineering solutions that support cross-digital teams. Our platform delivers capabilities including metadata discovery, data lineage, customer segmentation, compliance automation, AI-driven data access through generative AI and LLMs, and advanced data quality monitoring. You'll work with AWS services such as Redshift, Kinesis, EMR, and Lambda, along with internal BDT tools. The platform currently serves over 100 Amazon business teams with 20,000+ monthly active users.

This role requires strong data engineering skills, business judgment, and excellent communication abilities. You'll be instrumental in enabling digital clients to innovate with data and make faster product and customer decisions.


Responsibilities For Data Engineer, MIDAS, Digital Acceleration

  • Develop data products, infrastructure and data pipelines leveraging AWS services
  • Improve existing solutions, or build new ones, to increase scale, quality, IMR efficiency, data availability, consistency, and compliance
  • Partner with Software Developers, Business Intelligence Engineers, Machine Learning Engineers, Scientists, and Product Managers
  • Drive operational excellence and build automation mechanisms

Requirements For Data Engineer, MIDAS, Digital Acceleration

  • 1+ years of data engineering experience
  • Experience with SQL
  • Experience with data modeling, warehousing and building ETL pipelines
  • Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
  • Experience with one or more scripting languages (e.g., Python, KornShell)
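To illustrate the kind of work the ETL requirement points at, here is a minimal, hypothetical extract-transform-load sketch in Python. The order records, table schema, and field names are invented for illustration and are not from this posting or Amazon's systems; it uses only the standard library (`sqlite3`) as a stand-in for a real warehouse target like Redshift.

```python
import sqlite3

# Hypothetical raw source extract: duplicated rows and string-typed amounts,
# the sort of mess a transform step is expected to clean up.
raw_orders = [
    {"order_id": "A1", "amount_usd": "19.99", "region": "us"},
    {"order_id": "A2", "amount_usd": "5.00", "region": "eu"},
    {"order_id": "A2", "amount_usd": "5.00", "region": "eu"},  # duplicate
]

def transform(rows):
    """Deduplicate on order_id, cast amounts to float, normalize region codes."""
    seen, out = set(), []
    for r in rows:
        if r["order_id"] in seen:
            continue
        seen.add(r["order_id"])
        out.append((r["order_id"], float(r["amount_usd"]), r["region"].upper()))
    return out

def load(rows, conn):
    """Load cleaned rows into the target table (sqlite stands in for a warehouse)."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(order_id TEXT PRIMARY KEY, amount_usd REAL, region TEXT)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(raw_orders), conn)
total = conn.execute("SELECT COUNT(*), SUM(amount_usd) FROM orders").fetchone()
```

In a production pipeline the same extract/transform/load shape would typically be orchestrated across AWS services (e.g., Kinesis for ingestion, EMR or Lambda for transforms, Redshift as the target) rather than run in-process.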
