Data Engineer I, UTR Planning Tech

Amazon, Nashville, TN

About The Position

UTR Planning Tech builds the data infrastructure that powers labor planning across 12 Amazon last-mile and sort-center business lines. Our pipelines feed the planning systems that determine how Amazon staffs its delivery network, serving hundreds of sites and processing millions of data points daily.

Our team is shifting from hand-coded, custom pipelines to an AI-native approach. We are building reusable frameworks where engineers and business users define what they need through configuration and natural language instead of writing custom code for every use case. AI agents handle orchestration, validation, and deployment.

We are early in this transformation, which means you will not inherit a finished system. You will help define how we build, what patterns we standardize, and how AI fits into data engineering workflows. If you want to learn fast and have a hand in shaping the way a team works, this is that opportunity. We are hiring a Data Engineer I to build data pipelines, contribute to these frameworks, and help shape how AI transforms data engineering at Amazon.

Requirements

  • 1+ years of data engineering experience
  • Experience with data modeling, warehousing, and building ETL pipelines
  • Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
  • Experience with one or more scripting languages (e.g., Python, KornShell)

Nice To Haves

  • Experience with big data technologies such as Hadoop, Hive, Spark, or EMR
  • Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, or DataStage

Responsibilities

  • Design and implement physical data models and ETL pipelines using AWS services (Redshift, S3, EMR, Glue, Lambda) and Python-based orchestration (Airflow/MWAA).
  • Build configuration-driven data frameworks that replace repetitive custom code with reusable, declarative patterns for ingestion, transformation, and metric curation (a minimal sketch of this pattern follows this list).
  • Develop and optimize pipelines that ingest from multiple upstream sources, apply quality controls, and produce datasets that scientists and analysts use for labor planning decisions.
  • Implement monitoring, alarming, and data quality controls using standardized data contracts. Measure and improve dataset quality.
  • Build tooling that enables AI agents to interact with data engineering workflows: generating SQL, validating configurations, and executing pipeline operations through natural language interfaces.
  • Collaborate with planning scientists, software engineers, BI engineers, and product managers to translate data requirements into reusable solutions.
  • Write secure, testable, maintainable code. Get your data models, pipeline designs, and code reviewed. Document your solutions so others can maintain them.
  • Participate in operational excellence practices: on-call rotations, COE reviews, and pipeline health monitoring.
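
To make the configuration-driven pattern above concrete, here is a minimal sketch: one declarative config per dataset, a registry of reusable transform steps, and contract-style quality checks, so onboarding a new dataset means editing configuration rather than writing new pipeline code. Everything in it (the `DatasetConfig` fields, the transform and check names, the sample data) is a hypothetical illustration, not the team's actual framework; in production the same pattern would be orchestrated by Airflow/MWAA and load into Redshift.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical declarative config for one dataset. Field names and check
# types are assumptions for illustration, not the team's real schema.
@dataclass
class DatasetConfig:
    name: str
    source_query: str                    # SQL run against the upstream source
    target_table: str                    # table the pipeline would load
    transforms: list[str] = field(default_factory=list)       # named reusable steps
    quality_checks: dict[str, dict] = field(default_factory=dict)

# Registry of reusable transformation steps keyed by name, so new datasets
# are defined in config instead of new pipeline code.
TRANSFORMS: dict[str, Callable[[list[dict]], list[dict]]] = {
    "drop_nulls": lambda rows: [r for r in rows if all(v is not None for v in r.values())],
    "dedupe": lambda rows: [dict(t) for t in {tuple(sorted(r.items())) for r in rows}],
}

def run_quality_checks(rows: list[dict], checks: dict[str, dict]) -> None:
    """Stand-ins for data-contract checks (row count, required columns)."""
    if "min_rows" in checks and len(rows) < checks["min_rows"]["value"]:
        raise ValueError(f"Expected at least {checks['min_rows']['value']} rows, got {len(rows)}")
    for col in checks.get("required_columns", {}).get("value", []):
        if any(col not in r for r in rows):
            raise ValueError(f"Missing required column: {col}")

def run_pipeline(config: DatasetConfig, extract: Callable[[str], list[dict]]) -> list[dict]:
    """Execute extract -> configured transforms -> quality checks for one dataset."""
    rows = extract(config.source_query)
    for step in config.transforms:
        rows = TRANSFORMS[step](rows)
    run_quality_checks(rows, config.quality_checks)
    # A real implementation would load `rows` into config.target_table here.
    return rows

if __name__ == "__main__":
    cfg = DatasetConfig(
        name="site_headcount_daily",
        source_query="SELECT site_id, plan_date, planned_headcount FROM upstream.headcount",
        target_table="planning.site_headcount_daily",
        transforms=["drop_nulls", "dedupe"],
        quality_checks={
            "min_rows": {"value": 1},
            "required_columns": {"value": ["site_id", "plan_date"]},
        },
    )
    # Fake extractor so the sketch runs without any AWS access.
    sample = lambda _q: [
        {"site_id": "BNA5", "plan_date": "2024-06-01", "planned_headcount": 120},
        {"site_id": "BNA5", "plan_date": "2024-06-01", "planned_headcount": 120},
        {"site_id": "DNA2", "plan_date": "2024-06-01", "planned_headcount": None},
    ]
    print(run_pipeline(cfg, sample))
```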

Benefits

  • health insurance (medical, dental, vision, prescription; Basic Life & AD&D with optional supplemental life plans; EAP, mental health support, and a medical advice line; Flexible Spending Accounts; adoption and surrogacy reimbursement coverage)
  • 401(k) matching
  • paid time off
  • parental leave