About The Position

Amazon Web Services is seeking a talented and innovative Data Engineer II to design, build, and evolve critical data capabilities for Sales Planning and Compensation (SPC). SPC delivers secure, highly available, scalable, and high-performance data solutions that empower our field teams to deliver exceptional service to AWS customers. Join a dynamic, high-impact team at an exciting stage of product evolution. You'll help shape the future of how AWS manages and derives value from data at massive scale, while collaborating with product managers, program leaders, data scientists, and cross-AWS technical partners. As a Data Engineer II, you'll own end-to-end data solutions — from ingestion and transformation to analytics and insight generation — while incorporating modern generative AI practices to enhance efficiency, automation, and decision-making.

Requirements

  • 5+ years of experience developing and operating large-scale data structures for business intelligence analytics, including ETL/ELT processes, SQL, and data modeling
  • Experience with data modeling, warehousing, and building ETL pipelines
  • 5+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL databases, etc.

Nice To Haves

  • Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
  • Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)
  • Experience as a data engineer or related specialty (e.g., software engineer, business intelligence engineer, data scientist) with a track record of manipulating, processing, and extracting value from large datasets
  • Experience working on and delivering end-to-end projects independently

Responsibilities

  • Design and implement robust, scalable data pipelines and ETL/ELT processes using AWS-native services (e.g., Glue, Lambda, EMR, Kinesis, S3, Redshift/Spectrum).
  • Build and maintain data models, schemas, and storage solutions across relational (SQL) and NoSQL databases, data lakes, and warehouses.
  • Develop, automate, and optimize metrics, reports, dashboards, and analytics workflows to drive business insights and data-informed decisions.
  • Own infrastructure for data processing and analytics (e.g., Redshift clusters, Spectrum, EMR), including performance tuning, cost optimization, and architectural evolution.
  • Leverage Amazon Bedrock, Nova models, Amazon Q, Kiro, and other internal AWS GenAI services to prototype intelligent features, automate data workflows, enhance data quality, and accelerate insight delivery.
  • Demonstrate strong understanding of the broader GenAI ecosystem and apply it thoughtfully to real-world data engineering challenges in daily projects.
  • Conduct rapid prototyping, proof-of-concepts, and automation tooling to benchmark, validate, and improve data collection, processing, and analytics.
  • Collaborate across teams to ingest, transform, and integrate data from diverse sources using AWS big data technologies.
  • Champion best practices in data integrity, testing, validation, monitoring, and documentation in a fast-paced environment.
  • Proactively identify opportunities to improve system reliability, scalability, and efficiency while solving problems at their root.

Benefits

  • health insurance (medical, dental, vision, prescription; Basic Life & AD&D insurance with optional Supplemental Life plans; EAP, Mental Health Support, and Medical Advice Line; Flexible Spending Accounts; Adoption and Surrogacy Reimbursement coverage)
  • 401(k) matching
  • paid time off
  • parental leave
  • sign-on payments
  • restricted stock units (RSUs)