This job is closed

We regret to inform you that the job you were interested in has been closed. Although this specific position is no longer available, we encourage you to continue exploring other opportunities on our job board.

Amazon.com • Posted about 1 month ago
Full-time • Mid Level
North Reading, MA
General Merchandise Retailers

About the position

The Amazon BADS (Business Analytics and Data Science) team is seeking a Data Engineer with an aptitude for systems development. If you enjoy innovating, thinking big, and building automation solutions for a highly scalable data platform using AWS technologies, you may be a prime candidate for this position. We are looking for an experienced, self-driven, analytical, and strategic Data Engineer with strong Python skills and a solid background in systems development. In this role, you will work in a large, complex data warehouse and automation environment. You should be passionate about writing optimized, efficient code, working with disparate data sources, and bringing information together to answer critical business questions. You should have deep expertise in creating and managing data automation, and a proven ability to translate data into meaningful insights through collaboration with analysts and engineers. You will share ownership of end-to-end development of data engineering solutions to answer complex questions, and you will play an integral role in strategic decision-making.
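As a purely illustrative sketch of "bringing disparate data sources together to answer business questions," the toy example below joins hypothetical order events (JSON) against a warehouse lookup table and totals units by region. The data, function name, and warehouse codes are invented for illustration and are not part of the job description.

```python
import json
from collections import defaultdict

# Hypothetical inline data standing in for two disparate sources:
# a JSON feed of order events and a tabular warehouse lookup.
orders_json = (
    '[{"order_id": 1, "warehouse": "BOS7", "units": 3},'
    ' {"order_id": 2, "warehouse": "BOS7", "units": 5},'
    ' {"order_id": 3, "warehouse": "SEA4", "units": 2}]'
)
warehouses = [("BOS7", "North Reading, MA"), ("SEA4", "Seattle, WA")]

def units_by_region(orders_src, warehouse_rows):
    """Join order events to warehouse locations and total units per region."""
    location = dict(warehouse_rows)          # warehouse code -> region
    totals = defaultdict(int)
    for order in json.loads(orders_src):
        totals[location[order["warehouse"]]] += order["units"]
    return dict(totals)

print(units_by_region(orders_json, warehouses))
# {'North Reading, MA': 8, 'Seattle, WA': 2}
```

In practice this kind of join would run at scale in Redshift, Glue, or Spark rather than in-memory Python, but the shape of the work is the same: normalize each source, join on a shared key, and aggregate into an answer.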

Responsibilities

  • Design, develop, and maintain Python-based automation solutions leveraging AWS technologies such as Redshift, DynamoDB, Athena, Glue, S3 and Kinesis
  • Prioritize and deliver structured and ad-hoc data automation projects, seamlessly integrating CI/CD practices
  • Architect and manage scalable data warehouse solutions on AWS, ensuring high availability and performance
  • Build and maintain orchestration workflows using Amazon Managed Workflows for Apache Airflow (MWAA)
  • Troubleshoot and support new and existing data pipelines across Development, Beta, and Production environments, adhering to industry best practices
  • Collect, analyze, and present actionable data insights to inform operational and logistics strategies

Requirements

  • 4+ years of data engineering experience
  • Experience with data modeling, warehousing and building ETL pipelines
  • Experience building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets
  • Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
  • Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or Node.js

Nice-to-haves

  • Experience with Apache Spark / Elastic MapReduce (EMR)
  • Experience with distributed systems as they pertain to data storage and computing
  • Experience as a data engineer or related specialty (e.g., software engineer, business intelligence engineer, data scientist) with a track record of manipulating, processing, and extracting value from large datasets
  • Experience providing technical leadership and mentoring other engineers for best practices on data engineering

Benefits

  • Medical, Dental, and Vision Coverage
  • Maternity and Parental Leave Options
  • Paid Time Off (PTO)
  • 401(k) Plan
© 2024 Teal Labs, Inc