About The Position

Do you enjoy unraveling intricate data puzzles? Do you thrive on collaborative problem-solving and on seeing the ripple effects of your contributions across the larger organization? If so, welcome to the Amazon PeopleInsights eXperience (APIX) team. We're a dynamic team of engineers building Amazon's PXT Business Analytics tools and software. Together we confront tangible challenges head-on, invent fresh solutions every day, and work to improve customer experiences in unexpected ways.

We are currently seeking a talented and motivated Data Engineer. In this role, you will build data engineering applications using the AWS stack. You should have deep expertise in, and passion for, working with large data sets, data visualization, building complex data processes, performance tuning, bringing together data from disparate data stores, and programmatically identifying patterns. You should also have the business acumen and communication skills to work with business owners to develop and define key business questions and requirements.

APIX has a culture of data-driven decision-making and demands timely, accurate, and actionable business insights. Our mission is to harness the power of people data to empower PXT and people leaders to make decisions that help Amazon become the Earth’s Best Employer. The successful candidate will have experience working with big data and building data warehouses and data processing services, and will be effective at recognizing data patterns and building generic data solutions that improve the user experience.

Requirements

  • 1+ years of data engineering experience
  • Experience with data modeling, warehousing, and building ETL pipelines (a minimal sketch of such a pipeline follows this list)
  • Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
  • Experience with one or more scripting languages (e.g., Python, KornShell)
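
For a flavor of the day-to-day work these requirements describe, below is a minimal ETL sketch in Python, with SQLite standing in for a warehouse such as Redshift. The file, table, and column names are hypothetical illustrations, not details from this posting.

    import csv
    import sqlite3

    # Minimal ETL sketch (hypothetical names throughout): extract rows from
    # a CSV file, apply a small transformation, and load the result into a
    # warehouse-style fact table.
    def run_pipeline(csv_path: str, db_path: str) -> None:
        conn = sqlite3.connect(db_path)
        conn.execute(
            "CREATE TABLE IF NOT EXISTS headcount_fact "
            "(org TEXT, month TEXT, headcount INTEGER)"
        )
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                # Transform: normalize org names and cast headcount to int.
                conn.execute(
                    "INSERT INTO headcount_fact VALUES (?, ?, ?)",
                    (row["org"].strip().upper(), row["month"], int(row["headcount"])),
                )
        conn.commit()
        conn.close()

    if __name__ == "__main__":
        run_pipeline("headcount.csv", "warehouse.db")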

Nice To Haves

  • Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
  • Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, or DataStage

Responsibilities

  • Design, implement, and support business-critical data warehouse and data lake infrastructure using the AWS big data stack (Python, Redshift, Glue/Lake Formation, EMR/Spark/Scala, Athena, etc.) in a stable, low-cost model; a sketch of this kind of job follows this list.
  • Collaborate with Business Intelligence Engineers to recognize and help adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation.
  • Internalize our customer challenges and derive creative solutions which apply new and innovative technology.
  • Work directly with customers to integrate new data types, equipment, and incorporate feedback.
  • Empower technical and non-technical, internal customers to drive their own analytics and reporting (self-serve reporting) and support ad-hoc reporting when needed.
  • Help continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
  • Be comfortable with a degree of ambiguity and willing to develop quick proofs of concept, iterate, and improve.
  • Apply software best practices including coding standards, code reviews, source control management, agile development, build processes, and testing.
  • Actively support and foster a culture of inclusion.
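
As a concrete illustration of the first responsibility above, here is a short PySpark sketch of the kind of job that might run on EMR or AWS Glue. The S3 buckets, paths, and column names are hypothetical placeholders, not details from this posting.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Sketch of an EMR/Glue-style Spark job (hypothetical buckets and
    # columns): read raw event data from S3, aggregate it per day, and
    # write a partitioned table back to the data lake.
    spark = SparkSession.builder.appName("apix-sketch").getOrCreate()

    events = spark.read.parquet("s3://example-raw-bucket/people-events/")

    daily_counts = (
        events
        .withColumn("event_date", F.to_date("event_ts"))
        .groupBy("event_date", "event_type")
        .agg(F.count("*").alias("event_count"))
    )

    # Partitioning by date keeps downstream Athena queries cheap to scan.
    (daily_counts.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://example-curated-bucket/daily-event-counts/"))

Writing partitioned Parquet like this is one common way to make curated data queryable from Athena without an additional load step.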

Benefits

  • health insurance (medical, dental, vision, prescription, Basic Life & AD&D insurance with an option for Supplemental Life plans, EAP, Mental Health Support, Medical Advice Line, Flexible Spending Accounts, and Adoption and Surrogacy Reimbursement coverage)
  • 401(k) matching
  • paid time off
  • parental leave