About The Position

Are you passionate about building data infrastructure that powers security insights and analytics at scale? Do you want to contribute to the modernization of a security data platform that enables measurable improvements in application security across Amazon? As a Data Engineer on the AppStar DNA team (Data & Analytics Engineering), you will build and maintain data pipelines and infrastructure that support the AppStar organization. You will work across multiple data domains to develop the data infrastructure that powers analytics and reporting. You should be a builder who is passionate about data engineering and eager to learn. You thrive in solving technical problems, building reliable data pipelines, and contributing to a high-performing team. You bring solid expertise in data modeling, ETL/ELT pipeline design, and distributed data systems, and you're excited to grow your skills in modern data architectures and AWS technologies.

Amazon is continuously innovating new services and features for our customers. Our engineers invent, build, and sometimes break things to make them easier, faster, better, and more cost-effective. However, no matter what we're building—from websites to web services, AR to AI, drones to devices—security is always our top priority.

The Amazon Application Security team focuses on working with our builders to provide experiences that our customers can trust. That means constantly learning new things and solving complex problems to protect the safety, security, and privacy of billions of lives on a global scale. At Amazon, you'll be working with the best minds in technology and security. Learn and be curious here, and accelerate your career growth. You can take pride in knowing that your work is meaningful, having a positive impact on others and making the world a better place.

Requirements

  • 3+ years of data engineering experience
  • 1+ years of experience developing and operating large-scale data structures for business intelligence analytics using ETL/ELT processes, OLAP technologies, data modeling, SQL, and Oracle
  • Experience with data modeling, data warehousing, and building ETL pipelines

Nice To Haves

  • Experience with AWS technologies such as Redshift, S3, AWS Glue, EMR, Kinesis, Kinesis Data Firehose, Lambda, and IAM roles and permissions
  • Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Responsibilities

  • Design and implement ETL/ELT pipelines using SQL, Python, and AWS services (Redshift, Glue, S3, Lambda, Step Functions, Athena, Apache Airflow)
  • Build and maintain data models, conformed dimensions, and entity models that support downstream consumption
  • Contribute to the migration and modernization of legacy security data pipelines to modern lakehouse patterns (Apache Iceberg, Spectrum, Lake Formation)
  • Ensure data quality, lineage, and freshness in data pipelines
  • Follow data engineering best practices: data modeling standards, naming conventions, data quality frameworks, CI/CD for data pipelines, and operational excellence
  • Identify and resolve data pipeline issues—simplify complex data flows, remove bottlenecks, and address technical debt
  • Collaborate with senior engineers, business intelligence engineers, data scientists, and security stakeholders to deliver scalable data solutions
  • Design, implement, and support platforms that provide secure access to large datasets
  • Analyze and solve problems at their root, stepping back to understand the broader context
  • Learn and understand a broad range of Amazon's security data resources and know when, how, and which to use
  • Continually improve data infrastructure and pipelines, automating or simplifying self-service support for datasets
  • Participate in design reviews, on-call rotations, and incident response for production data pipelines

Benefits

  • health insurance (medical, dental, vision, and prescription coverage; basic life and AD&D insurance with optional supplemental life plans; EAP, mental health support, a medical advice line, flexible spending accounts, and adoption and surrogacy reimbursement)
  • 401(k) matching
  • paid time off
  • parental leave
  • sign-on payments
  • restricted stock units (RSUs)