Senior AWS Cloud Data Engineer I

American Honda Motor Co., Inc.
Marysville, OH
Onsite

About The Position

The Data Engineer designs, builds, and maintains scalable data solutions to enable advanced analytics and business intelligence across Honda’s enterprise.

Requirements

  • A Bachelor's degree in Computer Science, Data Science, Applied Mathematics, Information Technology, Information Sciences, Electronics, or a related field of study, or equivalent relevant work experience.
  • At least 3 years of demonstrable industry experience in data engineering or related roles.
  • Proven track record in Amazon Web Services (AWS) based data solutions and orchestration
  • Integration with ERP systems (SAP, Homegrown ERP Systems)
  • API-based Data Exchange between Manufacturing, Supply Chain legacy applications and AWS pipelines
  • Metadata Management for compliance attributes
  • Audit Trails & Reporting for compliance verification
  • Expertise in cloud technologies to design, build, and maintain data-driven solutions
  • Skilled in Data Architecture and Data Engineering with a strong background in the Supply Chain domain
  • Experienced in Data Modeling (Conceptual, Logical, and Physical), ETL optimization, query optimization, and performance tuning
  • Programming languages: Python, PySpark, SQL
  • AWS Services: Glue, EMR, EC2, Lambda, DMS, S3, Redshift, RDS
  • Data Governance: Informatica CDGC/CDQ
  • DevOps Tools: Git, GitHub, AWS CDK
  • Security: IAM, encryption policies
  • Monitoring: CloudWatch, Glue Catalog, Athena
  • Strong integration background with DB2/UDB, SQL Server, and similar databases
  • Strong communication and collaboration skills.
  • Ability to work with cross-functional teams and stakeholders.
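
As a flavor of the ETL and SQL transformation skills listed above, here is a minimal, self-contained sketch. It uses Python's built-in sqlite3 in place of Redshift or Glue purely for illustration; the table, column, and plant names are hypothetical examples, not part of the posting.

```python
# Illustrative sketch only: aggregate raw shipment rows into a reporting view.
# sqlite3 stands in for a warehouse engine; names are hypothetical.
import sqlite3


def load_and_aggregate(rows):
    """Load raw supplier shipment rows, then aggregate units per part."""
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE shipments (part_no TEXT, plant TEXT, units INTEGER)"
    )
    conn.executemany("INSERT INTO shipments VALUES (?, ?, ?)", rows)
    # A typical transformation step: group and sort for a downstream layer.
    cur = conn.execute(
        "SELECT part_no, SUM(units) AS total_units "
        "FROM shipments GROUP BY part_no ORDER BY part_no"
    )
    result = cur.fetchall()
    conn.close()
    return result


sample = [
    ("A100", "Marysville", 40),
    ("A100", "East Liberty", 60),
    ("B200", "Marysville", 5),
]
print(load_and_aggregate(sample))  # → [('A100', 100), ('B200', 5)]
```

In a production pipeline the same GROUP BY logic would typically run in PySpark or Redshift SQL; the pattern is identical.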

Responsibilities

  • Data Pipeline Development:
      • Design and implement ETL pipelines using AWS services (Glue, EMR, DMS, S3, Redshift).
      • Orchestrate workflows with AWS Step Functions, EventBridge, and Lambda.
      • Integrate CI/CD pipelines with GitHub and AWS CDK for automated deployments.
  • Data Modeling & Transformation:
      • Develop conceptual, logical, and physical data models for operational and analytical systems.
      • Optimize queries, normalize datasets, and apply performance tuning techniques.
      • Use Python, PySpark, and SQL for data transformation and automation.
  • Security & Governance:
      • Implement IAM roles and encryption policies for data protection.
      • Ensure compliance with governance standards using tools like Informatica CDGC/CDQ.
  • Monitoring & Optimization:
      • Monitor pipeline performance using CloudWatch and Glue job logs.
      • Troubleshoot and resolve data quality and performance issues proactively.
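
To give a sense of the orchestration responsibility above, a minimal Step Functions state machine (Amazon States Language) that runs a Glue job and publishes a completion notification might look like the sketch below. All job names, topic ARNs, and account numbers are hypothetical placeholders, not part of this posting.

```json
{
  "Comment": "Illustrative sketch: run a Glue ETL job, then notify on success.",
  "StartAt": "RunGlueJob",
  "States": {
    "RunGlueJob": {
      "Type": "Task",
      "Resource": "arn:aws:states:::glue:startJobRun.sync",
      "Parameters": { "JobName": "example-etl-job" },
      "Retry": [
        {
          "ErrorEquals": ["States.ALL"],
          "IntervalSeconds": 60,
          "MaxAttempts": 2
        }
      ],
      "Next": "NotifyOnSuccess"
    },
    "NotifyOnSuccess": {
      "Type": "Task",
      "Resource": "arn:aws:states:::sns:publish",
      "Parameters": {
        "TopicArn": "arn:aws:sns:us-east-1:123456789012:example-topic",
        "Message": "ETL run complete"
      },
      "End": true
    }
  }
}
```

The `.sync` suffix makes the state machine wait for the Glue job to finish before moving on, which is how retries and failure handling are usually centralized in this kind of workflow.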