Data Engineer - AWS/PySpark/ETL

JPMorgan Chase & Co., Columbus, OH

About The Position

As a Data Engineer II at JPMorgan Chase within Consumer and Community Banking Data Technology, you serve as a seasoned member of an agile team, designing and delivering trusted data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. As a core technical contributor, you are responsible for delivering critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.

Requirements

  • Formal training or certification in data engineering concepts and 2+ years of applied experience
  • Experience with ETL tools such as Ab Initio, Informatica, or Data Pipeline, and workflow management tools (e.g., Airflow)
  • Strong hands-on coding experience with PySpark, Python, and AWS
  • Experience working with modern data lakes (e.g., Snowflake, Databricks)
  • Hands-on practical experience delivering system design, application development, testing, and operational stability
  • Very strong problem-solving skills
  • Proficiency in automation and continuous delivery methods

Nice To Haves

  • Advanced proficiency in one or more programming languages, such as SQL or Java
  • Proficient in all aspects of the Software Development Life Cycle
  • Advanced understanding of agile practices and related concepts such as CI/CD, application resiliency, and security
  • Demonstrated proficiency in software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning)
  • In-depth knowledge of the financial services industry and its IT systems
  • Practical cloud native experience
  • Proven leadership and mentoring experience with varying levels of software engineers

Responsibilities

  • Develop secure, high-quality production code, and review and debug code written by others
  • Execute creative software solutions, design, development, and technical troubleshooting, with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems
  • Identify opportunities to eliminate or automate remediation of recurring issues to improve the overall operational stability of software applications and systems
  • Collaborate closely with cross-functional teams to develop efficient data pipelines to support various data-driven initiatives
  • Implement best practices for data engineering, ensuring data quality, reliability, and performance
  • Contribute to data modernization efforts by leveraging cloud solutions and optimizing data processing workflows
  • Perform data extraction and implement complex data transformation logic to meet business requirements
  • Leverage advanced analytical skills to improve data pipelines and ensure data delivery is consistent across projects
  • Monitor and execute data quality checks to proactively identify and address anomalies
  • Ensure data availability and accuracy for analytical purposes
  • Communicate technical concepts to both technical and non-technical stakeholders

Benefits

  • Comprehensive health care coverage
  • On-site health and wellness centers
  • A retirement savings plan
  • Backup childcare
  • Tuition reimbursement
  • Mental health support
  • Financial coaching