Data Engineer III

JPMorgan Chase, Columbus, OH

About The Position

This is your chance to change the path of your career and work at one of the world's leading financial institutions. As a Python/Java Software Engineer III at JPMorgan Chase within the Consumer & Community Banking/Data Products team, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives.

Requirements

  • Formal training or certification on software engineering concepts and 3+ years of applied experience
  • Proficiency in Python/Java technology
  • Hands-on practical experience in system design, application development, testing, and operational stability
  • Proficient in coding in one or more languages such as Python/Java
  • Experience in building solutions using cloud technologies like AWS
  • Exposure to message streaming technologies such as Kafka/Flink
  • Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages
  • Overall knowledge of the Software Development Life Cycle
  • Solid understanding of agile methodologies, as well as CI/CD, application resiliency, and security
  • Demonstrated knowledge of software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile, etc.)

Responsibilities

  • Demonstrate Python development experience, including modular coding, reusable classes, logging, and testing
  • Understand PySpark or other distributed data processing frameworks
  • Exhibit proficiency in writing dbt macros, Jinja templates, seeds, and custom test cases
  • Have experience with Airflow on MWAA, Astronomer, or Kubernetes
  • Show proficiency with Glue jobs, Athena, CloudWatch, and Lambda
  • Apply working knowledge of Terraform for infrastructure automation
  • Apply understanding of Iceberg, Hudi, or Delta Lake and their role in data lake architecture
  • Query and manage open-table-format datasets
  • Design and optimize ETL/ELT pipelines
  • Demonstrate knowledge of Snowflake streams, tasks, roles, and warehouses
  • Use version control (Git), testing frameworks, and code reviews

What This Job Offers

Job Type

Full-time

Career Level

Mid Level

Education Level

No Education Listed

Number of Employees

5,001-10,000 employees
