About The Position

Join JPMorgan Chase as a Senior Principal Software Engineer, where you will own the Databricks platform architecture on AWS, build Terraform and Python automation, mentor teams, and drive high-impact data innovation.

The Chief Data & Analytics Office (CDAO) at JPMorgan Chase is responsible for accelerating the firm's data and analytics journey. This includes ensuring the quality, integrity, and security of the company's data, as well as leveraging that data to generate insights and drive decision-making. The CDAO also develops and implements solutions that support the firm's commercial goals, harnessing artificial intelligence and machine learning to develop new products, improve productivity, and enhance risk management effectively and responsibly.

As a Senior Principal Software Engineer at JPMorgan Chase in Corporate – AIML Data Platforms and the Chief Data & Analytics team, you will bring deep expertise in Databricks and large-scale data engineering. In this role you will drive the architecture, design, and implementation of advanced data solutions, leveraging Databricks and related technologies to enable business insights and innovation.

Requirements

  • Formal training or certification in software engineering concepts and 10+ years of applied experience
  • Expert-level proficiency in Databricks platform administration, AWS networking, and infrastructure-as-code with Terraform
  • Hands-on experience with Python and/or Java application development, including automated unit testing
  • Hands-on practical experience delivering system design, application development, testing, and operational stability
  • Hands-on practical experience with Terraform development and an understanding of Terraform Enterprise
  • Hands-on experience with GitHub or Bitbucket version control, the Jenkins build tool, and PyPI/Maven Artifactory integrations
  • Experience managing the product release lifecycle at an enterprise level

Nice To Haves

  • Strong knowledge of distributed compute frameworks like Spark and how platform-level configurations impact workload performance and reliability
  • Experience with Agile development processes (Scrum/Kanban) using JIRA
  • Experience building data pipelines with Spark

Responsibilities

  • Designs and develops infrastructure automation for the Databricks control plane, including workspace provisioning, account configuration, onboarding/offboarding, and Delta Sharing using Terraform, AWS, and Python Lambdas.
  • Develops secure, high-quality production code across Terraform modules, Python Lambda functions, and control plane services while reviewing and debugging code across the team.
  • Leads the architecture of scalable Databricks platform infrastructure, including workspace automation, AWS networking, and control plane APIs enabling self-service for downstream teams.
  • Defines and implements best practices for data engineering, data lake architecture, and distributed computing
  • Solves the company's most challenging cloud data platform problems by building innovative technical solutions around data lake tools
  • Identifies opportunities to eliminate or automate remediation of recurring issues to improve overall operational stability of software applications and systems
  • Adds to team culture of diversity, opportunity, inclusion, and respect
  • Mentors and guides technical teams, fostering a culture of continuous learning and excellence in software engineering practices
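
To give a flavor of the workspace-provisioning automation described in the responsibilities above, here is a minimal Python sketch of the kind of logic a control-plane Lambda might contain. The endpoint payload fields, helper names, and the `WorkspaceRequest` shape are illustrative assumptions for this posting, not JPMorgan Chase's actual implementation or the definitive Databricks Accounts API schema.

```python
"""Hypothetical sketch of Databricks workspace onboarding automation.

All names and payload fields below are assumptions for illustration;
a real implementation would call the Databricks account-level APIs
(e.g. via the Databricks SDK or Terraform provider).
"""
from dataclasses import dataclass


@dataclass
class WorkspaceRequest:
    """An onboarding request a downstream team might submit (assumed shape)."""
    team: str
    env: str          # e.g. "dev", "prod"
    aws_region: str   # e.g. "us-east-1"


def workspace_name(req: WorkspaceRequest) -> str:
    # Derive a deterministic name so re-running the automation is
    # idempotent: the same request always maps to the same workspace.
    return f"{req.team}-{req.env}-{req.aws_region}".lower()


def build_provisioning_payload(
    req: WorkspaceRequest,
    credentials_id: str,
    storage_configuration_id: str,
) -> dict:
    # Assemble the request body a Lambda might POST to an account-level
    # provisioning endpoint (field names here are assumptions).
    return {
        "workspace_name": workspace_name(req),
        "aws_region": req.aws_region,
        "credentials_id": credentials_id,
        "storage_configuration_id": storage_configuration_id,
    }
```

In practice the deterministic-naming step matters most: it lets the automation safely retry failed onboarding runs without creating duplicate workspaces.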