Lead Software Engineer – Databricks / PySpark / AWS

JPMorgan Chase – Wilmington, DE

About The Position

We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Lead Software Engineer at JPMorgan Chase within Corporate Technology, you will play a crucial role in an agile team dedicated to enhancing, building, and delivering trusted, market-leading technology products in a secure, stable, and scalable manner. As a key technical contributor, you will implement critical technology solutions across multiple technical domains, supporting various business functions to achieve the firm's business objectives.

Requirements

  • Formal training or certification in software engineering concepts and 5+ years of applied experience.
  • Proficiency in Python and in distributed data-processing frameworks such as PySpark.
  • Experience designing and implementing data pipelines in cloud environments; a minimal sketch of this kind of pipeline follows this list.
  • Strong background in design, architecture, and development using AWS Services, Databricks, Spark, Snowflake, and related technologies.
  • Experience with CI/CD and infrastructure-as-code tools such as Jenkins, GitLab, or Terraform.
  • Familiarity with containerization and orchestration technologies including ECS, Kubernetes, and Docker.
  • Ability to troubleshoot issues related to Big Data and Cloud technologies.
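For illustration, here is a minimal PySpark sketch of the kind of cloud data pipeline described above. The bucket names, paths, and column names are hypothetical placeholders, not details from this posting; it assumes a Databricks or local Spark environment with S3 access and the Delta Lake format available.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks a SparkSession named `spark` is provided; getOrCreate()
# returns it there and builds a local session elsewhere.
spark = SparkSession.builder.appName("example-ingest").getOrCreate()

# Hypothetical S3 locations; real bucket names and layouts will differ.
raw_path = "s3://example-raw-bucket/trades/"
curated_path = "s3://example-curated-bucket/trades_delta/"

# Ingest raw CSV files from object storage.
trades = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv(raw_path)
)

# Normalize a (hypothetical) date column and derive a partition key.
curated = (
    trades
    .withColumn("trade_date", F.to_date("trade_date", "yyyy-MM-dd"))
    .withColumn("year", F.year("trade_date"))
)

# Write to Delta Lake, the default table format on Databricks,
# partitioned for efficient downstream reads.
(curated.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("year")
    .save(curated_path))
```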

Nice To Haves

  • 5+ years of experience leading and developing data solutions in AWS Cloud.
  • 10+ years of experience building, implementing, and managing data pipelines using Databricks, Spark, or similar cloud technologies.

Responsibilities

  • Design solutions at the appropriate level of detail and drive consensus among peers as needed.
  • Champion software engineering best practices within the team.
  • Collaborate with software engineers and cross-functional teams to design and implement deployment strategies using AWS Cloud and Databricks pipelines.
  • Lead the design, development, testing, and implementation of application solutions.
  • Partner with technical experts, stakeholders, and team members to resolve complex technical challenges.
  • Proactively address issues to support leadership objectives and prevent customer impact.
  • Design, develop, and maintain robust data pipelines for ingesting, processing, and storing large volumes of data from diverse sources.
  • Implement ETL (Extract, Transform, Load) processes to ensure data quality and integrity using tools such as Apache Spark and PySpark; a sketch of this pattern follows this list.
  • Monitor and optimize the performance of data systems and pipelines.
  • Apply best practices for data storage, retrieval, and processing.
  • Maintain comprehensive documentation for data systems, processes, and workflows.
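The ETL and data-quality responsibilities above might look like the following minimal PySpark sketch. All paths, the customer_id key, and the 5% tolerance threshold are hypothetical assumptions for illustration; a production pipeline would use environment-specific locations and validation rules, for example via Delta Live Tables expectations or Great Expectations.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl-quality").getOrCreate()

# Hypothetical locations; in practice the source might be Kafka, S3, or Snowflake.
source_path = "s3://example-raw-bucket/customers/"
target_path = "s3://example-curated-bucket/customers_delta/"
reject_path = "s3://example-curated-bucket/customers_rejects/"

df = spark.read.parquet(source_path)

# Split rows on a basic integrity rule: the (hypothetical) primary key
# must be present, and only one row is kept per key.
valid = df.filter(F.col("customer_id").isNotNull()).dropDuplicates(["customer_id"])
rejected = df.filter(F.col("customer_id").isNull())

# Fail fast if the reject rate exceeds an assumed 5% tolerance threshold.
total, bad = df.count(), rejected.count()
if total > 0 and bad / total > 0.05:
    raise ValueError(f"Data quality gate failed: {bad}/{total} rows rejected")

# Load clean rows into the curated table; quarantine rejects for audit.
valid.write.format("delta").mode("append").save(target_path)
rejected.write.format("delta").mode("append").save(reject_path)
```

Quarantining rejected rows instead of silently dropping them keeps the pipeline auditable and makes data-quality regressions visible to downstream consumers.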