Lead Software Engineer - Market Risk

JPMorgan Chase & Co.
Jersey City, NJ

About The Position

This is an opportunity to advance your career and push the limits of what's possible. As a Lead Software Engineer at JPMorgan Chase within the Market Risk MXL DataLake Team, you will join a strategic initiative building cutting-edge data platforms for market risk and analytics. In this role, you will design and implement high-volume data pipelines and historical data stores, collaborating closely with architects, risk technologists, and product owners.

Requirements

  • Degree-level education in Computer Science, Software Engineering, or a related discipline (or equivalent practical experience)
  • Strong software engineering fundamentals, including data structures, algorithms, and system design
  • Proven experience building large-scale data engineering solutions on big-data platforms
  • Hands-on experience developing PySpark / Spark pipelines in production environments
  • Solid understanding of data modelling for analytical and historical data use cases
  • Experience working with large volumes of structured data over long time horizons
  • Familiarity with distributed systems concepts such as fault tolerance, parallelism, and idempotent processing (see the sketch after this list)
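
To illustrate the idempotent-processing point above, here is a minimal PySpark sketch of an incremental load that can be safely re-run. The paths, table, and column names ("trades", "trade_id", "business_date") are assumptions chosen for illustration, not details of the team's actual platform.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("idempotent-incremental-load").getOrCreate()

    business_date = "2024-06-28"  # hypothetical run parameter

    # Read only the slice for this run; path and schema are illustrative assumptions.
    incoming = (
        spark.read.parquet("/data/raw/trades")
        .where(F.col("business_date") == business_date)
    )

    # Drop duplicates on the business key so replayed input cannot double-count.
    deduped = incoming.dropDuplicates(["trade_id", "business_date"])

    # Dynamic partition overwrite rewrites only the partition being processed,
    # so a retry of the same business_date yields the same result (idempotence).
    spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")
    (
        deduped.write.mode("overwrite")
        .partitionBy("business_date")
        .parquet("/data/curated/trades")
    )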

Nice To Haves

  • Experience with Databricks, Delta Lake, or similar cloud-based big-data platforms
  • Hands-on experience designing and implementing Data Vault 2.0 models (see the sketch after this list)
  • Exposure to historical / regulatory data platforms, risk data, or financial services
  • Knowledge of append-only data patterns, slowly changing dimensions, or event-driven data models
  • Experience with CI/CD, automated testing, and production monitoring for data pipelines
  • Experience building highly reliable, production-grade risk systems with robust controls and integration with modern SRE tooling
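
As a rough illustration of the Data Vault 2.0 item above, the sketch below derives a hub and a satellite from a staged DataFrame in PySpark. The counterparty feed, column names, and the choice of SHA-256 hash keys and hashdiffs are assumptions meant only to show the shape of the pattern.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("dv2-sketch").getOrCreate()

    staged = spark.read.parquet("/data/staged/counterparty")  # assumed source
    load_ts = F.current_timestamp()
    source = F.lit("counterparty_feed")  # assumed record source

    # Deterministic hash key derived from the business key.
    hash_key = F.sha2(F.col("counterparty_id").cast("string"), 256)

    # Hub: one row per business key, identified by the hash key.
    hub = (
        staged.select("counterparty_id").dropDuplicates(["counterparty_id"])
        .withColumn("hub_counterparty_hk", hash_key)
        .withColumn("load_ts", load_ts)
        .withColumn("record_source", source)
    )

    # Satellite: descriptive attributes plus a hashdiff so unchanged rows can be
    # skipped on later loads; history accumulates append-only.
    sat = (
        staged
        .withColumn("hub_counterparty_hk", hash_key)
        .withColumn("hashdiff", F.sha2(
            F.concat_ws("||", F.col("legal_name"), F.col("country"), F.col("rating")), 256))
        .withColumn("load_ts", load_ts)
        .withColumn("record_source", source)
        .select("hub_counterparty_hk", "hashdiff", "legal_name", "country",
                "rating", "load_ts", "record_source")
    )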

Responsibilities

  • Design, build, and maintain large-scale historical data stores on modern big-data platforms
  • Develop robust, scalable data pipelines using PySpark / Spark for batch and incremental processing
  • Apply strong data-modelling principles (e.g. dimensional, Data Vault–style, or similar approaches) to support long-term historical analysis and regulatory requirements
  • Engineer high-quality, production-grade code with a focus on correctness, performance, testability, and maintainability
  • Optimize Spark workloads for performance and cost efficiency (partitioning, clustering, file layout, etc.); see the sketch after this list
  • Collaborate with architects and senior engineers to evolve platform standards, patterns, and best practices
  • Contribute to code reviews, technical design discussions, and continuous improvement of engineering practices
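
On the partitioning and file-layout point above, a small illustrative example: repartitioning by the partition column before the write groups each date's rows into a single task, avoiding the many small files a naively parallel write can produce. The table name, columns, and paths are hypothetical.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("layout-optimisation").getOrCreate()

    sensitivities = spark.read.parquet("/data/curated/sensitivities")  # assumed path

    (
        sensitivities
        # All rows for a given business_date land in one task, so each date
        # partition is written as one well-sized file instead of many fragments.
        .repartition(F.col("business_date"))
        # Sorting within partitions co-locates rows that are commonly filtered
        # together, which helps min/max pruning in columnar formats.
        .sortWithinPartitions("book", "risk_factor")
        .write.mode("overwrite")
        .partitionBy("business_date")
        .parquet("/data/optimised/sensitivities")
    )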

Benefits

  • Comprehensive health care coverage
  • On-site health and wellness centers
  • A retirement savings plan
  • Backup childcare
  • Tuition reimbursement
  • Mental health support
  • Financial coaching