Software Engineer [Multiple Positions Available]

JPMorganChase
Jersey City, WA
Onsite

About The Position

This Software Engineer position involves designing and developing high-volume data pipelines and web services that offer data via API, and architecting distributed computation and parallel processing data flows. The engineer is responsible for SRE and DevOps, including defining SLIs and SLAs for all layers of the system, and partners with upstream and Systems of Record groups to establish data sourcing. Additional duties include performing data research, engineering, and analysis, and implementing web annotation components and reactive applications for multiple treasury services. Key tasks include creating test cases, loading and transforming data from big data platforms into downstream database systems, drafting data models and API contracts for multiple products, and working to ensure redundancy, fault tolerance, and an uninterrupted customer experience.

Requirements

  • Minimum education and experience required: Master's degree in Computer Science or a related field of study, plus two (2) years of experience in the job offered or in a Software Development, Application Developer, Software Development & Research, Global Product & Technology, or related occupation
  • Experience with building AWS EKS front-end applications that utilize ReactJS and JavaScript to support critical product initiatives
  • Experience with developing enterprise-scale RESTful APIs by utilizing Java, Spring Boot, and Maven to resolve application dependencies
  • Experience with building telemetry by using Java and creating metrics-tracking dashboards in Datadog to improve stability and resiliency
  • Experience with performing data manipulation, data structuring, data flow design, and query optimization using SQL and Python
  • Experience with designing exploratory data analysis within large enterprise databases to extract, clean, and transform data by using data modeling and Python
  • Experience with developing and automating Spark data processing engines to drive and improve the product experience by using PySpark
  • Experience with building and loading data from AWS S3 into AWS Redshift and DynamoDB databases by using Python

Responsibilities

  • Design and develop high-volume data pipelines
  • Design and develop web services to offer our data via API
  • Architect distributed computation and parallel processing data flows
  • Responsible for SRE and DevOps
  • Define SLIs and SLAs for all layers of the system
  • Partner with upstream and Systems of Record groups to establish data sourcing
  • Perform data research, engineering and analysis
  • Implement web annotation components
  • Implement reactive applications for multiple treasury services
  • Create test cases
  • Load and transform data from big data platforms into downstream database systems
  • Draft data models and API contracts for multiple products
  • Work to ensure redundancy, fault tolerance, and uninterrupted customer experience

Benefits

  • Competitive total rewards package including base salary determined based on the role, experience, skill set, and location
  • Discretionary incentive compensation which may be awarded in recognition of individual achievements and contributions
  • Comprehensive health care coverage
  • On-site health and wellness centers
  • A retirement savings plan
  • Backup childcare
  • Tuition reimbursement
  • Mental health support
  • Financial coaching