Software Engineer [Multiple Positions Available]

JPMorgan Chase & Co. | Jersey City, NJ
$144,720 - $185,000 | Onsite

About The Position

Duties: Build and deploy applications and microservices for AI/ML workloads in both on-prem and public cloud environments. Build CI/CD pipelines for AI/ML applications, including machine learning pipelines that take model code through to inference-service deployment in the public cloud. Design and build highly available, scalable on-prem and public cloud infrastructure using an Infrastructure-as-Code approach. Ingest large amounts of data from public and internal company data sources, perform ETL, and store the data in data lakes and warehouses.

Requirements

  • Master's degree in Computer Science, Computer Engineering, Electrical/Electronics Engineering, Data Science, or related field of study plus one (1) year of experience in the job offered or as Software Engineer, Programmer Analyst, or related occupation; OR
  • Bachelor's degree in Computer Science, Computer Engineering, Electrical/Electronics Engineering, Data Science, or related field of study plus three (3) years of experience in the job offered or as Software Engineer, Programmer Analyst, or related occupation
  • Performance measurement in the financial services industry and translating quantitative information into actionable insights
  • Designing infrastructure for deploying and running machine learning models at large scale using tools including Terraform and AWS CDK
  • Building metrics to measure infrastructure and application performance, and setting up AWS CloudWatch monitors and alarms to observe performance
  • Working with data lakes using Amazon S3 and data warehouses including AWS Redshift
  • Accessing S3 and Redshift using Amazon Redshift Spectrum to query data in data lakes and data lakehouses
  • Using AWS services and CI/CD pipelines to deploy and maintain machine learning applications and services
  • Developing and maintaining dynamic, interactive dashboards using Tableau or Qlik Sense, leveraging advanced visualization, ETL automation, and ODBC connectors
  • Automating the production of recurring reports and dashboards using ETL techniques, SQL, and Robotic Process Automation techniques
  • Performing data manipulation, data structuring, data design flow, and query optimization using programming languages including SQL and Python
  • Processing large data sets using data containers, multithreading, and multiprocessing in PySpark and TensorFlow
  • Developing software or microservices that deploy as REST APIs
  • Using Amazon Kinesis and Kinesis Data Firehose to ingest large amounts of data, and AWS Glue to perform ETL
  • Developing and automating large-scale, high-performance data processing systems
  • Building scalable Spark data pipelines leveraging scheduler and executor frameworks

Benefits

  • comprehensive health care coverage
  • on-site health and wellness centers
  • a retirement savings plan
  • backup childcare
  • tuition reimbursement
  • mental health support
  • financial coaching