About The Position

R1 is the leading provider of technology-driven solutions that transform the patient experience and financial performance of hospitals, health systems, and medical groups. We are the one company that combines the deep expertise of a global workforce of revenue cycle professionals with the industry’s most advanced technology platform, encompassing sophisticated analytics, AI, intelligent automation, and workflow orchestration. As our Software Engineer III, you will join the engineering team responsible for building the state-of-the-art data platform foundation for the company. Each day, you will design, develop, and maintain software applications that handle and process large volumes of data. To thrive in this role, you must have strong problem-solving and analytical skills, along with excellent verbal and written communication skills. The day-to-day work of this role is detailed in the Responsibilities section below.

Requirements

  • 4+ years' work experience in the software engineering or data engineering domain
  • Expert knowledge and experience working with Scala or PySpark
  • Experience working with modern data pipeline orchestration tools to create complex ETL pipeline jobs
  • Experience working with SQL and NoSQL database systems
  • Experience in distributed system architecture design
  • Experience with acquiring and preparing data from primary and secondary disparate data sources
  • Experience with cloud environments (Azure preferred)
  • Experience working with Databricks
  • Strong problem-solving, analytical thinking, and communication skills

Nice To Haves

  • Healthcare industry experience

Responsibilities

  • Designing, developing, and maintaining software applications that handle and process large volumes of data.
  • Collaborating with cross-functional teams to understand data requirements and develop software solutions that effectively integrate and utilize data.
  • Building and optimizing data models and databases for performance and efficiency.
  • Writing code to extract, transform, and load data from various sources into data warehouses or data lakes.
  • Implementing data quality checks and data governance processes to ensure data accuracy and consistency.
  • Troubleshooting and resolving software and data-related issues.
  • Working with big data technologies such as Hadoop, Spark, and Kafka.
  • Conducting performance testing and optimization of software applications that handle large datasets.

Benefits

  • Annual bonus plan