Engineer II, Data

Ensemble Health Partners
Work at Home - Ohio - Other, OH
Remote

About The Position

Thank you for considering a career at Ensemble! Ensemble is a leading provider of technology-enabled revenue cycle management solutions for health systems, including hospitals and affiliated physician groups. We offer end-to-end revenue cycle solutions as well as a comprehensive suite of point solutions to clients across the country. Ensemble keeps communities healthy by keeping hospitals healthy. We recognize that healthcare requires a human touch, and we believe that every touch should be meaningful. This is why our people are the most important part of who we are. By empowering them to challenge the status quo, we know they will be the difference!

O.N.E Purpose:

  • Customer Obsession: Consistently provide exceptional experiences for our clients, patients, and colleagues by understanding their needs and exceeding their expectations.
  • Embracing New Ideas: Continuously innovate by embracing emerging technology and fostering a culture of creativity and experimentation.
  • Striving for Excellence: Execute at a high level by demonstrating our “Best in KLAS” Ensemble Difference Principles and consistently delivering outstanding results.

The Opportunity:

By embodying our core purpose of customer obsession, embracing new ideas, and striving for excellence, you will help ensure that every touchpoint is meaningful and contributes to our mission of redefining the possible in healthcare.

The Engineer II, Data is primarily responsible for building and operating reliable data integrations and data platform capabilities that move sensitive healthcare data securely and consistently. This includes creating new source ingestion feeds and interfaces; maintaining and troubleshooting existing interfaces; improving ingestion, validation, and monitoring; and partnering with stakeholders to deliver analytics, warehousing, and reporting solutions.
This role also contributes to platform engineering practices by automating environments, standardizing deployments, and improving observability and reliability for data services.

Requirements

  • 2+ years of coding experience with Microsoft SQL
  • 1+ years working with big data technologies including but not limited to Databricks, Apache Spark, Python, Microsoft Azure (Data Factory, Dataflows, Azure Functions, Azure Service Bus) with a willingness and ability to learn new ones
  • Understanding of engineering fundamentals: test automation, code reviews, telemetry, iterative delivery, and DevOps
  • Experience with polyglot storage architectures including relational, columnar, key-value, graph or equivalent
  • Some experience with BI tools, preferably Power BI
  • Some experience with cloud-based infrastructure, preferably Microsoft Azure
  • Experience with infrastructure as code and configuration management (e.g., Terraform) and environment standardization
  • Familiarity with operational excellence practices such as incident response and runbooks
  • Understanding of security fundamentals for cloud platforms (identity and access management, secrets management, encryption, and audit logging)
  • Experience packaging and running services using containers and orchestration (e.g., Docker, Kubernetes) and/or serverless patterns
  • Experience with CI/CD, version control, and release automation for data pipelines and platform components (e.g., Azure DevOps/GitHub Actions)
  • Very strong attention to detail
  • Must be inquisitive and open to innovation, including AI, exploring better processes and ways to reduce friction and improve patient and client experiences
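
As context for the flat-file ingestion and validation work this role involves, a minimal sketch in plain Python (the field names and pipe-delimited format below are hypothetical; the posting does not prescribe a schema or library):

```python
import csv
import io

def validate_row(row: dict) -> list[str]:
    """Return a list of validation errors for one ingested record."""
    errors = []
    if not row.get("patient_id"):
        errors.append("missing patient_id")
    if not row.get("service_date"):
        errors.append("missing service_date")
    return errors

def ingest_flat_file(text: str) -> tuple[list[dict], list[tuple[int, list[str]]]]:
    """Parse a pipe-delimited flat file, splitting rows into valid and rejected."""
    valid, rejected = [], []
    reader = csv.DictReader(io.StringIO(text), delimiter="|")
    for line_no, row in enumerate(reader, start=2):  # header occupies line 1
        errors = validate_row(row)
        if errors:
            rejected.append((line_no, errors))
        else:
            valid.append(row)
    return valid, rejected

sample = "patient_id|service_date|amount\nA123|2024-01-05|150.00\n|2024-01-06|75.00\n"
valid, rejected = ingest_flat_file(sample)
```

In a production interface, the rejected rows would typically be routed to monitoring and reconciliation rather than silently dropped.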

Responsibilities

  • Implement changes and enhancements in our internal flat file and message processing services
  • Communicate with our clients and their technical stakeholders via meetings, calls, and emails regarding our data integrations
  • Assist other members of the organization with data-related inquiries and issues as they arise
  • Automate and standardize data platform deployments using infrastructure as code and CI/CD to improve consistency, security, and release velocity
  • Aid in the design and development of analytics projects as needed, such as modifying reporting queries and creating new data visualizations
  • Create new data warehousing objects, views, and procedures as needed to further streamline and enhance our reporting and visualization capabilities
  • Discover opportunities for the organization to improve its systems, enterprises, and processes through the use of data analytics
  • Maintain documentation related to datasets and analysis, and ensure that everyone on the data team uses the same language and definitions
  • Consistently apply generative AI in day-to-day engineering work, using it to accelerate development, improve code quality, troubleshoot complex systems, and design scalable data solutions
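
To make the warehousing responsibility concrete: creating a reporting view usually means wrapping a reusable query over base tables so dashboards query the view instead of raw data. The posting targets Microsoft SQL, but as a self-contained sketch (with hypothetical table and column names) the same idea in SQLite via Python's standard library:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE claims (claim_id TEXT, facility TEXT, amount REAL, status TEXT);
    INSERT INTO claims VALUES
        ('C1', 'North', 100.0, 'paid'),
        ('C2', 'North', 250.0, 'denied'),
        ('C3', 'South', 300.0, 'paid');

    -- A reporting view that downstream visualizations can query directly.
    CREATE VIEW v_paid_by_facility AS
        SELECT facility, SUM(amount) AS total_paid
        FROM claims
        WHERE status = 'paid'
        GROUP BY facility;
""")

rows = conn.execute(
    "SELECT facility, total_paid FROM v_paid_by_facility ORDER BY facility"
).fetchall()
```

Encapsulating the filter and aggregation in a view keeps reporting logic in one place, which is what makes definitions consistent across the data team.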

Benefits

  • Healthcare
  • Time off
  • Retirement
  • Well-being programs
  • Professional development
  • Tuition reimbursement
  • Quarterly and annual incentive programs