About The Position

At OMNY Health, we are bridging the gap between clinical complexity and life-saving research. We are looking for a Data Engineer II who is passionate about the intersection of healthcare and technology. Your primary mission will be to architect and scale the pipelines that transform raw, messy clinical data into a high-fidelity, de-identified, research-ready data product. You will be a key player in managing our hybrid data landscape, extracting insights from our data sources, and curating them within our BigQuery and Snowflake environments. This role is about ensuring privacy at scale while maintaining the scientific utility of the data powering the next generation of medical breakthroughs.

Requirements

  • 3-5+ years in Data Engineering, with a focus on building production-grade healthcare pipelines.
  • Hands-on experience with Google Cloud Platform, specifically Cloud SQL and BigQuery.
  • Deep expertise in BigQuery and Snowflake architectures, including performance tuning and secure data sharing.
  • Expert-level Python and SQL.
  • Proven experience with Argo Workflows/Events for containerized orchestration.
  • Mastery of dbt for maintaining the transformation layer.
  • Experience using SODA (or Great Expectations) to define and enforce data contracts.
  • Understanding of HIPAA regulations and encryption standards.
  • The "Curator" Mindset: You don't just move data; you care about its meaning. You understand that a "null" in a lab result is a clinical signal, not just a missing string.
  • Adaptability: You thrive in the "zero-to-one" phase where documentation might be thin, but the impact is massive.
  • Collaborative Spirit: You can speak "Data" to engineers and "Insight" to clinical researchers.
  • Familiarity with the trade-offs between data utility and privacy—specifically how to handle dates, zip codes, and unique identifiers in a way that satisfies both statisticians and compliance officers.
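To make the last point concrete, here is a minimal sketch of the kind of Safe Harbor transformations involved (function names and the restricted-prefix list are illustrative, not OMNY's actual implementation): Safe Harbor retains only the year of dates, truncates ZIP codes to their three-digit prefix (zeroing out sparsely populated prefixes), and buckets ages over 89.

```python
from datetime import date

# Three-digit ZIP prefixes covering populations of 20,000 or fewer must be
# replaced with "000" under Safe Harbor. This is an illustrative subset;
# the authoritative list is derived from Census data.
RESTRICTED_ZIP3 = {"036", "059", "102", "203", "556", "692", "821", "878"}

def deidentify_zip(zip_code: str) -> str:
    """Truncate a 5-digit ZIP to its 3-digit prefix, zeroing restricted prefixes."""
    prefix = zip_code[:3]
    return "000" if prefix in RESTRICTED_ZIP3 else prefix

def deidentify_date(d: date) -> int:
    """Safe Harbor retains only the year of dates tied to an individual."""
    return d.year

def deidentify_age(age: int) -> str:
    """Ages over 89 collapse into a single '90+' bucket."""
    return "90+" if age > 89 else str(age)
```

The statistician's complaint with these rules is exactly the trade-off named above: year-only dates destroy event ordering within a year, which is why Expert Determination approaches (e.g., consistent per-patient date shifting) are often preferred for longitudinal research.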

Nice To Haves

  • Familiarity with healthcare-specific data challenges (ICD-10, FHIR, or provider-specific MS SQL schemas) is a significant plus.

Responsibilities

  • Design, build, and maintain robust ETL/ELT pipelines to ingest structured and unstructured healthcare data into our BigQuery and Snowflake warehouses.
  • Lead the development of modular, high-performance transformations using stored procedures and dbt (data build tool) to map raw clinical data to standardized research schemas in our Common Data Model (CDM).
  • Deploy and manage complex workflows using Argo, ensuring high availability and fault tolerance within our GCP ecosystem.
  • Implement "trust-but-verify" frameworks using SODA to monitor clinical data integrity, ensuring every record in our research product is validated and compliant.
  • Implement and automate sophisticated de-identification protocols (Safe Harbor or Expert Determination methods) to ensure HIPAA compliance while preserving the longitudinal continuity of patient records.
  • Architect scalable data models (Common Data Model) that allow researchers to query complex patient journeys with ease.
  • Collaborate with DevOps to manage cloud-native data infrastructure, ensuring high availability and rigorous security controls.
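The "trust-but-verify" checks above can be illustrated in plain Python (in practice they would be declared in SODA or Great Expectations configuration, but the contract being enforced is the same; field names here are hypothetical):

```python
# Illustrative data-contract checks for a batch of lab-result rows.
# A null result value is treated as a clinical signal: it must carry an
# explicit status (e.g., "not_performed"), never pass through silently.

def check_lab_results(rows: list[dict]) -> dict:
    """Return counts of contract violations for a batch of lab-result rows."""
    violations = {"missing_patient_id": 0, "null_result_unflagged": 0}
    for row in rows:
        if not row.get("patient_id"):
            violations["missing_patient_id"] += 1
        if row.get("result_value") is None and not row.get("result_status"):
            violations["null_result_unflagged"] += 1
    return violations
```

A pipeline step like this typically fails the batch (or routes it to quarantine) when any count is nonzero, so only validated records reach the research product.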

Benefits

  • Impact: Your work directly enables researchers to find cures and improve patient outcomes.
  • Innovation: We are tackling the hardest problem in health tech: making data usable without sacrificing privacy.
  • Growth: As an early hire, you will have a front-row seat (and a steering wheel) in building our engineering culture.

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: No Education Listed
  • Number of Employees: 11-50 employees

© 2024 Teal Labs, Inc