About The Position

At OMNY Health, we are bridging the gap between clinical complexity and life-saving research. We are looking for a Data Engineer II who is passionate about the intersection of healthcare and technology. Your primary mission will be to architect and scale the pipelines that transform raw, messy clinical data into a high-fidelity, de-identified, research-ready data product. You will be a key player in managing our hybrid data landscape, extracting data from diverse source systems and curating it within our BigQuery and Snowflake environments. This role is about ensuring privacy at scale while maintaining the scientific utility of the data powering the next generation of medical breakthroughs.

Requirements

  • Experience: 3-5+ years in Data Engineering, with a focus on building production-grade healthcare pipelines.
  • GCP & Storage: Hands-on experience with Google Cloud Platform, specifically Cloud SQL and BigQuery.
  • Warehousing: Deep expertise in BigQuery and Snowflake architectures, including performance tuning and secure data sharing.
  • Code: Expert-level Python and SQL.
  • Orchestration: Proven experience with Argo Workflows/Events for containerized orchestration.
  • Transformations: Mastery of dbt for maintaining the transformation layer.
  • Quality Assurance: Experience using SODA (or Great Expectations) to define and enforce data contracts.
  • Security Mindset: Understanding of HIPAA regulations and encryption standards.
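
To make the "data contracts" requirement concrete: tools such as SODA and Great Expectations declare checks like the ones below in their own configuration formats. This is a plain-Python sketch of the same idea, not either tool's actual API; the table and column names (patient_id, icd10_code, encounter_id) are hypothetical.

```python
# Illustrative data-contract checks of the kind SODA or Great Expectations
# enforce declaratively; plain stdlib Python, hypothetical column names.
import re

# Simplified ICD-10 code shape: letter, two alphanumerics, optional extension.
ICD10_PATTERN = re.compile(r"^[A-TV-Z][0-9][0-9A-Z](\.[0-9A-Z]{1,4})?$")

def check_encounters(rows):
    """Return a list of contract-violation messages for a batch of encounter rows."""
    violations = []
    seen_ids = set()
    for i, row in enumerate(rows):
        # Contract 1: patient_id must be present and non-empty.
        if not row.get("patient_id"):
            violations.append(f"row {i}: missing patient_id")
        # Contract 2: icd10_code must match the expected ICD-10 format.
        code = row.get("icd10_code", "")
        if not ICD10_PATTERN.match(code):
            violations.append(f"row {i}: invalid ICD-10 code {code!r}")
        # Contract 3: encounter_id must be unique within the batch.
        eid = row.get("encounter_id")
        if eid in seen_ids:
            violations.append(f"row {i}: duplicate encounter_id {eid!r}")
        seen_ids.add(eid)
    return violations
```

In a production pipeline these rules would live in versioned check definitions and gate promotion of a load into the research warehouse, rather than running ad hoc.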

Nice To Haves

  • Healthcare Domain: Familiarity with healthcare-specific data challenges (ICD-10, FHIR, or provider-specific MS SQL schemas) is a significant plus.

Responsibilities

  • Pipeline Development: Design, build, and maintain robust ETL/ELT pipelines to ingest structured and unstructured healthcare data into our BigQuery and Snowflake warehouses.
  • Modern Transformations: Lead the development of modular, high-performance transformations using stored procedures and dbt (data build tool) to map raw clinical data to standardized research schemas in our Common Data Model (CDM).
  • Cloud-Native Orchestration: Deploy and manage complex workflows using Argo, ensuring high availability and fault tolerance within our GCP ecosystem.
  • Automated Data Quality: Implement "trust-but-verify" frameworks using SODA to monitor clinical data integrity, ensuring every record in our research product is validated and compliant.
  • De-identification & Privacy: Implement and automate sophisticated de-identification protocols (Safe Harbor or Expert Determination methods) to ensure HIPAA compliance while preserving data longitudinality.
  • Data Modeling: Architect scalable data models (Common Data Model) that allow researchers to query complex patient journeys with ease.
  • Infrastructure: Collaborate with DevOps to manage cloud-native data infrastructure, ensuring high availability and rigorous security controls.
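
The de-identification responsibility above hinges on one subtle requirement: removing identifiers while "preserving data longitudinality." A common way to achieve this is a consistent per-patient date shift, so intervals between a patient's visits survive even though the actual dates do not. The sketch below illustrates that idea under stated assumptions; the field names, secret handling, and tokenization scheme are hypothetical, not OMNY Health's actual protocol.

```python
# Sketch of one Safe Harbor-style step: replace the direct identifier with a
# pseudonymous token and shift all of a patient's dates by a deterministic
# per-patient offset, preserving intervals between visits (longitudinality).
# Field names and the secret are illustrative placeholders.
import hashlib
import hmac
from datetime import date, timedelta

SECRET = b"rotate-me"  # placeholder pepper; use a secrets manager in practice

def patient_offset(patient_id: str, max_days: int = 365) -> timedelta:
    """Derive a deterministic date shift (1..max_days days) from the patient id."""
    digest = hmac.new(SECRET, patient_id.encode(), hashlib.sha256).digest()
    days = int.from_bytes(digest[:4], "big") % max_days + 1
    return timedelta(days=days)

def deidentify(record: dict) -> dict:
    """Return a de-identified copy of an encounter record."""
    shift = patient_offset(record["patient_id"])
    token = hmac.new(SECRET, record["patient_id"].encode(),
                     hashlib.sha256).hexdigest()[:16]
    return {
        "patient_token": token,                      # pseudonym replaces raw id
        "visit_date": record["visit_date"] + shift,  # shifted, interval-preserving
        "icd10_code": record["icd10_code"],          # clinical content kept
        # Direct identifiers (name, address, phone, ...) are simply dropped.
    }
```

Because the offset is derived from the patient id, two visits by the same patient shift by the same amount, so the gap between them is unchanged; a full Expert Determination approach would additionally bound re-identification risk statistically.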

Benefits

  • Impact: Your work directly enables researchers to find cures and improve patient outcomes.
  • Innovation: We are tackling the hardest problem in health tech: making data usable without sacrificing privacy.
  • Growth: As an early hire, you will have a front-row seat (and a steering wheel) in building our engineering culture.

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: No Education Listed
  • Number of Employees: 11-50 employees

© 2024 Teal Labs, Inc