About The Position

We're seeking a Senior Data Engineer - Operations to support our data platform, with a strong focus on triaging, debugging, and operating production data pipelines. This role sits within the Data Platform Operations pillar and is responsible for the day-to-day health, reliability, and correctness of ingestion pipelines, transformations, and analytics workflows. You'll work hands-on across ingestion, orchestration, dbt transformations, and medallion-layer data models, partnering closely with data engineers, analytics engineers, and DevOps to ensure timely resolution of data issues and smooth platform operations.

Requirements

  • 7+ years of experience in data engineering, analytics engineering, or software development, with significant experience operating and supporting production data pipelines
  • Strong programming skills in Python and SQL, with hands-on experience on at least one major data platform (Snowflake, BigQuery, Redshift, or similar)
  • Experience supporting schema evolution, data contracts, and downstream consumers in production environments
  • Strong experience triaging, debugging, and maintaining dbt models, including understanding dependencies across medallion layers (bronze/silver/gold)
  • Experience with streaming, distributed compute, or S3-based table formats (Spark, Kafka, Iceberg/Delta/Hudi)
  • Experience with schema governance, metadata systems, and data quality frameworks
  • Hands-on experience operating and debugging orchestration workflows (Airflow, Dagster, Prefect), including retries, backfills, and dependency management
  • Solid grasp of CI/CD and Docker, plus 2 years of experience with AWS

Nice To Haves

  • Experience participating in on-call rotations, incident response, or data operations teams
  • Experience with data observability, data catalog, or metadata management tools
  • Experience working with healthcare data (X12, FHIR)
  • Understanding of authentication/authorization (OAuth2, JWT, SSO)

Responsibilities

  • Operational Enablement & Automation: Build and maintain automation, scripts, and lightweight tooling to support operational workflows, including pipeline triage, data validation, backfills, reprocessing, and quality checks. Improve self-service and reduce manual operational toil.
  • Pipeline Operations & Debugging: Own operational support for ingestion and transformation pipelines built on Airflow, Spark, dbt, Kafka, and Snowflake (or similar). Triage failed jobs, diagnose data issues, perform backfills, and coordinate fixes across ingestion, transformation, and analytics layers.
  • Observability, Data Quality & Incident Response: Monitor pipeline health, data freshness, and quality metrics across medallion layers. Investigate data anomalies, schema drift, and transformation failures, and drive incidents to resolution through root-cause analysis and corrective actions.
  • Cross-Functional Operations: Act as the primary interface between Data Platform, Analytics Engineering, and downstream consumers during operational issues. Communicate impact, coordinate fixes, and ensure timely resolution of data incidents.

Benefits

  • Fully remote within the contiguous United States, full-time (40 hours/week)
  • Stable, long-term independent contract agreement
  • Work hours aligned with US Eastern Time

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: None listed
  • Number of Employees: 51-100 employees
