Senior Data Engineer

Apartment List
$113,000 - $158,400

About The Position

We are seeking a Senior Data Engineer (L3) to help build and operate the reliable, scalable data systems that power analytics, experimentation, and decision-making across Apartment List. In this role, you will be a strong end-to-end executor responsible for delivering production-grade data pipelines and workflows that meet defined service-level agreements (SLAs) for freshness, quality, and cost.

You will work closely with Analytics Engineering, Data Science, Product, and Engineering partners to deliver durable data platform improvements that support company-wide initiatives. The ideal candidate is comfortable owning medium-sized data platform projects end-to-end, from design through deployment and operational support, while working within established platform architecture and engineering patterns.

This role emphasizes execution excellence, reliability, and operational ownership within existing platform standards. Platform-level architecture and system design are owned at more senior levels, but this role plays a critical part in ensuring that the platform functions reliably at scale.

Requirements

  • 5+ years of experience in data engineering, with a track record of delivering reliable production data pipelines and systems.
  • Strong experience designing and maintaining orchestration workflows using Apache Airflow.
  • Experience building scalable data pipelines on modern cloud data platforms such as BigQuery, using transformation tools such as dbt.
  • Strong understanding of data modeling, schema design, and building maintainable, modular data systems.
  • Experience implementing CI/CD best practices for data pipelines, including automated testing, validation, and deployment workflows to ensure reliable and repeatable production releases.
  • Experience monitoring and operating production data systems, including pipeline observability, data quality checks, and incident response.
  • Ability to identify performance and cost risks in large-scale data systems and implement optimizations.
  • Strong collaboration skills and experience working with cross-functional partners including analytics engineers, data scientists, and product teams.
  • Proven ability to independently execute medium-sized projects and deliver reliable, production-grade systems.

Nice To Haves

  • Familiarity with Kubernetes-based data infrastructure, including deploying and operating containerized data services and workflows in a production environment.
  • Experience migrating legacy ETL systems to modern orchestration frameworks.
  • Familiarity with observability and monitoring tools for data pipelines (e.g., Datadog or similar).
  • Experience operating data systems with strict SLAs for freshness, reliability, and cost efficiency.

Responsibilities

  • Design, build, test, and deploy scalable and reliable data pipelines that power analytics and product decision-making.
  • Own medium-sized data platform initiatives end-to-end, from initial design through production deployment and operational support.
  • Design, migrate, and maintain data workflows in Apache Airflow, including supporting migration of legacy ETL systems to modern orchestration patterns.
  • Ensure pipeline reliability by proactively monitoring workflow SLAs for freshness, quality, and performance, and resolving failures efficiently.
  • Implement and utilize monitoring systems to detect pipeline failures, schema drift, and data quality anomalies.
  • Participate in on-call rotations and contribute to incident response and root cause analysis for data incidents.
  • Apply best practices in warehouse performance and cost optimization, including partitioning, clustering, and efficient data modeling to control BigQuery spend.
  • Build maintainable, modular data models and pipelines using reusable patterns and shared components across the team.
  • Partner closely with Analytics Engineering, Data Science, and business stakeholders to deliver durable improvements across the ingestion, transformation, modeling, and serving layers of the data platform.
  • Contribute to operational excellence through documentation, monitoring improvements, and participation in postmortems and reliability initiatives.