Senior Data Engineer

Formation Bio
Boston, NY
Hybrid

About The Position

We're looking for a Senior Data Engineer to join the Scientific Data Intelligence (SDI) team at Formation Bio to help build and scale the data infrastructure that powers our drug development platform. In this role, your primary focus will be supporting the Data Science team: building the coherent, well-structured data models, feature engineering pipelines, and evidence layers that underpin their scientific work. You'll also be responsible for ingesting data from a wide variety of sources (APIs, files, databases, and licensed third-party datasets) and transforming it into clean, analytics-ready assets using modern data tooling.

This is a foundational engineering role for someone who takes pride in building things right: reliable pipelines, sound data models, and systems that others can trust and build on. You'll work closely with Data Scientists and other engineers across the SDI team to ensure that high-quality data is consistently available where and when it's needed, and structured in a way that directly accelerates scientific discovery.

The ideal candidate is deeply fluent in Snowflake and dbt, has strong opinions about data modeling best practices, and has experience handling large and complex datasets across diverse ingestion patterns. You thrive in environments where data quality and engineering rigor are treated as first-class concerns.

Requirements

  • You have 5+ years of experience in data engineering, with a strong track record of building and maintaining production-grade pipelines.
  • You are deeply fluent in dbt and follow best practices rigorously—including modular model design, sources and refs, testing (schema + data), documentation, and environment management.
  • You have hands-on expertise with Snowflake, including schema design, performance tuning, and data governance patterns.
  • You have experience with at least one modern orchestration tool; we use Dagster, but experience with Airflow or Prefect is equally welcome (see the ingestion sketch after this list).
  • You have broad ingestion experience across source types: REST APIs, flat files (CSV, JSON, Parquet), relational databases, and vendor-licensed datasets.
  • You have worked with large datasets (TB to PB scale) and understand the engineering considerations that come with scale: partitioning, incremental loading, efficient data movement, and storage optimization.
  • You're a strong data modeler who thinks carefully about how data is structured, named, and layered for downstream usability.
  • You value documentation, testability, and building things others can maintain and extend.
  • You can balance upfront design with speed to execution, slowing down when it counts without getting stuck in the details.
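
To make the orchestration and ingestion requirements above concrete, here is a minimal sketch of a Dagster asset that pulls records from a REST API and lands them as Parquet. The endpoint, payload shape, and asset name are hypothetical illustrations, not part of Formation Bio's actual stack.

    import requests
    import pandas as pd
    from dagster import asset, MaterializeResult

    API_URL = "https://api.example.com/v1/trials"  # hypothetical endpoint

    @asset
    def raw_trials() -> MaterializeResult:
        """Pull records from a REST source and land them as Parquet."""
        response = requests.get(API_URL, timeout=30)
        response.raise_for_status()

        # Assumes a JSON payload with a top-level "results" array.
        df = pd.DataFrame(response.json()["results"])
        df.to_parquet("raw_trials.parquet", index=False)

        # Surface row counts in the Dagster UI for observability.
        return MaterializeResult(metadata={"row_count": len(df)})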

Nice To Haves

  • You have experience with data quality and observability tooling such as Elementary, Great Expectations, or similar frameworks, and you treat data quality as a first-class engineering concern rather than an afterthought (see the validation sketch after this list).
  • You have experience with Spark or Databricks for large-scale data processing workloads.
  • You have experience with large-scale data transfer tooling such as AWS DataSync, AWS S3 Transfer Acceleration, or equivalent cloud-native data movement services.
  • You have experience in healthcare or life sciences data environments, including familiarity with EHR, claims, or other biomedical datasets.
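
Whatever the framework, the core move behind the data quality tooling mentioned above is the same: assert invariants (uniqueness, completeness, reasonable null rates) before a dataset is published downstream. A framework-agnostic sketch, with hypothetical column names and thresholds:

    import pandas as pd

    def validate_trials(df: pd.DataFrame) -> list[str]:
        """Return human-readable failures; an empty list means the data passed."""
        failures = []  # assumes the expected schema is present
        if df.empty:
            failures.append("dataset is empty")
        if df["trial_id"].duplicated().any():  # hypothetical primary key
            failures.append("duplicate trial_id values")
        null_rate = df["enrollment"].isna().mean()  # hypothetical column
        if null_rate > 0.05:
            failures.append(f"enrollment null rate {null_rate:.1%} exceeds 5%")
        return failures

Tools like Elementary or Great Expectations formalize exactly these checks, adding declarative configuration, scheduling, and alerting.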

Responsibilities

  • Partner closely with Data Scientists and other teams to build coherent data models, feature engineering pipelines, and structured evidence layers that directly support scientific analysis and machine learning workflows.
  • Design and build scalable ingestion pipelines to onboard data from diverse sources including REST APIs, flat files, relational databases, and licensed third-party datasets.
  • Develop and maintain robust data models in dbt, adhering to best practices around modularity, testing, documentation, and layered architecture (staging, intermediate, mart); a short sketch follows this list.
  • Orchestrate pipelines using Dagster to ensure reliable, observable, and maintainable workflows.
  • Implement data quality checks, validation frameworks, and monitoring to ensure trustworthiness of datasets across the platform.
  • Collaborate with Data Scientists and analysts to understand data needs and translate them into well-structured, reusable models.
  • Handle large-scale data movement and transfer scenarios, applying appropriate tooling and patterns (such as the incremental-loading sketch after this list) to ensure efficiency and reliability at scale.
  • Document data models, pipeline logic, and transformation assumptions to support discoverability and data governance.
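
The layered dbt architecture described above (staging, intermediate, mart) is easiest to see in a model file. dbt models are typically SQL, but to keep a single language across these sketches, the example below uses dbt's Python model API, which Snowflake supports via Snowpark; the model and column names are hypothetical.

    # models/intermediate/int_trials_enriched.py: a hedged sketch, not a real model
    def model(dbt, session):
        dbt.config(materialized="table")

        # dbt.ref() wires this model into the DAG downstream of the staging layer.
        trials = dbt.ref("stg_trials")  # hypothetical staging model
        sites = dbt.ref("stg_sites")    # hypothetical staging model

        # Join staging models into an analysis-ready intermediate table
        # that a mart-layer model can then aggregate.
        return trials.join(sites, trials["site_id"] == sites["site_id"])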
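
One pattern behind the incremental loading referenced above is a persisted high-water mark: record the latest timestamp successfully loaded and fetch only newer rows on the next run. A minimal sketch with hypothetical table and column names (real state would live in a database or the orchestrator, not a local file):

    import json
    from pathlib import Path

    STATE_FILE = Path("watermark.json")  # stand-in for a real state store

    def load_watermark() -> str:
        if STATE_FILE.exists():
            return json.loads(STATE_FILE.read_text())["updated_at"]
        return "1970-01-01T00:00:00Z"  # first run: load everything

    def save_watermark(ts: str) -> None:
        STATE_FILE.write_text(json.dumps({"updated_at": ts}))

    def incremental_query(watermark: str) -> str:
        # Bound each load to rows modified since the last successful run.
        # (A real pipeline would bind parameters rather than interpolate.)
        return f"SELECT * FROM source.events WHERE updated_at > '{watermark}'"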

Benefits

  • In addition to base salary, we offer equity, comprehensive benefits, generous perks, hybrid flexibility, and more.