Senior Software Engineer, Data Pipelines

Ginkgo Bioworks Inc., Boston, MA

About The Position

Our mission is to make biology easier to engineer. Ginkgo is constructing, editing, and redesigning the living world to address growing global challenges in health, energy, food, materials, and more. Our bioengineers use an in-house automated foundry to design and build new organisms.

Ginkgo Biosecurity is building next-generation biosecurity infrastructure to help governments and partners detect, attribute, and deter biological threats. Our mission spans public health, national security, and global defense, ensuring nations can rapidly identify dangerous pathogens, understand where threats originate, and respond with confidence.

On our Biosecurity team, you will be a software engineer focused on building and operating critical biosecurity data systems. You will design reliable data pipelines and data models, productionize analytics, and ensure data quality across programs spanning PCR, sequencing, wastewater, biosurveillance, and large-scale environmental monitoring. This role requires strong software engineering fundamentals, including system design, testing, and code quality, applied to data infrastructure challenges. You will work primarily on backend data systems: designing data warehouses, building ETL/ELT pipelines, and managing data architecture. The role combines platform engineering (e.g., orchestration with Airflow, observability, infrastructure-as-code) with analytics engineering (SQL modeling, testing, documentation) to deliver reliable data products that support threat detection, pathogen attribution, and operational decision-making.

Requirements

  • 7+ years of professional experience in data or software engineering, with a focus on building production-grade data products and scalable architectures
  • Expert proficiency with SQL for complex transformations, performance tuning, and query optimization
  • Strong Python skills for data engineering workflows, including pipeline development, ETL/ELT processes, and data processing; experience with backend frameworks (FastAPI, Flask) for API development; a focus on writing modular, testable, and reusable code (see the illustrative sketch after this list)
  • Proven experience with dbt for data modeling and transformation, including testing frameworks and documentation practices
  • Hands-on experience with cloud data warehouses (Snowflake, BigQuery, or Redshift), including performance tuning, security hardening, and managing complex schemas
  • Experience with workflow orchestration tools (Airflow, Dagster, or equivalent) for production data pipelines, including DAG development, scheduling, monitoring, and troubleshooting
  • Solid grounding in software engineering fundamentals: system design, version control (Git), CI/CD pipelines, containerization (Docker), and infrastructure-as-code (Terraform, CloudFormation)
  • Hands-on experience managing AWS resources, including S3, IAM roles/policies, API integrations, and security configurations
  • Strong ability to analyze large datasets, identify data quality issues, debug pipeline failures, and propose scalable solutions
  • Excellent communication skills and ability to work cross-functionally with scientists, analysts, and product teams to turn ambiguous requirements into maintainable data products
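
The modular, testable Python style called for above might look something like the following minimal sketch. All record fields, thresholds, and function names here are illustrative assumptions, not Ginkgo's actual schema or code.

```python
# Minimal sketch: a pure, testable transformation of the kind described above.
# Field names ("sample_id", "ct_value") and the Ct threshold are illustrative
# assumptions, not an actual Ginkgo schema.
from dataclasses import dataclass
from typing import Iterable, Iterator, Optional


@dataclass(frozen=True)
class PcrResult:
    sample_id: str
    target: str
    ct_value: Optional[float]  # None when the target was not detected
    detected: bool


def normalize_pcr_rows(rows: Iterable[dict]) -> Iterator[PcrResult]:
    """Turn loosely typed source rows into validated records, skipping incomplete ones."""
    for row in rows:
        sample_id = str(row.get("sample_id", "")).strip()
        target = str(row.get("target", "")).strip().upper()
        if not sample_id or not target:
            continue  # in production this row would be logged or quarantined
        raw_ct = row.get("ct_value")
        ct = float(raw_ct) if raw_ct not in (None, "", "NA") else None
        yield PcrResult(sample_id, target, ct, detected=ct is not None and ct < 40.0)


def test_normalize_pcr_rows_skips_incomplete_rows():
    rows = [
        {"sample_id": "S1", "target": "sars-cov-2", "ct_value": "31.2"},
        {"sample_id": "", "target": "FLU-A", "ct_value": "28.0"},  # dropped: no sample_id
        {"sample_id": "S2", "target": "FLU-A", "ct_value": "NA"},  # kept: not detected
    ]
    results = list(normalize_pcr_rows(rows))
    assert [r.sample_id for r in results] == ["S1", "S2"]
    assert results[0].detected and not results[1].detected
```

A function like this stays trivially unit-testable because it takes plain rows in and yields typed records out, with I/O handled elsewhere in the pipeline.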

Nice To Haves

  • Domain familiarity with biological data (PCR, sequencing, wastewater surveillance, turnaround-time (TAT) metrics) and experience working with lab, bioinformatics, NGS, or epidemiology teams
  • Production ownership of Snowflake environments including RBAC, secure authentication patterns, and cost/performance optimization
  • Experience with observability and monitoring stacks (Grafana, Datadog, or similar) and data quality monitoring (anomaly detection, volume/velocity checks, schema drift detection), as sketched after this list
  • Familiarity with container orchestration platforms (Kubernetes) for managing production workloads
  • Experience with data ingestion frameworks (Airbyte, Fivetran) or building custom ingestion solutions for external partner data delivery
  • Familiarity with data cataloging, governance practices, and reference data management to prevent silent data drift
  • Experience designing datasets for visualization tools (Tableau, Looker, Metabase) with strong understanding of dashboard consumption patterns; familiarity with JavaScript for custom visualizations or front-end dashboard development
  • Comfort with AI-assisted development tools (GitHub Copilot, Cursor) to accelerate code generation while maintaining quality standards
  • Startup or fast-paced environment experience with evolving priorities and rapid iteration
  • Scientific or data-intensive domain experience (life sciences, healthcare, materials science)
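
As one example of the volume/velocity checks mentioned above, a data-quality monitor might flag a table whose daily row count falls well below its trailing average. This is a minimal sketch under assumed inputs; in practice the history would come from warehouse metadata and the alert would feed a tool like Datadog or Grafana, and the numbers and ratio below are placeholders.

```python
# Minimal sketch of a volume check: flag a table whose daily row count drops
# well below its trailing average. The history and the 0.5 ratio are
# illustrative placeholders.
from statistics import mean
from typing import Sequence


def volume_anomaly(daily_counts: Sequence[int], today_count: int,
                   min_ratio: float = 0.5) -> bool:
    """Return True when today's count is below `min_ratio` of the trailing
    average, which usually signals a late or broken upstream feed."""
    if not daily_counts:
        return False  # no history yet, nothing to compare against
    baseline = mean(daily_counts)
    return baseline > 0 and today_count < min_ratio * baseline


if __name__ == "__main__":
    history = [10_400, 9_800, 10_150, 10_600, 9_950]  # illustrative trailing window
    print(volume_anomaly(history, today_count=3_200))  # True: likely incident
    print(volume_anomaly(history, today_count=9_700))  # False: within normal range
```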

Responsibilities

  • Plan, architect, test, and deploy data warehouses, data marts, and ETL/ELT pipelines primarily within AWS and Snowflake environments
  • Build scalable data pipelines capable of handling structured, unstructured, and high-throughput biological data from diverse sources
  • Develop data models using dbt with rigorous testing, documentation, and stakeholder-aligned semantics to ensure analytics-ready datasets
  • Ensure data integrity, consistency, and accessibility across internal and external biosecurity data products
  • Develop, document, and enforce coding and data modeling standards to improve code quality, maintainability, and system performance
  • Serve as the in-house data expert, making recommendations on data architecture, pipeline improvements, and best practices; define and adapt data engineering processes to deliver reliable answers to critical biosecurity questions
  • Build high-performance APIs and microservices in Python that enable seamless integration between the biosecurity data platform and user-facing applications
  • Design backend services that support real-time and batch data access for biosecurity operations
  • Create data products that empower public health officials, analysts, and partners with actionable biosecurity intelligence
  • Democratize access to complex biosecurity datasets using AI and LLMs, making data more discoverable and usable for stakeholders
  • Apply AI-assisted development tools to accelerate code generation, data modeling, and pipeline development while maintaining high quality standards
  • Build robust, production-ready data workflows using AWS, Kubernetes, Docker, Airflow, and infrastructure-as-code (Terraform/CloudFormation); a minimal orchestration sketch follows this list
  • Diagnose system bottlenecks, optimize for cost and speed, and ensure the reliability and fault tolerance of mission-critical data pipelines
  • Implement observability, monitoring, and alerting to maintain high availability for biosecurity operations
  • Lead data projects from scoping through execution, including design, documentation, and stakeholder communication
  • Collaborate with technical leads, product managers, scientists, and data analysts to build robust data products and analytics capabilities
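
For orchestration responsibilities like those above, a production pipeline is typically expressed as an Airflow DAG. The sketch below uses Airflow 2.x-style operators; the dag_id, callables, and schedule are hypothetical placeholders, and dbt would normally be invoked through its CLI or a provider operator rather than the stub shown here.

```python
# Hypothetical daily ELT DAG in Airflow 2.x style. The dag_id, callables, and
# schedule are illustrative; the dbt step is a stub standing in for a CLI call
# or provider operator.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_partner_feed(**context):
    """Pull the latest partner delivery (e.g., from S3) into a raw staging area."""
    ...


def run_dbt_models(**context):
    """Stand-in for running dbt transformations and tests against the warehouse."""
    ...


with DAG(
    dag_id="biosurveillance_daily_elt",  # placeholder name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    extract = PythonOperator(task_id="extract_partner_feed",
                             python_callable=extract_partner_feed)
    transform = PythonOperator(task_id="run_dbt_models",
                               python_callable=run_dbt_models)

    extract >> transform  # transform only after extraction succeeds
```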

Benefits

  • Company stock awards
  • A comprehensive benefits package including medical, dental, and vision coverage
  • Health spending accounts
  • Voluntary benefits
  • Leave of absence policies
  • 401(k) program with employer contribution
  • 8 paid holidays, plus a full-week winter shutdown and an unlimited Paid Time Off policy