Staff Software Engineer, Data Infrastructure, AI Compute Platform

Biohub | Redwood City, CA
$214,000 - $295,000 | Hybrid

About The Position

Biohub is the first large-scale initiative bringing frontier AI models, massive compute, and cutting-edge experimental capabilities under one roof. We're building a general-purpose system to accelerate scientific discovery, integrating frontier AI models, biological foundation models, and lab capabilities, with the ultimate goal of curing disease. Our technology powers scientists around the world, translating AI capabilities into tools that accelerate research everywhere.

The Team

Our AI research team sits at the heart of our mission to unlock new dimensions of biological understanding. You will leverage state-of-the-art AI to accelerate discovery and drive transformative insights in biology: developing novel AI models purpose-built for biological research, engineering robust systems that enable breakthrough science at unprecedented scale, and translating these advances into practical tools that empower researchers worldwide. Our approach is comprehensive and integrated, bringing together world-class AI model development, exceptional engineering talent, high-quality biological data, powerful computing infrastructure, and strategic partnerships. Success requires excellence across five interconnected pillars: training frontier AI models specifically for biology; building engineering systems that maximize research velocity and efficiency; executing a sophisticated data strategy that fuels AI development; operating a world-class AI compute platform; and creating impactful products that transform AI capabilities into accessible scientific tools.

The Opportunity

The Data Engineering and Infrastructure team applies AI/ML technology and data in new ways to drive AI-powered solutions that accelerate biomedical research. We are uniquely positioned to design, build, and scale software systems that help scientists address the myriad challenges they face.
We support researchers and scientists around the world by developing the capacity to apply state-of-the-art methods in artificial intelligence and machine learning to important problems in the biomedical sciences. This team builds shared tools and platforms used across Biohub, partnering with and supporting an extensive group of Research Scientists, Data Scientists, AI Research Scientists, and Computational Biologists. Members of the shared infrastructure engineering team have an impact on all of Biohub’s initiatives by enabling the technology solutions other engineering teams use to build a frontier model and scale the feedback loop.

Requirements

  • BS, MS, or PhD in Computer Science or a related technical discipline, or equivalent experience. 8+ years of hands-on coding experience in scripting languages (Python, PHP, Ruby) and systems languages (Rust, C++, C#, Go, Java, or Scala).
  • Data Platform Expertise: Proficiency in managing large-scale data operations, including designing scalable pipelines (streaming and batch), working with varied data types, and optimizing flexible storage solutions using tools like Argo Workflows, Databricks, Slurm, etc.
  • Data Management and Development Operations: Experience with data governance, metadata, and data lineage tooling such as OpenLineage or Marquez. Deep experience building CI/CD pipelines for data infrastructure and the associated observability and monitoring tooling, such as Prometheus, Grafana, OpenTelemetry, or Honeycomb.
  • Supporting Complex Data and AI/ML Workflows: Experience addressing end-to-end data needs for complex data and delivering it in a form ready for model training, working directly with AI Researchers and AI Engineers as part of AI model training project teams. Knowledge of data pipelines that feed deep learning model training on accelerated computing systems (GPUs, as well as ASICs, FPGAs, and TPUs).
  • Containerization: Extensive experience scaling containerized applications on Kubernetes or Mesos, with a focus on secure custom containers, reproducibility, and portability.
  • Cloud & Infrastructure: Strong experience with AWS, GCP, or Azure. Exposure to hybrid environments with on-prem and colocation systems is a plus. Familiarity with Infrastructure as Code (e.g., Terraform, Ansible) and monitoring tools (Datadog, Prometheus).
  • Collaboration & Problem Solving: Proven ability to work with diverse, cross-functional stakeholders and teams to navigate complex technical challenges, adapt to evolving requirements, and drive impactful solutions.

Responsibilities

  • Develop and maintain the tooling and infrastructure that drives the entire data lifecycle at Biohub, from ingestion and processing to secure storage and access. Your work will directly support cutting-edge research, AI model training, and data analysis across the organization.
  • Partner with researchers and engineers across various domains, including genetics, imaging, and scientific literature. You'll work on a wide range of use cases, from web analytics to complex model training, ensuring data accessibility and performance for teams across Biohub.
  • Design and implement flexible, scalable, and performant systems to address our stakeholders’ needs, leveraging technologies like Argo Workflows, Slurm, Ray, and AWS ParallelCluster for mass-scale job processing and orchestration; VAST Data, Delta Lake, Databricks, and Apache Iceberg for data management and access; and cloud and on-prem HPC resources.

Benefits

  • A generous employer match on employee 401(k) contributions to support planning for the future.
  • Paid time off to volunteer at an organization of your choice.
  • Funding for select family-forming benefits.
  • Relocation support for employees who need assistance moving.