About The Position

Manifold Bio builds AI models for protein therapeutic design, trained on proprietary experimental data generated at unprecedented scale. Our in vivo-centric discovery platform produces millions of experimentally validated protein designs per campaign, creating the datasets that make our models possible and our approach uniquely powerful. We combine high-throughput protein engineering with computational design to create antibody-like drugs and other biologics. Our world-class team of protein engineers, biologists, and computational scientists is working together to aim the platform at therapeutic opportunities where precise targeting is the key to overcoming clinical challenges.

Position

Manifold's AI team is actively training protein foundation models on our proprietary experimental datasets. Our generative antibody design model, mBER, has already demonstrated controllable de novo binder design across multiple million-scale screening campaigns, and the team is now scaling foundation model capabilities to push well beyond current performance.

We are looking for an AI/ML Scientist to join this effort. You will work alongside our existing model training team to accelerate the development of foundation models fine-tuned on Manifold's data, bringing additional depth in pre-training methodology, architecture development, and large-scale training. Your work will directly improve mBER's design capabilities and unlock new modeling paradigms for the broader team. You'll own foundation model projects end to end, from architecture selection and training infrastructure to evaluation against real experimental outcomes, while contributing to the team's shared research agenda.

Requirements

  • Demonstrated experience pretraining and/or fine-tuning protein foundation models (folding, docking, language models, or generative design) with published or otherwise demonstrable results
  • Strong familiarity with AlphaFold architecture and training methodology
  • 2+ years of hands-on experience with PyTorch and/or JAX for deep learning
  • Experience with large-scale model training: distributed training, multi-GPU/multi-node setups, mixed precision, gradient checkpointing
  • Solid understanding of deep learning architectures (transformers, attention mechanisms, diffusion/flow matching) and optimization techniques
  • Experience working with protein structure data (PDB, mmCIF) and/or protein sequence datasets
  • Strong statistical analysis and experimental design skills
  • Proficiency in Python scientific computing stack (NumPy, Pandas, scikit-learn)
  • Self-directed researcher who can balance guidance with independence
  • Excellent written and verbal communication skills for cross-functional collaboration

Nice To Haves

  • Experience with protein generative design methods (e.g., RFdiffusion, ProteinMPNN, flow matching approaches)
  • Experience with protein language models (e.g., ESM family)
  • Published research in computational biology, protein design, or structural biology
  • Experience training on proprietary or domain-specific biological datasets
  • Familiarity with Ray for distributed computing
  • Experience with Kubernetes (EKS) and cloud computing platforms (AWS)
  • Knowledge of protein engineering, directed evolution, or structural biology wet lab techniques
  • Experience working with agentic AI coding tools for fast, parallelized execution of modeling experiments
  • Previous biotech/pharma industry experience

Responsibilities

  • Advance the team's ongoing foundation model training efforts—pretraining, fine-tuning, and evaluating folding, docking, language, and generative design models on Manifold's proprietary experimental data
  • Bring depth in training methodology, architecture selection, and optimization to complement the existing team's expertise
  • Develop and scale training pipelines for distributed, multi-GPU and multi-node training runs
  • Integrate foundation model outputs into mBER to improve binder design success rates and enable new design capabilities
  • Design and execute ML experiments with clear hypotheses, rigorous evaluation frameworks, and systematic analysis
  • Establish best practices for mixed-precision training, gradient checkpointing, and computational efficiency at scale
  • Produce clear documentation and analysis supporting architecture and training decisions