About The Position

Build and scale the distributed training systems that power frontier model pre-training. In this role you will partner closely with pre-training researchers to design and operate large-scale training runs for foundation models, develop infrastructure that trains efficiently across thousands of GPUs, and optimize the throughput, stability, and efficiency of large training workloads. The full set of duties is listed under Responsibilities below.

Requirements

  • Experience building or operating distributed training systems for large machine learning models.
  • Strong experience working with modern distributed training frameworks such as Megatron, DeepSpeed, or similar large-scale training systems.
  • Familiarity with large-scale model parallelism strategies (data, tensor, pipeline, or expert parallelism).
  • Experience optimizing training throughput and GPU utilization in large distributed environments.
  • Familiarity with GPU communication libraries such as NCCL and performance tuning for distributed workloads.
  • Experience working closely with ML researchers to productionize experimental training workflows.
  • Strong debugging skills across GPU compute, distributed training systems, and large-scale ML pipelines.
  • Experience working with large datasets and training pipelines used for foundation model pre-training.

Responsibilities

  • Build and scale distributed training systems that power frontier model pre-training.
  • Work closely with research teams to design and operate large-scale training runs for foundation models.
  • Develop infrastructure that enables efficient training across thousands of GPUs using modern distributed training frameworks.
  • Optimize training throughput, stability, and efficiency for large model training workloads.
  • Collaborate directly with pre-training researchers to translate experimental ideas into scalable, production-ready training systems.
  • Improve performance of distributed training workloads through optimization of communication, memory usage, and GPU utilization.
  • Build and maintain training pipelines that support large-scale datasets, checkpointing, and experiment iteration.
  • Debug and resolve performance bottlenecks across distributed training stacks including model parallelism, GPU communication, and training runtime systems.
  • Contribute to the development of systems that enable rapid experimentation and iteration on new training techniques.

Benefits

  • Top-tier compensation: Salary and equity structured to recognize and retain the best talent globally.
  • Health & wellness: Comprehensive medical, dental, vision, life, and disability insurance.
  • Life & family: Fully paid parental leave for all new parents, including adoptive and surrogate journeys. Financial support for family planning.
  • Benefits & balance: Paid time off when you need it, relocation support, and other perks that make the most of your time.
  • Opportunities to connect with teammates: Lunch and dinner are provided daily, and we hold regular off-sites and team celebrations.

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: None listed
  • Number of Employees: 1-10 employees
