Build and scale distributed training systems that power frontier model pre-training.

- Work closely with research teams to design and operate large-scale training runs for foundation models.
- Develop infrastructure that enables efficient training across thousands of GPUs using modern distributed training frameworks.
- Optimize training throughput, stability, and efficiency for large model training workloads.
- Collaborate directly with pre-training researchers to translate experimental ideas into scalable, production-ready training systems.
- Improve the performance of distributed training workloads by optimizing communication, memory usage, and GPU utilization.
- Build and maintain training pipelines that support large-scale datasets, checkpointing, and experiment iteration.
- Debug and resolve performance bottlenecks across the distributed training stack, including model parallelism, GPU communication, and training runtime systems.
- Contribute to systems that enable rapid experimentation and iteration on new training techniques.
Job Type: Full-time
Career Level: Mid Level
Education Level: No Education Listed
Number of Employees: 1-10 employees