Research Scientist / Engineer – Training Infrastructure

IntelliPro Group Inc. | Palo Alto, CA
$220,000 - $300,000

About The Position

We believe that multimodality is critical for intelligence. To go beyond language models and build more aware, capable, and useful systems, the next step-function change will come from vision. So we are training and scaling up multimodal foundation models for systems that can see and understand, show and explain, and eventually interact with our world to effect change. We are looking for engineers with significant experience solving hard problems in PyTorch, CUDA, and distributed systems. You will work alongside the rest of the research team to build and train cutting-edge foundation models, designed to scale from the ground up, on thousands of GPUs.

Requirements

  • Extensive experience with distributed PyTorch training and parallelism strategies for foundation models
  • Deep understanding of GPU clusters, networking, and storage systems
  • Familiarity with communication libraries (NCCL, MPI) and distributed system optimization

Nice To Haves

  • Strong Linux systems administration and scripting capabilities
  • Experience managing training runs across >100 GPUs
  • Experience with containerization, orchestration, and cloud infrastructure

Responsibilities

  • Design, implement, and optimize efficient distributed training systems for models trained on thousands of GPUs
  • Research and implement advanced parallelization techniques (FSDP, Tensor Parallel, Pipeline Parallel, Expert Parallel); see the sketch after this list
  • Build monitoring, visualization, and debugging tools for large-scale training runs
  • Optimize training stability, convergence, and resource utilization across massive clusters
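
The parallelization techniques named above are standard PyTorch tooling. As a rough illustration of the day-to-day work, here is a minimal sketch of wrapping a model with FSDP and taking one distributed training step. The stand-in model, hyperparameters, and random data are placeholder assumptions for illustration only, not this team's code.

    import os
    import torch
    import torch.distributed as dist
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

    def main():
        # One process per GPU; torchrun sets RANK, WORLD_SIZE, and LOCAL_RANK.
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        # Small stand-in model; a real run would use a multimodal foundation model.
        model = torch.nn.Transformer(d_model=512, nhead=8).cuda()
        # FSDP shards parameters, gradients, and optimizer state across ranks.
        model = FSDP(model)
        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

        # One illustrative step with random tensors in place of a real data loader.
        src = torch.randn(10, 4, 512, device="cuda")
        tgt = torch.randn(10, 4, 512, device="cuda")
        loss = model(src, tgt).pow(2).mean()
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

A sketch like this would typically be launched with torchrun (for example, torchrun --nproc_per_node=<num_gpus> train_fsdp.py), with NCCL handling the inter-GPU communication referenced in the requirements.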