As an engineer on the ML Compute team, you will:

- Drive large-scale pre-training initiatives for cutting-edge foundation models, focusing on resiliency, efficiency, scalability, and resource optimization.
- Enhance distributed training techniques for foundation models, and research and implement new patterns and technologies that improve system performance, maintainability, and design.
- Optimize the execution and performance of workloads built with JAX, PyTorch, XLA, and CUDA on large distributed systems.
- Leverage high-performance networking technologies such as NCCL for GPU collectives and TPU interconnect (ICI/Fabric) for large-scale distributed training.
- Architect a robust MLOps platform to streamline and automate pre-training operations, and operationalize large-scale ML workloads on Kubernetes so that distributed training jobs are robust, efficient, and fault-tolerant.
- Lead complex technical projects, defining requirements and tracking progress with team members.
- Collaborate with cross-functional engineers to solve large-scale ML training challenges.
- Mentor engineers in your areas of expertise, fostering skill growth and knowledge sharing.
- Cultivate a team centered on collaboration, technical excellence, and innovation.
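For context on the collectives work mentioned above: libraries such as NCCL commonly implement gradient summation with a ring all-reduce. Below is a minimal pure-Python simulation of that pattern — illustrative only; the function and variable names are hypothetical and this is not any real NCCL or JAX API:

```python
def ring_allreduce(rank_chunks):
    """Simulate ring all-reduce, the collective pattern commonly used to
    sum gradients across devices with O(n) per-rank bandwidth.
    `rank_chunks[r][c]` is rank r's local copy of chunk c (list of numbers)."""
    n = len(rank_chunks)
    # Work on copies so callers' buffers are untouched.
    bufs = [[list(chunk) for chunk in rank] for rank in rank_chunks]

    # Phase 1: reduce-scatter. Each step, every rank passes one chunk to
    # its right-hand neighbour, which accumulates it. After n-1 steps,
    # rank r owns the fully summed chunk (r + 1) % n.
    for step in range(n - 1):
        # Snapshot payloads first so all "sends" in a step are simultaneous.
        sends = [(r, (r - step) % n, list(bufs[r][(r - step) % n]))
                 for r in range(n)]
        for r, c, payload in sends:
            dst = (r + 1) % n
            bufs[dst][c] = [a + b for a, b in zip(bufs[dst][c], payload)]

    # Phase 2: all-gather. Each rank circulates its completed chunk around
    # the ring, so every rank ends with the full reduced vector.
    for step in range(n - 1):
        sends = [(r, (r + 1 - step) % n, list(bufs[r][(r + 1 - step) % n]))
                 for r in range(n)]
        for r, c, payload in sends:
            bufs[(r + 1) % n][c] = payload

    return bufs


# Usage: 3 ranks, each holding a 3-chunk vector; every rank ends with the
# element-wise sum across ranks.
data = [[[10 * r + c] for c in range(3)] for r in range(3)]
result = ring_allreduce(data)
```

Splitting the vector into one chunk per rank is what keeps each rank's traffic proportional to the data size rather than the rank count — the property that makes the pattern scale to large clusters.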
Education Level: Bachelor's degree
Number of Employees: 5,001-10,000 employees