About The Position

As an engineer on the ML Compute team, you will drive large-scale pre-training initiatives for cutting-edge foundation models, with a focus on resiliency, efficiency, scalability, and resource optimization. You will enhance distributed training techniques, research and implement new patterns and technologies to improve system performance, maintainability, and design, and optimize the execution and performance of workloads built with JAX, PyTorch, XLA, and CUDA on large distributed systems. The role involves leveraging high-performance networking technologies such as NCCL for GPU collectives and TPU interconnect (ICI/Fabric) for large-scale distributed training, architecting a robust MLOps platform to streamline and automate pre-training operations, and operationalizing large-scale ML workloads on Kubernetes so that distributed training runs are robust, efficient, and fault-tolerant. You will also lead complex technical projects, defining requirements and tracking progress with team members, collaborate with cross-functional engineers to solve large-scale ML training challenges, and mentor engineers in your areas of expertise, fostering skill growth and knowledge sharing. Cultivating a team centered on collaboration, technical excellence, and innovation is also a key aspect of this position.

Requirements

  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • 6+ years of hands-on experience in building scalable backend systems for training and evaluation of machine learning models.
  • Proficient in relevant programming languages such as Python or Go.
  • Strong expertise in distributed systems, reliability and scalability, containerization, and cloud platforms.
  • Proficient in cloud computing infrastructure and tools such as Kubernetes, Ray, and PySpark.
  • Ability to clearly and concisely communicate technical and architectural problems.

Nice To Haves

  • Advanced degree in Computer Science, Engineering, or a related field.
  • Proficient in working with and debugging accelerators such as GPUs, TPUs, and AWS Trainium.
  • Proficient in ML training and deployment frameworks such as JAX, TensorFlow, PyTorch, TensorRT, and vLLM.

Responsibilities

  • Drive large-scale pre-training initiatives to support cutting-edge foundation models.
  • Enhance distributed training techniques for foundation models.
  • Research and implement new patterns and technologies to improve system performance, maintainability, and design.
  • Optimize execution and performance of workloads built with JAX, PyTorch, XLA, and CUDA on large distributed systems.
  • Leverage high-performance networking technologies such as NCCL for GPU collectives and TPU interconnect (ICI/Fabric) for large-scale distributed training.
  • Architect a robust MLOps platform to streamline and automate pre-training operations.
  • Operationalize large-scale ML workloads on Kubernetes, ensuring distributed training runs are robust, efficient, and fault-tolerant.
  • Lead complex technical projects, defining requirements and tracking progress with team members.
  • Collaborate with cross-functional engineers to solve large-scale ML training challenges.
  • Mentor engineers in areas of your expertise, fostering skill growth and knowledge sharing.
  • Cultivate a team centered on collaboration, technical excellence, and innovation.