Principal ML Engineer - Large Scale Training Performance Optimization

Advanced Micro Devices, Inc. · San Jose, CA
Hybrid

About The Position

At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you’ll discover the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

We are looking for a Principal Machine Learning Engineer to join our Models and Applications team. If you are excited by the challenge of distributed training of large models across large numbers of GPUs, and if you are passionate about improving training efficiency while innovating and generating new ideas, then this role is for you. You will be part of a world-class team focused on addressing the challenge of training generative AI at scale. The ideal candidate should have experience with distributed training pipelines, be knowledgeable in distributed training algorithms (Data Parallel, Tensor Parallel, Pipeline Parallel, Expert Parallel, ZeRO), and be familiar with training large models at scale.

Requirements

  • Experience with ML/DL frameworks such as PyTorch, JAX, or TensorFlow.
  • Experience with distributed training and distributed training frameworks such as Megatron-LM, MaxText, or TorchTitan.
  • Excellent Python or C++ programming skills, including debugging, profiling, and performance analysis at scale.
  • Experience with ML infrastructure at the kernel, framework, or system level.
  • Strong communication and problem-solving skills.
  • A master's or PhD degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field.

Nice To Haves

  • Experience with LLMs or computer vision, especially large models, is a plus.
  • Experience with GPU kernel optimization is a plus.

Responsibilities

  • Train large models to convergence on AMD GPUs at scale.
  • Improve the end-to-end training pipeline performance.
  • Optimize the distributed training pipeline and algorithm to scale out.
  • Contribute your changes to open source.
  • Stay up-to-date with the latest training algorithms.
  • Influence the direction of the AMD AI platform.
  • Collaborate with various groups and stakeholders across teams.

Benefits

  • AMD benefits at a glance.

What This Job Offers

Job Type

Full-time

Career Level

Principal

Number of Employees

5,001-10,000 employees
