Advanced Micro Devices • Posted 3 months ago
Full-time • Senior
Hybrid • San Jose, CA
5,001-10,000 employees
Computer and Electronic Product Manufacturing

We care deeply about transforming lives with AMD technology to enrich our industry, our communities, and the world. Our mission is to build great products that accelerate next-generation computing experiences: the building blocks for the data center, artificial intelligence, PCs, gaming, and embedded. Underpinning our mission is the AMD culture. We push the limits of innovation to solve the world's most important challenges. We strive for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives.

We are looking for a Principal Machine Learning Engineer to join our Models and Applications team. If you are excited by the challenge of distributed training of large models across a large number of GPUs, and if you are passionate about improving training efficiency while innovating and generating new ideas, then this role is for you. You will be part of a world-class team focused on addressing the challenge of training generative AI at scale.

Responsibilities:

  • Train large models to convergence on AMD GPUs at scale.
  • Improve the end-to-end training pipeline performance.
  • Optimize distributed training pipelines and algorithms to scale out.
  • Contribute your changes to open source.
  • Stay up-to-date with the latest training algorithms.
  • Influence the direction of the AMD AI platform.
  • Collaborate with groups and stakeholders across teams.

Qualifications:

  • Experience with distributed training pipelines.
  • Knowledgeable in distributed training algorithms (Data Parallel, Tensor Parallel, Pipeline Parallel, ZeRO); a minimal data-parallel sketch follows this list.
  • Familiar with training large models at scale.
  • Excellent Python or C++ programming skills, including debugging, profiling, and performance analysis at scale.
  • Experience with ML infrastructure at the kernel, framework, or system level.
  • Strong communication and problem-solving skills.
  • Experience with ML frameworks such as PyTorch, JAX, or TensorFlow.
  • Experience with distributed training and distributed training frameworks such as Megatron-LM and DeepSpeed.
  • Experience with LLMs or computer vision, especially large models, is a plus.
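To make the data-parallel item above concrete, here is a minimal sketch of a PyTorch DistributedDataParallel (DDP) training loop launched with torchrun. The model, data, hyperparameters, and script name are placeholders chosen for brevity, not AMD's actual training stack; on AMD GPUs the same torch.cuda APIs are provided by the ROCm build of PyTorch, with the "nccl" backend served by RCCL.

```python
# Illustrative sketch only: minimal data-parallel training with PyTorch DDP.
# All model/data details are placeholders standing in for a real LLM workload.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    device = torch.device("cuda", local_rank)
    torch.cuda.set_device(device)

    # Placeholder model; DDP replicates it and all-reduces gradients.
    model = torch.nn.Linear(1024, 1024).to(device)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        # Synthetic batch in place of a real sharded dataset.
        inputs = torch.randn(32, 1024, device=device)
        targets = torch.randn(32, 1024, device=device)
        loss = torch.nn.functional.mse_loss(model(inputs), targets)
        optimizer.zero_grad()
        loss.backward()   # gradients are averaged across ranks here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

A run on a single 8-GPU node would look like `torchrun --nproc_per_node=8 train_ddp.py` (the script name is hypothetical). Frameworks such as Megatron-LM and DeepSpeed build tensor, pipeline, and ZeRO-style sharded parallelism on top of this same process-group foundation.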