Principal / Senior GPU Software Performance Engineer — Post‑Training

Advanced Micro Devices, Inc. · San Jose, CA
Posted 15 days ago · Hybrid

About The Position

At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you’ll discover the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

The Role

Drive the performance of post‑training workloads on AMD Instinct™ GPUs. You’ll work across kernels, distributed training, and framework integrations to deliver fast, stable, and reproducible training pipelines on ROCm.

The Person

The ideal candidate is passionate about software engineering and the craft of training performance. You drive sophisticated cross‑stack issues—spanning data loaders, kernels, distributed training, and compilers—to clear resolution. You communicate crisply and collaborate effectively with framework, compiler, kernel, and model teams across AMD, delivering measurable improvements with rigor, ownership, and reproducibility.

Nice To Haves

  • Proven GPU performance engineering for deep learning (ROCm/HIP, Triton, or similar).
  • Hands‑on with SFT, LoRA, and RL‑based training at scale.
  • Strong PyTorch experience (torch.distributed, FSDP/ZeRO or equivalent).
  • Proficient in Python and C++; comfortable reading/writing kernels when needed.
  • Experience with distributed systems and collective communication libraries.
  • Track record of turning profiles into fixes, upstreaming changes, and documenting results.

Responsibilities

  • Lead performance for finetuning and RL training solutions on AMD GPUs.
  • Improve throughput, memory efficiency, and stability across data, model, and optimizer steps.
  • Optimize multi‑GPU/multi‑node training and communication patterns.
  • Contribute efficient kernels/ops and targeted graph‑level optimizations.
  • Profile, diagnose, and resolve bottlenecks using standard tooling; prevent regressions in CI.
  • Ship reproducible pipelines and documentation adopted by internal teams and external developers.
  • Collaborate with framework, compiler, and model teams to land durable improvements.

Benefits

  • AMD benefits at a glance.