Principal GPU Performance Engineer - Artificial Intelligence

Advanced Micro Devices, Inc., Santa Clara, CA

About The Position

At AMD, our mission is to build great products that accelerate next-generation computing experiences, from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges, striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

The Role

We are seeking a GPU Performance Engineer to optimize AI training and inference workloads and guide the evolution of next-generation AMD Instinct GPU architectures. In this role, you will work across the software and hardware stack to profile, analyze, and improve the efficiency of training foundation models on large-scale GPU clusters. You will collaborate with framework developers, distributed systems engineers, and hardware architects to ensure our GPUs deliver industry-leading training performance today while shaping the designs of tomorrow.

The Person

We value curiosity and innovation, and we're committed to providing a challenging and supportive environment where you can learn and grow. As you collaborate with your peers, you'll have the opportunity to make a real impact and contribute to our organization's success.

Requirements

  • Strong expertise in GPU tuning and optimization (CUDA, ROCm, or equivalent).
  • Understanding of GPU microarchitecture (execution units, memory hierarchy, interconnects, warp scheduling).
  • Hands-on experience with distributed training and inference frameworks and communication libraries (e.g., PyTorch DDP, DeepSpeed, Megatron-LM, NCCL/RCCL, MPI).
  • Advanced Linux, container (e.g., Docker), and GitHub skills.
  • Proficiency in Python or C++ for performance-critical development.
  • Familiarity with large-scale AI training and inference infrastructure (NVLink, InfiniBand, PCIe, cloud/HPC clusters).
  • Experience in benchmarking methodologies, performance analysis/profiling (e.g. Nsight), performance monitoring tools.

Nice To Haves

  • Experience scaling training to thousands of GPUs for foundation models.
  • Strong track record of optimizing large-scale AI systems in cloud or HPC environments.

Responsibilities

  • Profile and optimize large-scale AI training and inference workloads (transformers, multimodal, diffusion, recommender systems) across multi-node, multi-GPU clusters.
  • Identify bottlenecks in compute, memory, interconnects, and communication libraries (NCCL/RCCL, MPI), and deliver optimizations to maximize scaling efficiency.
  • Collaborate with compiler/runtime teams to improve kernel performance, scheduling, and memory utilization.
  • Develop, maintain and recommend benchmarks representative of foundation model AI training and inference workloads.
  • Provide performance insights to AMD Instinct GPU architecture teams, informing hardware/software co-design decisions for future architectures.
  • Partner with framework teams (PyTorch, JAX, TensorFlow) to upstream performance improvements and enable better scaling APIs.
  • Present findings to cross-functional teams and leadership, shaping both software and hardware roadmaps.

Benefits

  • AMD benefits at a glance.


What This Job Offers

  • Job Type: Full-time
  • Career Level: Principal
  • Industry: Computer and Electronic Product Manufacturing
  • Number of Employees: 5,001-10,000
