Fellow GPU Performance Optimization Engineer

Advanced Micro Devices, Inc.
San Jose, CA (Hybrid)

About The Position

At AMD, our mission is to build great products that accelerate next-generation computing experiences, from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you’ll discover the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges, striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

The Role

We are seeking a Fellow GPU Performance Optimization Engineer to join our Models and Applications team. This role focuses on maximizing the performance and efficiency of large-scale AI training workloads on AMD GPU platforms. You will drive innovations across the full software-hardware stack, optimizing distributed training at scale and pushing the limits of system throughput, scalability, and utilization for generative AI workloads. This position requires deep expertise in GPU performance analysis, distributed systems, and ML workloads, along with the ability to influence architecture, software ecosystems, and best practices across the organization.

The Person

The ideal candidate is a recognized technical leader with deep expertise in GPU performance optimization, large-scale distributed training, and system-level bottleneck analysis. You have a strong understanding of GPU architecture, interconnects, memory hierarchies, and communication patterns, and can translate this knowledge into measurable improvements in training efficiency at scale. You are comfortable operating across layers, from kernels and runtimes to frameworks and distributed strategies, and have a track record of driving impactful optimizations and influencing technical direction.

Requirements

  • Deep expertise in GPU architecture and performance characteristics (compute units, memory hierarchy, interconnects such as PCIe/Infinity Fabric/RDMA).
  • Strong experience with performance profiling tools (e.g., ROCm tools, Nsight-like systems, custom profilers) and bottleneck analysis.
  • Proven experience optimizing large-scale distributed training workloads across thousands of GPUs.
  • Experience with distributed training frameworks such as Megatron-LM, Torchtitan, MaxText, or equivalent.
  • Strong understanding of communication libraries and patterns (e.g., NCCL/RCCL, collective ops, overlap of compute and communication).
  • Expertise in ML frameworks (PyTorch, JAX, TensorFlow) with a focus on performance tuning.
  • Proficiency in Python and at least one systems language (C++/CUDA/HIP), including debugging and low-level optimization.
  • Demonstrated technical leadership and ability to influence cross-functional teams.

Nice To Haves

  • Experience with compiler stacks, kernel optimization, or graph-level optimization.

Responsibilities

  • Lead performance optimization of large-scale AI training workloads on AMD GPU platforms across single-node and multi-node environments.
  • Identify and eliminate system bottlenecks across compute, memory, and communication (e.g., kernel efficiency, memory bandwidth, network utilization).
  • Optimize distributed training strategies (data, tensor, and pipeline parallelism, ZeRO, etc.) for scalability and efficiency on AMD hardware.
  • Drive cross-stack optimizations spanning kernels, compilers, runtimes, communication libraries, and ML frameworks.
  • Develop and apply advanced profiling, benchmarking, and performance modeling methodologies.
  • Collaborate with hardware, compiler, and framework teams to influence next-generation GPU architecture and software stack design.
  • Contribute to and lead open-source efforts to improve ecosystem performance on AMD platforms.
  • Define best practices and guide teams on performance tuning for large-scale training workloads.
  • Stay at the forefront of advancements in large-scale training systems and performance optimization techniques.

Benefits

  • AMD benefits at a glance.

What This Job Offers

Job Type

Full-time

Career Level

Mid Level

Education Level

Ph.D. or professional degree

Number of Employees

5,001-10,000 employees
