About The Position

At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you’ll discover the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

Requirements

  • Expert-level proficiency in parallel programming models such as HIP/ROCm, OpenCL, and SYCL (including CUDA interop), and in performance tuning at scale (see the minimal HIP sketch after this list).
  • Deep knowledge of GPU microarchitecture (compute units/SIMD, wavefront scheduling, memory hierarchy, caches, shared/local memory, interconnects, PCIe/Infinity Fabric).
  • Hands-on experience with profiling, instrumentation, and performance counters; familiarity with ROCm profiling tools, Radeon GPU Profiler, Radeon Memory Visualizer, and LLVM-based toolchains.
  • Proven track record building diagnostics: sanitizers, static/dynamic analysis, fuzzing, crash triage systems, telemetry pipelines; experience with kernel-level debugging and driver/runtime fault isolation.
  • Strong systems background: Linux, kernel, drivers, multi-node HPC/cloud, NUMA, distributed training/inference pipelines.
  • Proficiency in C/C++ and Python; familiarity with LLVM IR, GPU ISAs, and performance modeling.
  • Demonstrated impact influencing architecture and shipping high-performance, reliable products at scale.
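
For the HIP/ROCm item above, here is a minimal, illustrative sketch of the HIP programming model only. It is not taken from this posting; it assumes a ROCm toolchain with hipcc, and all names (vec_add, the host/device buffers) are invented for the example.

    // Minimal HIP vector-add sketch (illustrative; assumes ROCm + hipcc).
    #include <hip/hip_runtime.h>
    #include <cstdio>
    #include <vector>

    // Each thread adds one element of a and b and writes the sum to c.
    __global__ void vec_add(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n, 0.0f);

        float *da, *db, *dc;
        hipMalloc((void**)&da, n * sizeof(float));
        hipMalloc((void**)&db, n * sizeof(float));
        hipMalloc((void**)&dc, n * sizeof(float));
        hipMemcpy(da, ha.data(), n * sizeof(float), hipMemcpyHostToDevice);
        hipMemcpy(db, hb.data(), n * sizeof(float), hipMemcpyHostToDevice);

        // 256 threads per block; enough blocks to cover all n elements.
        dim3 block(256), grid((n + 255) / 256);
        vec_add<<<grid, block>>>(da, db, dc, n);
        hipDeviceSynchronize();

        hipMemcpy(hc.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
        std::printf("c[0] = %f\n", hc[0]);  // expect 3.0

        hipFree(da); hipFree(db); hipFree(dc);
        return 0;
    }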

Nice To Haves

  • Experience with AMD GPU architectures (CDNA/RDNA) and the ROCm ecosystem.
  • Contributions to open-source compilers/runtimes/tools; leadership in standards bodies.
  • Experience with ML/DL compiler stacks (TVM, XLA, Triton) and performance engineering for AI workloads.
  • Background in reliability/availability/serviceability (RAS), fault tolerance, error containment, and predictive analytics for accelerators.

Responsibilities

  • Define the long-term technical roadmap for GPU performance engineering, diagnostics, and reliability/availability/serviceability (RAS).
  • Establish best-in-class methodologies for performance modeling, profiling, and optimization across ROCm/HIP, OpenCL, SYCL, CUDA interoperability, and ML/DL frameworks.
  • Influence architecture decisions with data-backed insights on compute, memory hierarchy, interconnect, scheduling, and compiler/runtime impacts.
  • Architect end-to-end performance workflows: microbenchmarks, workload characterization, bottleneck analysis, instrumentation, and guided optimization.
  • Lead development of profiling and visualization capabilities (e.g., pipeline stages, wavefront occupancy, cache/memory behavior, interconnect utilization, synchronization overhead).
  • Drive compiler and runtime optimizations, including code generation, vectorization, register allocation, memory tiling, and kernel fusion.
  • Build scalable auto-tuning systems for kernels (GEMM/conv, attention, graph workloads) across different GPU generations and system topologies (see the auto-tuning sketch after this list).
  • Design advanced diagnostics to detect, localize, and triage GPU defects across silicon, firmware, driver, runtime, and application layers.
  • Develop static/dynamic analysis tools, sanitizers, fuzzing, and differential testing frameworks targeted at GPU execution and memory models.
  • Integrate telemetry and performance counters for fault detection (ECC, parity, timeouts, hangs, deadlocks, memory corruption, race conditions) and create automated triage pipelines.
  • Collaborate on DFT/DFD strategies, bring-up, validation, and manufacturing test enhancements to improve yield and field reliability.
  • Partner with RAS teams to enhance error containment, recovery, and predictive failure analytics; contribute to BIST and in-field diagnostics strategy.
  • Work closely with Architecture, Silicon Design, Firmware, Drivers, Compiler/Runtime, Tools, QA/Validation, and Customer Engineering to deliver performance and reliability targets.
  • Serve as a primary technical interface for strategic customers, ISVs, hyperscalers, and HPC partners, shaping workload optimization and diagnostics deployment.
  • Mentor senior/principal engineers and lead technical reviews; cultivate a culture of rigorous measurement, reproducibility, and engineering excellence.
  • Publish influential research, secure patents, and present at top-tier conferences (SC, Hot Chips, ISCA, PACT, MICRO, GTC).
  • Contribute to standards and open-source initiatives in heterogeneous computing, performance tools, and reliability.
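
As a rough illustration of the kernel auto-tuning responsibility above, the sketch below times one hypothetical kernel across a few launch configurations and keeps the fastest. It is a simplified sketch only, assuming a ROCm toolchain with hipcc; the kernel (scale_kernel) and the candidate block sizes are invented for the example, and a production auto-tuner would search a far larger space (tiling, unrolling, data layout) per GPU generation.

    // Toy auto-tuning sketch: pick the fastest block size for one kernel.
    // Illustrative only; assumes ROCm + hipcc.
    #include <hip/hip_runtime.h>
    #include <cstdio>

    // Hypothetical kernel whose best launch configuration we want to find.
    __global__ void scale_kernel(float* x, float s, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= s;
    }

    int main() {
        const int n = 1 << 24;
        float* dx;
        hipMalloc((void**)&dx, n * sizeof(float));
        hipMemset(dx, 0, n * sizeof(float));

        hipEvent_t start, stop;
        hipEventCreate(&start);
        hipEventCreate(&stop);

        const int candidates[] = {64, 128, 256, 512, 1024};
        int best_block = 0;
        float best_ms = 1e30f;

        for (int block : candidates) {
            int grid = (n + block - 1) / block;

            // Warm-up launch so the timed run excludes one-time setup costs.
            scale_kernel<<<grid, block>>>(dx, 1.001f, n);
            hipDeviceSynchronize();

            // Time a single launch with device-side events.
            hipEventRecord(start, 0);
            scale_kernel<<<grid, block>>>(dx, 1.001f, n);
            hipEventRecord(stop, 0);
            hipEventSynchronize(stop);

            float ms = 0.0f;
            hipEventElapsedTime(&ms, start, stop);
            std::printf("block=%4d  %.3f ms\n", block, ms);
            if (ms < best_ms) { best_ms = ms; best_block = block; }
        }
        std::printf("best block size: %d (%.3f ms)\n", best_block, best_ms);

        hipEventDestroy(start);
        hipEventDestroy(stop);
        hipFree(dx);
        return 0;
    }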

Benefits

  • AMD benefits at a glance.

What This Job Offers

  • Job Type: Full-time
  • Career Level: Senior
  • Education Level: Ph.D. or professional degree
  • Number of Employees: 5,001-10,000
