Triton is a widely adopted language and compiler for high-performance GPU kernels, powering major AI frameworks such as PyTorch, vLLM, and SGLang. As AI workloads increasingly rely on Triton-based kernels, first-class Triton support is strategically critical to AMD's AI software roadmap. AMD GPUs are an official Triton backend, and delivering industry-leading Triton performance on AMD Instinct accelerators is a top priority for AMD. The performance and usability of Triton directly impact the competitiveness of AMD hardware in large-scale AI training and inference.

In this role you will author state-of-the-art, high-performance Triton/Gluon kernels for the ML workloads powering the latest AI models. You will collaborate with research, compiler, and hardware architecture teams to co-design high-performance solutions and analyze bottlenecks, making AMD GPUs the best-in-class platform for Triton-powered AI workloads.

The ideal candidate has deep expertise in SIMT programming, parallel algorithms, GPU architecture, and performance engineering. You are comfortable working across the full stack to drive end-to-end model performance, from vLLM/SGLang down to ISA-level performance tuning, and can perform rigorous quantitative analysis to drive measurable improvements. You thrive in highly technical environments, enjoy solving complex performance problems, and are excited to collaborate across model deployment, compiler, runtime, and hardware teams. Most importantly, you are curious, hands-on, and willing to learn and work across boundaries.
Job Type: Full-time
Career Level: Mid Level