At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

THE ROLE
We are seeking a Principal GenAI Inference Optimization Engineer to join our Models and Applications team. This role focuses on improving the performance, efficiency, and scalability of generative AI inference workloads on AMD GPU platforms. You will optimize latency, throughput, and cost efficiency for real-world deployment of large-scale models, working across the software-hardware stack.

THE PERSON
The ideal candidate is a strong technical contributor with expertise in GenAI inference optimization, GPU performance, and large-scale serving systems. You have a solid understanding of GPU architecture, memory systems, and communication patterns, and can apply this knowledge to improve inference efficiency. You are comfortable working across multiple layers—from kernels and runtimes to frameworks and serving systems—and can independently drive optimization efforts while collaborating with cross-functional teams.
Job Type: Full-time
Career Level: Principal
Number of Employees: 5,001-10,000