Principal GenAI Inference Optimization Engineer

Advanced Micro Devices, Inc.
San Jose, CA
Hybrid

About The Position

We are seeking a Principal GenAI Inference Optimization Engineer to join our Models and Applications team. This role focuses on improving the performance, efficiency, and scalability of generative AI inference workloads on AMD GPU platforms. You will work across the software-hardware stack to optimize latency, throughput, and cost efficiency for real-world deployment of large-scale models.

Requirements

  • Strong technical contributor with expertise in GenAI inference optimization, GPU performance, and large-scale serving systems.
  • Solid understanding of GPU architecture and performance fundamentals (compute, memory hierarchy, and interconnects such as PCIe, Infinity Fabric, and RDMA), with the ability to apply this knowledge to improve inference efficiency.
  • Comfortable working across multiple layers, from kernels and runtimes to frameworks and serving systems, and able to drive optimization efforts independently while collaborating with cross-functional teams.
  • Experience with GenAI inference optimization techniques (e.g., quantization, KV-cache optimization, batching); a KV-cache sketch follows this list.
  • Hands-on experience with inference/serving frameworks such as vLLM, SGLang, Triton, TensorRT-LLM, or similar.
  • Experience working on LLM or multimodal inference workloads.
  • Familiarity with distributed systems and serving architectures.
  • Experience with ML frameworks (PyTorch, JAX, or TensorFlow), especially for inference.
  • Proficiency in Python and at least one systems language (C++/CUDA/HIP).
  • Experience with profiling, debugging, and performance tuning tools.
  • Ability to work collaboratively across teams and deliver impactful optimizations.
  • B.S., M.S., or Ph.D. in Computer Science, Computer Engineering, or a related field preferred; equivalent industry experience also considered.
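
For illustration only: a minimal PyTorch sketch of the KV-cache decoding pattern referenced in the list above. The names here (decode_step, kv_cache) are hypothetical and not tied to vLLM, SGLang, or any other framework; the point is that each decode step appends one token's keys and values rather than recomputing them for the whole sequence.

    import torch

    def decode_step(q, k_new, v_new, kv_cache):
        # Append the newest token's key/value to the cache instead of
        # recomputing K/V for the entire sequence on every step.
        kv_cache["k"] = torch.cat([kv_cache["k"], k_new], dim=0)
        kv_cache["v"] = torch.cat([kv_cache["v"], v_new], dim=0)
        scale = q.shape[-1] ** -0.5
        # Attend over the full cached history: (1, t) @ (t, d) -> (1, d)
        attn = torch.softmax(q @ kv_cache["k"].T * scale, dim=-1)
        return attn @ kv_cache["v"]

    d = 64
    cache = {"k": torch.empty(0, d), "v": torch.empty(0, d)}
    for _ in range(8):  # eight autoregressive decode steps
        q, k, v = (torch.randn(1, d) for _ in range(3))
        out = decode_step(q, k, v, cache)
    print(cache["k"].shape)  # torch.Size([8, 64])

Cached decoding trades memory for compute: the cache grows linearly with sequence length, which is why KV-cache memory management is a central optimization target in serving systems.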

Responsibilities

  • Optimize performance of GenAI inference workloads on AMD GPU platforms across single-node and distributed environments.
  • Improve latency, throughput, and cost efficiency for LLM and multimodal model serving in production.
  • Analyze and resolve bottlenecks across compute, memory, and communication (e.g., kernel efficiency, KV-cache usage, memory bandwidth, scheduling).
  • Contribute to cross-stack optimizations spanning kernels, runtimes, communication libraries, and inference/serving frameworks (e.g., vLLM, SGLang, Triton, or similar systems).
  • Implement and evaluate inference optimization techniques such as batching strategies, quantization, prefix caching, and speculative decoding; a quantization sketch follows this list.
  • Support development and optimization of scalable serving systems, including request scheduling and resource utilization.
  • Develop and use profiling, benchmarking, and performance analysis tools for inference workloads.
  • Collaborate with hardware, compiler, and framework teams to improve overall system performance.
  • Contribute to internal tools and, where applicable, open-source projects for inference optimization on AMD platforms.
  • Document best practices and contribute to performance guidelines for GenAI deployment.
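
As a concrete illustration of the quantization work item above, a minimal sketch of symmetric per-tensor int8 weight quantization in PyTorch. The function names are hypothetical; production pipelines typically use per-channel scales and activation calibration rather than this simplified form.

    import torch

    def quantize_int8(w):
        # Symmetric per-tensor quantization: map the largest-magnitude
        # weight to 127 and scale everything else proportionally.
        scale = w.abs().max() / 127.0
        q = torch.clamp((w / scale).round(), -128, 127).to(torch.int8)
        return q, scale

    def dequantize_int8(q, scale):
        return q.to(torch.float32) * scale

    w = torch.randn(1024, 1024)
    q, s = quantize_int8(w)
    err = (dequantize_int8(q, s) - w).abs().max()
    print(f"int8 weights, max abs reconstruction error: {err:.4f}")

Storing weights in int8 halves memory traffic relative to fp16, which directly improves the memory-bandwidth-bound decode phase of LLM inference.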

Benefits

  • AMD benefits at a glance

What This Job Offers

  • Job Type: Full-time
  • Career Level: Principal
  • Number of Employees: 5,001-10,000
