AI Model Optimization Engineer

Advanced Micro Devices, Inc.
Santa Clara, CA
Hybrid

About The Position

At AMD, our mission is to build great products that accelerate next-generation computing experiences, from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you’ll discover the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges, striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

The Role

The AMD AI Group is looking for a Senior Software Development Engineer to own the end-to-end model execution stack on AMD Instinct GPUs, spanning training infrastructure at scale and high-performance inference serving. This role demands someone who has shipped LLMs on real hardware, written GPU kernels that moved production metrics, and built the systems infrastructure (orchestration, storage, monitoring) that keeps thousands of GPUs productive. You will be instrumental in ensuring AMD GPUs are first-class citizens for frontier model training and inference across current and next-generation Instinct accelerators.

Requirements

  • Strong industry experience shipping production AI/ML infrastructure, with hands-on work spanning both training and inference.
  • Proven experience running LLMs on AMD GPUs (ROCm, HIP) or equivalent depth with CUDA, with strong willingness to work on AMD platforms.
  • Track record of writing custom GPU kernels (CUDA, HIP, or Triton) that delivered measurable throughput improvements in production systems.
  • Strong systems engineering skills: Kubernetes, container orchestration, distributed storage, and GPU cluster management at scale (1,000+ GPUs).
  • Proficiency in Python and at least one systems language (C++, Rust, Go, C#) with production-quality software engineering practices.
  • Deep understanding of LLM architecture internals: attention mechanisms, KV-cache, quantization schemes, and distributed parallelism strategies (tensor, pipeline, expert parallelism).
  • Expert knowledge of and hands-on experience in C and C++.
  • Solid understanding of object-oriented design principles.
  • Solid understanding of software engineering principles, data structures, algorithms, operating systems concepts, and multithreaded programming.
  • Excellent design and code development skills; familiarity with Linux and modern software development tools and techniques.
  • Good analytical and problem-solving skills.
  • Bachelor’s or Master’s degree in Computer/Software Engineering, Computer Science, or a related technical discipline.
  • This role is not eligible for visa sponsorship.

Nice To Haves

  • Direct experience enabling frontier models (GPT-4 class) on AMD Instinct hardware end-to-end.

Responsibilities

  • Enable and optimize large-scale model training (LLMs, VLMs, MoE architectures) on AMD Instinct GPU clusters, ensuring correctness, reproducibility, and competitive throughput.
  • Build and maintain training infrastructure: job orchestration, distributed checkpointing, data loading pipelines, and storage optimization for multi-thousand GPU clusters on Kubernetes.
  • Debug and resolve training-specific issues including gradient norm explosions, non-deterministic behavior across GPU generations, and compute-communication overlap in distributed training (FSDP, DeepSpeed, Megatron-LM).
  • Optimize RCCL collective communication patterns for training workloads, including all-reduce, all-gather, and reduce-scatter across multi-node topologies.
  • Develop monitoring, alerting, and compliance infrastructure to ensure training cluster health, data security, and SLA adherence at scale.
  • Write and optimize high-performance GPU kernels (GEMM, attention, quantized matmul, GPTQ/AWQ) in HIP, Triton, and MLIR targeting AMD Instinct architectures, with demonstrated ability to outperform open-source baselines.
  • Drive end-to-end inference enablement on new AMD GPU silicon: be among the first to get frontier models running on each new Instinct generation, creating reproducible guides and reference implementations.
  • Optimize inference serving frameworks (vLLM, SGLang, TorchServe) for AMD GPUs: batching strategies, KV-cache management, speculative decoding, and continuous batching for production throughput/latency targets.
  • Develop novel approaches to inference acceleration, including bio-inspired algorithms, SLM-assisted batching, and custom scheduling strategies that exploit AMD hardware characteristics.
  • Build quantization pipelines (FP8, FP6, FP4, GPTQ, AWQ) for production model deployment, ensuring quality-performance tradeoffs are well-characterized across AMD GPU generations.
  • Design observability and debugging tooling: log analysis pipelines, anomaly detection systems, and failure correlation tools for large-scale GPU clusters processing hundreds of terabytes of telemetry per month.
  • Collaborate with AMD silicon architecture teams to provide software feedback on next-generation Instinct GPU designs for both training and inference workloads.
  • Contribute to the open ROCm ecosystem and AMD’s developer experience: SDKs, CI dashboards, documentation, and developer cloud enablement.
  • Collaborate closely with multiple teams to deliver key planning solutions and the technology to support them.
  • Contribute to the design and implementation of future architecture for a highly scalable, durable, and innovative system.
  • Work closely with development teams and project managers to drive results.

Benefits

  • AMD benefits at a glance.