Senior Software Development Engineer – SGLang and Inference Stack

Advanced Micro Devices, Inc. – Santa Clara, CA

About The Position

At AMD, our mission is to build great products that accelerate next-generation computing experiences, from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you'll discover that the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges, striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

The Role

As a core member of the team, you will play a pivotal role in optimizing and developing deep learning frameworks for AMD GPUs. Your work will be instrumental in enhancing GPU kernel performance, accelerating deep learning models, and enabling reinforcement learning (RL) training and state-of-the-art (SOTA) LLM and multimodal inference at scale across multi-GPU and multi-node systems. You will collaborate with internal GPU software teams and engage with open-source communities to integrate and optimize cutting-edge compiler technologies and drive upstream contributions that benefit AMD's AI software ecosystem.

The Person

A skilled engineer with strong technical and analytical expertise in GPGPU C++, Triton, TileLang, or DSL development within Linux environments. The ideal candidate will thrive in both collaborative team settings and independent work, with the ability to define goals, manage development efforts, and deliver high-quality solutions. Strong problem-solving skills, a proactive approach, and a keen understanding of software engineering best practices are essential.

Requirements

  • Strong Programming Skills: Proficient in C++ and/or Python (PyTorch, Triton, TileLang), with demonstrated ability to code, debug, profile, and optimize performance-critical code.
  • GPGPU Computing: Working knowledge of HIP, CUDA, Triton, TileLang, or other GPU programming models; experience with GCN/CDNA architectures preferred.

Nice To Haves

  • SGLang and LLM Optimization: Hands-on experience with SGLang or similar LLM inference frameworks is highly preferred.
  • Compiler and GPU Architecture Knowledge: Background in compiler design or familiarity with technologies like LLVM, MLIR, or ROCm is a plus.
  • Heterogeneous System Workloads: Experience running and scaling workloads on large-scale, heterogeneous clusters (CPU + GPU) using distributed training or inference strategies.
  • AI Framework Integration: Experience contributing to or integrating optimizations into deep learning frameworks such as PyTorch, SGLang, vLLM, Slime, or VeRL.

Responsibilities

  • Optimize Deep Learning Frameworks: Enhance performance of frameworks like TensorFlow, PyTorch, and SGLang on AMD GPUs via upstream contributions in open-source repositories.
  • Develop and Optimize Deep Learning Models: Profile, analyze, modify, and tune large-scale training and inference models for optimal performance on AMD hardware. Provide day-0 support for SOTA models such as DeepSeek 3.2 and Kimi K2.5.
  • GPU Kernel Development: Design, implement, and optimize high-performance GPU kernels using HIP, Triton, TileLang, or other DSLs to improve AI operator efficiency.
  • Collaborate with GPU Library and Compiler Teams: Work closely with internal compiler and GPU math library teams to integrate kernel-level optimizations and align them with full-stack performance goals. Initiate and support codegen optimizations at multiple levels of the stack.
  • Contribute to SGLang Development: Support optimization, feature development, and scaling of the SGLang framework across AMD GPU platforms for LLM and multimodal serving and RL training.
  • Distributed System Optimization: Tune and scale performance across both multi-GPU (scale-up) and multi-node (scale-out) environments, including inference parallelism, prefill-decode disaggregation, wide expert parallelism (Wide-EP), and collective communication strategies.
  • Graph Compiler Integration: Integrate and optimize runtime execution through graph compilers such as XLA, TorchDynamo, or custom pipelines.
  • Open-Source Collaboration: Partner with external maintainers to understand framework needs, propose optimizations, and upstream contributions effectively.
  • Apply Engineering Best Practices: Leverage modern software engineering practices in debugging, profiling, test-driven development, and CI/CD integration.

Benefits

  • AMD benefits at a glance.