AMD • Posted 3 days ago
Full-time • Mid Level
Santa Clara, CA
5,001-10,000 employees

WHAT YOU DO AT AMD CHANGES EVERYTHING

At AMD, our mission is to build great products that accelerate next-generation computing experiences, from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges, striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

THE ROLE:

As a core member of the team, you will play a pivotal role in optimizing and developing deep learning frameworks for AMD GPUs. Your experience will be critical in enhancing GPU kernels and deep learning models, and in tuning inference performance across multi-GPU and multi-node systems through popular open-source frameworks such as vLLM and SGLang, as well as internal inference platforms. You will engage with both internal framework teams and open-source maintainers to ensure seamless integration of optimizations, using cutting-edge technologies and advanced engineering principles to drive continuous improvement.

THE PERSON:

A skilled engineer with strong technical and analytical expertise in Python development within Linux environments. The ideal candidate thrives in both collaborative team settings and independent work, with the ability to define goals, manage development efforts, and deliver high-quality solutions. Strong problem-solving skills, a proactive approach, and a keen understanding of software engineering best practices are essential.

KEY RESPONSIBILITIES:

  • Optimize Deep Learning Frameworks: Enhance and optimize frameworks such as PyTorch, vLLM, and SGLang for AMD GPUs in open-source repositories.
  • Design and Scale Inference: Develop multi-GPU inference strategies that combine tensor, pipeline, and expert parallelism (TP/PP/EP hybrids); see the configuration sketch after this list.
  • Develop & Optimize Models: Design and optimize deep learning models specifically for AMD GPU performance.
  • Collaborate with GPU Library Teams: Work closely with internal teams to analyze and improve training and inference performance on AMD GPUs.
  • Collaborate with Open-Source Maintainers: Engage with framework maintainers to ensure code changes are aligned with requirements and integrated upstream.
  • Work in Distributed Computing Environments: Optimize deep learning performance on both scale-up (multi-GPU) and scale-out (multi-node) systems.
  • Utilize Cutting-Edge Compiler Tech: Leverage advanced compiler technologies to improve deep learning performance.
  • Optimize Deep Learning Pipeline: Enhance the full pipeline, including integrating graph compilers.
  • Software Engineering Best Practices: Apply sound engineering principles to ensure robust, maintainable solutions.

PREFERRED EXPERIENCE:

  • Kernel & Inference Frameworks: Strong background in GPU kernel development and LLM inference frameworks.
  • Inference Stack Knowledge: Hands-on understanding of SGLang internals or similar stacks such as vLLM and FasterTransformer.
  • Distributed & Open-Source Execution: Solid experience with distributed inference scaling and a proven record of contributing to upstream open-source projects.
  • Deep Learning Integration: Significant experience integrating optimized GPU performance into machine learning frameworks (e.g., TensorFlow, PyTorch) to accelerate model training and inference, with a focus on scaling and throughput.
  • Software Engineering: Expert in Python and C++, with experience in debugging, performance tuning, and test design to ensure high-quality, maintainable software solutions.
  • High-Performance Computing: Solid experience running large-scale workloads on heterogeneous compute clusters, optimizing for efficiency and scalability.
  • Compiler Optimization: Foundational understanding of compiler theory and tools like LLVM and ROCm for kernel and system performance optimization.
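
To make the multi-GPU inference responsibility above concrete, here is a minimal sketch of hybrid tensor/pipeline parallelism using vLLM's offline API. The model name and parallel sizes are illustrative assumptions, not requirements from this posting, and expert parallelism (an additional axis used for MoE models) is omitted for brevity.

```python
# Hypothetical sketch: serving an LLM across multiple GPUs with vLLM.
# Model name and parallel sizes are placeholders, not from the posting.
from vllm import LLM, SamplingParams

# Tensor parallelism (TP) shards each layer's weights across GPUs;
# pipeline parallelism (PP) splits the layer stack across GPU groups.
# Here, 4 TP ranks x 2 PP stages = 8 GPUs per model replica.
llm = LLM(
    model="meta-llama/Llama-3.1-70B-Instruct",  # placeholder model
    tensor_parallel_size=4,
    pipeline_parallel_size=2,
)

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["What does the ROCm stack provide?"], params)
for out in outputs:
    print(out.outputs[0].text)
```

vLLM ships with ROCm support, so the same configuration approach applies on AMD GPUs, assuming a ROCm build of the library.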