Advanced Micro Devices · Posted 3 months ago
Austin, TX
5,001-10,000 employees
Computer and Electronic Product Manufacturing

At AMD, we care deeply about transforming lives with technology to enrich our industry, our communities, and the world. Our mission is to build great products that accelerate next-generation computing experiences: the building blocks for the data center, artificial intelligence, PCs, gaming, and embedded systems. Underpinning our mission is the AMD culture, where we push the limits of innovation to solve the world's most important challenges. We strive for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives.

As a core member of the team, you will play a pivotal role in developing and optimizing deep learning frameworks for AMD GPUs. Your experience will be critical in enhancing GPU kernels, deep learning models, and training and inference performance across multi-GPU and multi-node systems. You will engage with both internal GPU library teams and open-source maintainers to ensure seamless integration of optimizations, utilizing cutting-edge compiler technologies and advanced engineering principles to drive continuous improvement.

  • Optimize Deep Learning Frameworks: Enhance and optimize frameworks like TensorFlow and PyTorch for AMD GPUs in open-source repositories.
  • Develop GPU Kernels: Create and optimize GPU kernels to maximize performance for specific AI operations (a minimal HIP sketch follows this list).
  • Develop & Optimize Models: Design and optimize deep learning models specifically for AMD GPU performance.
  • Collaborate with GPU Library Teams: Work closely with internal teams to analyze and improve training and inference performance on AMD GPUs.
  • Collaborate with Open-Source Maintainers: Engage with framework maintainers to ensure code changes are aligned with requirements and integrated upstream.
  • Work in Distributed Computing Environments: Optimize deep learning performance on both scale-up (multi-GPU) and scale-out (multi-node) systems.
  • Utilize Cutting-Edge Compiler Tech: Leverage advanced compiler technologies to improve deep learning performance.
  • Optimize Deep Learning Pipeline: Enhance the full pipeline, including integrating graph compilers.
  • Software Engineering Best Practices: Apply sound engineering principles to ensure robust, maintainable solutions.
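
To give a concrete flavor of the kernel-level work above, the sketch below shows a minimal fused elementwise multiply-add kernel written in HIP. It is illustrative only: the kernel name, problem size, and launch geometry are placeholders, and a production kernel would be tuned per architecture and validated against library baselines.

```cpp
// Minimal HIP sketch: fuse out[i] = a[i] * b[i] + c[i] into one kernel
// instead of launching separate multiply and add kernels.
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void fused_muladd(const float* a, const float* b,
                             const float* c, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        out[i] = a[i] * b[i] + c[i];  // one global read/write pass
    }
}

int main() {
    const int n = 1 << 20;                       // placeholder problem size
    const size_t bytes = n * sizeof(float);
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n, 3.0f), hout(n);

    float *da, *db, *dc, *dout;
    hipMalloc(reinterpret_cast<void**>(&da), bytes);
    hipMalloc(reinterpret_cast<void**>(&db), bytes);
    hipMalloc(reinterpret_cast<void**>(&dc), bytes);
    hipMalloc(reinterpret_cast<void**>(&dout), bytes);

    hipMemcpy(da, ha.data(), bytes, hipMemcpyHostToDevice);
    hipMemcpy(db, hb.data(), bytes, hipMemcpyHostToDevice);
    hipMemcpy(dc, hc.data(), bytes, hipMemcpyHostToDevice);

    const int block = 256;                       // placeholder launch geometry
    const int grid = (n + block - 1) / block;
    hipLaunchKernelGGL(fused_muladd, dim3(grid), dim3(block), 0, 0,
                       da, db, dc, dout, n);
    hipDeviceSynchronize();

    hipMemcpy(hout.data(), dout, bytes, hipMemcpyDeviceToHost);
    printf("out[0] = %.1f\n", hout[0]);          // expect 1*2 + 3 = 5.0

    hipFree(da); hipFree(db); hipFree(dc); hipFree(dout);
    return 0;
}
```

In practice, work like this is profiled with tools such as rocprof and compared against the unfused baseline before being proposed upstream.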

Preferred experience:

  • Skilled engineer with strong technical and analytical expertise in C++ development in Linux environments.
  • Strong problem-solving skills and a proactive approach.
  • Keen understanding of software engineering best practices.
  • Experienced in designing and optimizing GPU kernels for deep learning on AMD GPUs using HIP, CUDA, and assembly (ASM).
  • Strong knowledge of AMD architectures (GCN, RDNA) and low-level programming to maximize performance for AI operations.
  • Experienced in integrating optimized GPU kernels and libraries into machine learning frameworks (e.g., TensorFlow, PyTorch); see the operator-registration sketch after this list.
  • Skilled in Python and C++, with experience in debugging, performance tuning, and test design.
  • Solid experience in running large-scale workloads on heterogeneous compute clusters.
  • Foundational understanding of compiler theory and tools like LLVM and ROCm.
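
As one illustration of the framework-integration experience described above, the sketch below registers a custom operator with PyTorch's C++ operator registration API. The myops namespace, the operator name, and the naive tensor-math fallback are placeholders; a real integration would dispatch to a tuned HIP kernel (PyTorch's ROCm builds route HIP through the CUDA dispatch key).

```cpp
// Sketch: exposing a custom fused operator to PyTorch from C++.
// The "myops" namespace and the fallback implementation are placeholders.
#include <torch/extension.h>
#include <torch/library.h>

// Reference implementation; a tuned HIP kernel would back the GPU path.
torch::Tensor fused_muladd(const torch::Tensor& a,
                           const torch::Tensor& b,
                           const torch::Tensor& c) {
    return a * b + c;
}

// Declare the operator schema once...
TORCH_LIBRARY(myops, m) {
    m.def("fused_muladd(Tensor a, Tensor b, Tensor c) -> Tensor");
}

// ...then register an implementation for a dispatch key. A production
// integration would register device-specific kernels instead.
TORCH_LIBRARY_IMPL(myops, CompositeExplicitAutograd, m) {
    m.impl("fused_muladd", &fused_muladd);
}
```

Built with torch.utils.cpp_extension.load (ROCm builds of PyTorch hipify CUDA-style sources automatically), the operator becomes callable from Python as torch.ops.myops.fused_muladd.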

AMD benefits at a glance.