Senior GPU Kubernetes Engineer

Advanced Micro Devices, Inc.
Santa Clara, CA

About The Position

At AMD, our mission is to build great products that accelerate next-generation computing experiences, from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you’ll discover the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges, striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

The Role

AMD’s Software and Solutions Team is seeking a Senior GPU Kubernetes Engineer to lead GPU operator development, advanced scheduling strategies, and deployment automation for the AMD Enterprise AI Suite. This role requires strong Kubernetes engineering expertise, a deep understanding of GPU resource management, and hands-on experience optimizing AI workloads in cloud and on-prem environments. The Senior GPU Kubernetes Engineer will help define next-generation GPU orchestration, improve workload predictability and utilization, and design scalable automation for distributed inference, fine-tuning, and LLM-based services. The work spans operator development, cluster optimization, autoscaling logic, Helm-based deployment patterns, and integration with AMD’s GPU software stack.

The Person

A highly motivated and passionate professional with deep expertise in Kubernetes, GPU acceleration, and cloud-native deployment systems, and a proven track record in problem-solving, collaboration, and technical execution.

Requirements

  • Strong hands-on experience with Kubernetes GPU workloads, Operator/CRD development, scheduling plugins, and resource managers.
  • Deep understanding of NUMA, GPU topology, affinity/anti-affinity rules, and multi-GPU inference strategies is essential.
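To make the GPU scheduling concepts above concrete, here is a minimal sketch of a pod that requests AMD GPUs and steers placement with a node-affinity rule. The `amd.com/gpu` resource name is the one exposed by AMD's Kubernetes device plugin; the topology label in the affinity rule is hypothetical, since real topology labels depend on the cluster's labeling scheme:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-gpu-inference
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              # Hypothetical label for illustration; actual NUMA/topology
              # labels depend on how the cluster's nodes are labeled.
              - key: gpu.example.com/numa-aligned
                operator: In
                values: ["true"]
  containers:
    - name: inference
      image: rocm/pytorch:latest
      resources:
        limits:
          amd.com/gpu: 4  # GPU resource exposed by the AMD device plugin
```

In practice, topology-aware placement also involves the kubelet's Topology Manager and device-plugin topology hints rather than node labels alone; this sketch only shows the pod-spec side of the picture.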

Nice To Haves

  • Proficiency with Helm, Kustomize, Prometheus, Grafana, FluentD/FluentBit, and ArgoCD is valuable.
  • Familiarity with distributed inference frameworks such as vLLM, Triton, KServe, or Ray, along with experience deploying LLM workloads, is highly desirable.
  • Knowledge of ROCm, AMD MI300/MI325 platforms, OpenShift, KubeVirt, or enterprise Kubernetes systems provides a strong advantage.
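As one example of how the tooling above fits together, GitOps-style delivery of a GPU operator chart via ArgoCD might look like the following sketch. The chart name, version, and repository URL are illustrative assumptions, not authoritative values:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gpu-operator
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/helm-charts  # illustrative chart repo URL
    chart: gpu-operator                       # illustrative chart name
    targetRevision: 1.0.0                     # illustrative version
  destination:
    server: https://kubernetes.default.svc
    namespace: kube-amd-gpu
  syncPolicy:
    automated:
      prune: true     # remove resources deleted from the chart
      selfHeal: true  # revert manual drift to the declared state
```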

Responsibilities

  • Lead GPU Operator development; implement topology-aware scheduling policies; optimize NUMA placement, PCIe locality, and memory bandwidth; and ensure robust integration with AMD’s ROCm drivers and runtimes.
  • Design autoscaling logic for GPU-heavy inference and fine-tuning workloads, build monitoring and telemetry instrumentation, strengthen workload reliability, and develop scalable Helm charts and automation workflows.
  • Collaborate closely with ROCm, platform, performance, and model teams to ensure end-to-end integration quality; troubleshoot across GPU runtimes, Kubernetes layers, and AI frameworks; influence AMD’s Kubernetes roadmap; and support deployment models across customer, partner, and ecosystem environments.
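The autoscaling responsibility above can be pictured as a HorizontalPodAutoscaler driven by a GPU-derived metric. In this sketch the metric name `gpu_utilization` is hypothetical; it would be served through a custom-metrics adapter fed by Prometheus and a GPU exporter:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: llm-inference
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: llm-inference
  minReplicas: 1
  maxReplicas: 8
  metrics:
    - type: Pods
      pods:
        metric:
          name: gpu_utilization  # hypothetical custom metric via an adapter
        target:
          type: AverageValue
          averageValue: "80"     # scale out above ~80% average utilization
```

For GPU-heavy inference, scale-up latency is dominated by model load time, so autoscaling logic of this kind is typically paired with conservative scale-down behavior to avoid thrashing.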

Benefits

  • AMD benefits at a glance.