AI Infrastructure / Platform Engineer - GPU compute

Advanced Micro Devices, Inc.
San Jose, CA (Hybrid)

About The Position

At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

We are seeking an AI Infrastructure / Platform Engineer to join our team building and operating the large-scale GPU compute infrastructure that powers AI and ML workloads. The ideal candidate is passionate about software engineering and has the leadership skills to independently deliver on multiple projects. They should communicate clearly and work effectively with peers across our larger organization.

Requirements

  • Experience in platform, infrastructure, or DevOps engineering.
  • Deep hands-on experience with Kubernetes and container orchestration at scale.
  • Proven ability to design and deliver platform features that serve internal customers or developer teams.
  • Experience building developer-facing platforms or internal developer portals (e.g., custom workflow tooling).

Nice To Haves

  • Hands-on experience in storage or network engineering within Kubernetes environments (e.g., CSI drivers, dynamic provisioning, CNI plugins, or network policy).
  • Experience with Infrastructure as Code tools like Terraform.
  • Background in HPC, Slurm, or GPU-based compute systems for ML/AI workloads.
  • Practical experience with monitoring and observability tools (Prometheus, Grafana, Loki, etc.).
  • Understanding of machine learning frameworks (PyTorch, vLLM, SGLang, etc.).
  • Experience with high-performance networking and InfiniBand/RDMA tuning.

Responsibilities

  • Build and extend platform capabilities to enable different classes of workloads (e.g., large-scale AI training, inference).
  • Design and operate scalable orchestration systems using Kubernetes across both on-prem and multi-cloud environments.
  • Develop platform features such as pre-flight health checks, job-status monitoring, and post-mortem analysis.
  • Partner with development teams to extend the GPU developer platform with features, APIs, templates, and self-service workflows that streamline job orchestration and environment management.
  • Apply expertise in storage and networking to design and integrate CSI drivers, persistent volumes, and network policies that enable high-performance GPU workloads.
  • Provide production support for large-scale GPU clusters.

Benefits

  • AMD benefits at a glance.