AI Infrastructure Engineer

Advanced Micro Devices, Inc. | San Jose, CA
Hybrid

About The Position

At AMD, our mission is to build great products that accelerate next-generation computing experiences, from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges, striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

The Person

We are seeking a DevOps / Platform Engineer to join our team building and operating the large-scale GPU compute infrastructure that powers AI and ML workloads. The ideal candidate is passionate about software engineering and has the leadership skills to independently deliver multi-quarter projects. They communicate effectively and work well with peers across our larger organization. Finally, they aren't afraid of joining a team operating in startup mode within a larger company, and are willing to jump in and help in areas adjacent to their main project as needed.

Requirements

  • 5+ years of experience in DevOps, Platform, or Infrastructure Engineering.
  • Deep hands-on experience with Kubernetes and container orchestration at scale.
  • Proven ability to design and deliver platform features that serve internal customers or developer teams.
  • Experience building developer-facing platforms or internal developer portals (e.g., custom workflow tooling).

Nice To Haves

  • Hands-on experience in storage or network engineering within Kubernetes environments (e.g., CSI drivers, dynamic provisioning, CNI plugins, or network policy).
  • Experience with Infrastructure as Code tools like Terraform.
  • Background in HPC, Slurm, or GPU-based compute systems for ML/AI workloads.
  • Practical experience with monitoring and observability tools (Prometheus, Grafana, Loki, etc.).
  • Understanding of machine learning frameworks (PyTorch, vLLM, SGLang, etc.).

Responsibilities

  • Build and extend platform capabilities to enable new classes of workloads (e.g., interactive development pods, CI pipelines, inference services, benchmarking jobs).
  • Design and operate scalable orchestration systems using Kubernetes across both on-prem and multi-cloud environments.
  • Develop platform features such as secret management, configuration management, and deployment automation for customers.
  • Partner with development teams to extend the GPU developer platform with features, APIs, templates, and self-service workflows that streamline job orchestration and environment management.
  • Manage service lifecycle within Kubernetes using Helm and GitOps workflows (e.g., ArgoCD or Flux).
  • Apply expertise in storage and networking to design and integrate CSI drivers, persistent volumes, and network policies that enable high-performance GPU workloads.

Benefits

  • AMD benefits at a glance.


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Industry: Computer and Electronic Product Manufacturing
  • Education Level: No Education Listed
  • Number of Employees: 5,001-10,000 employees
