GPU Stack CI Infrastructure Engineer

Advanced Micro Devices, Inc.
San Jose, CA
Hybrid

About The Position

AMD's AI software stack is moving fast, and keeping pace means shipping complete, validated GPU stack releases to customers as quickly as the software can evolve. Getting there requires validating not just ROCm but the full recipe: firmware, kernel driver, and ROCm together, across multiple GPU products, with confidence that what ships to customers actually works as a coherent system.

We're building the CI infrastructure that makes that possible: automated pre-submit validation, nightly integration builds across the full stack, and a Last Known Good (LKG) manifest that gives every engineer a trusted baseline to build from. The goal is a release pipeline where validated, customer-ready GPU stack recipes can be produced on demand rather than assembled manually.

You'd own that CI system end-to-end: provisioning the runners, integrating the hardware test pipeline, coordinating with IT on real infrastructure constraints, and shipping the automation that enables the whole thing. It's high-ownership work with direct impact on AMD's ability to move fast for customers.

Requirements

  • 8+ years of software engineering or infrastructure engineering experience
  • Strong coding ability — you'll be writing automation, not clicking through UIs
  • Deep knowledge of CI/CD pipeline design and GitHub Actions (or comparable platform)
  • Experience provisioning and maintaining self-hosted runners or build infrastructure at scale
  • Comfort navigating complex infrastructure environments — network permissions, NFS mounts, firewall rules, signing pipelines
  • Strong problem-solving and communication skills across engineering and IT stakeholders

Nice To Haves

  • Fluency with agentic AI workflows (Cursor, Claude, Copilot, etc.) as a force multiplier for engineering throughput
  • Experience setting up CI infrastructure on AWS (EC2-based runners, IAM, networking)
  • Familiarity with firmware signing pipelines and firmware release processes — understanding how signing fits into a CI workflow is a meaningful advantage in this environment
  • Familiarity with firmware or kernel build environments and their infrastructure constraints
  • Experience integrating CI systems with hardware-in-the-loop testing

Responsibilities

  • Get nightly CI running, fast — the first priority is standing up nightly integration builds for the unified GPU stack. You'll own that end-to-end: the pipeline, the scheduling, the result reporting, and the Last Known Good (LKG) manifest promotion that gives every engineer a trusted baseline.
  • Solve the runner provisioning problem — standard cloud runners can't build firmware. You'll work directly with IT to provision GitHub Actions self-hosted runners that handle the real constraints: NFS mounts for host-side tools, code-signing pipelines, network access, and permissions that firmware builds require. This is the kind of infrastructure work that requires both technical depth and the ability to get things done across organizational boundaries.
  • Build toward AWS-aligned infrastructure — the broader GPU stack CI is moving toward AWS-hosted runners. You'll make sure UnderTheRock's infrastructure is consistent with that direction from the start, rather than creating something that has to be rebuilt later.
  • Own the CI, not just contribute to it — nobody else on the team is currently focused on CI. You'll be setting the direction, making the tooling choices, and shipping the automation that everything else depends on.

Benefits

  • AMD benefits at a glance.