AI/HPC Cluster Administrator

Advanced Micro Devices, Inc. | Austin, TX

About The Position

At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you’ll discover the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

The Role

We are looking for a Senior Compute Cluster Administrator responsible for operating and supporting compute clusters used in upcoming datacenter buildouts leveraging AMD Instinct products. This role owns Day Two and beyond operations, encompassing both proactive maintenance and reactive support across complex, highly technical environments. This is an operational role supporting a demanding user base of AI server hardware, software, and firmware developers. You will manage a mix of R&D lab and production lab environments, each with distinct release cycles, stability requirements, and operational expectations. The role requires close collaboration with IT, Infosec, infrastructure automation teams, and deeply technical end users to ensure service quality, delivery commitments, and governance standards are consistently met.

Requirements

  • Hands‑on experience administering or supporting HPC clusters in production, research, or academic environments
  • Practical experience working as an HPC user combined with Linux system administration in enterprise or lab environments
  • Background in software development combined with deep Linux systems exposure in server or infrastructure contexts
  • Demonstrated intermediate to advanced Linux expertise; industry‑recognized certifications are valued
  • Strong understanding of networking fundamentals, including the OSI model, multi‑homed systems, firewall troubleshooting, and high‑speed interconnects
  • Willingness to experiment with open‑source and emerging technologies that may not conform to established standards
  • Experience supporting infrastructure services such as DNS, DHCP, BOOTP, PXE, TFTP, NTP, and PAM
  • Understanding of interprocess communication and familiarity with MPI implementations such as OpenMPI or MPICH
  • Proficiency with Linux troubleshooting tools such as nmap, gdb, lsof, sar, and server management interfaces including IPMI, iDRAC, and iLO
  • Strong written communication skills with the ability to produce clear technical documentation
  • Experience developing automation using Python and/or Ansible
  • Familiarity with version control systems such as Git
  • Self‑directed, analytical, dependable, and comfortable working both independently and in a team‑based environment
  • Bachelor’s or Master’s degree in Computer Science, Computer Engineering, or a related technical discipline

Nice To Haves

  • Experience with RDMA; familiarity with PCIe, I2C, compiler optimization, or other low‑level system components
  • Working knowledge of virtualization, VLANs, and directory services sufficient to collaborate effectively with partner teams

Responsibilities

  • Work directly with tenants and stakeholders to maximize service quality, utilization, and availability of managed compute clusters
  • Collaborate with highly technical users working deep within AMD’s Instinct platform (e.g., ROCm) to troubleshoot misconfigurations impacting HPC performance
  • Lead the resolution of complex issues during new deployments and ongoing operations
  • Partner with hardware vendors on technical escalations involving third‑party OEM platforms and coordinate maintenance cycles aligned with upstream releases
  • Support multiple Linux distributions across Red Hat and Ubuntu/Debian families
  • Act as a subject matter expert in one or more cluster scheduling technologies such as Slurm, LSF, Sun Grid Engine, OpenLava, or Kubernetes
  • Compare configurations and behaviors across heterogeneous clusters within AMD’s compute estate
  • Engage with emerging technologies where formal documentation may be limited, including white‑box platforms and pre‑beta hardware
  • Maintain and evolve compute images using automated CI/CD pipelines, or deploy software manually where automation is not available
  • Monitor cluster health, performance, and availability using standard tooling such as Grafana, Prometheus, and Zabbix
  • Work collaboratively with team members to reproduce and resolve difficult or intermittent issues
  • Train and enable on‑site L1 support teams
  • Participate in on‑call incident response as L2 support when required

Benefits

  • AMD benefits at a glance.