System Engineer, GPU Fleet

Fluidstack
Austin, TX
$200,000 - $300,000

About The Position

As a System Engineer, GPU Fleet, you will manage, operate, and optimize hyperscale GPU compute infrastructure supporting AI/ML training and inference workloads. You will ensure high availability, performance, and reliability of the GPU server fleet through automation, monitoring, troubleshooting, and collaboration with hardware engineering, platform teams, and datacenter operations.

Requirements

  • Bachelor's degree in Computer Science, Engineering, or related technical field (or equivalent practical experience)
  • 3+ years (System Engineer) or 5+ years (Senior System Engineer) in Linux system administration, datacenter operations, or infrastructure engineering
  • Strong Linux/Unix fundamentals including system administration, shell scripting (Bash, Python), troubleshooting, and performance tuning
  • Experience with server hardware architecture, troubleshooting techniques, and understanding of compute, memory, storage, and networking components
  • Experience with automation and configuration management tools (Ansible, Puppet, Chef, Terraform)
  • Strong analytical and problem-solving skills with ability to diagnose complex technical issues under pressure
  • Excellent communication and collaboration skills; ability to work effectively with cross-functional teams

Nice To Haves

  • Experience managing large-scale GPU infrastructure (NVIDIA H100, A100, B200, GB200) in production environments supporting AI/ML workloads
  • Deep knowledge of GPU architecture, CUDA toolkit, GPU drivers, and monitoring tools (nvidia-smi, DCGM)
  • Experience with HPC cluster management, job schedulers (Slurm, PBS, LSF), and container orchestration (Kubernetes, Docker)
  • Proficiency in out-of-band management protocols (IPMI, Redfish, BMC) and firmware management for server hardware
  • Experience with high-performance networking (InfiniBand, RoCE, RDMA) and network troubleshooting in GPU cluster environments
  • Familiarity with datacenter operations including rack installations, cabling, power management, and thermal considerations

Responsibilities

  • Operate and maintain large-scale GPU server fleet (H100, B200, GB200) supporting AI/ML workloads; monitor system health, performance, and utilization to maximize uptime and ensure SLA compliance
  • Perform hands-on troubleshooting and root cause analysis of complex hardware, firmware, OS, and application issues across GPU clusters; coordinate with vendors and hardware teams to resolve systemic failures
  • Develop and maintain automation scripts for provisioning, configuration management, monitoring, and remediation at scale
  • Build and improve tooling for GPU health checks, performance diagnostics, driver validation, and automated recovery
  • Execute server provisioning, configuration, firmware updates, and OS installation using automation frameworks; manage lifecycle operations including deployment, maintenance, and decommissioning
  • Participate in 24x7 on-call rotation; respond to production incidents and coordinate resolution with cross-functional teams including datacenter operations, network engineering, and application teams
  • Lead post-incident reviews, document root causes, and drive continuous improvement initiatives focused on automation, reliability, monitoring, and operational efficiency

Benefits

  • Competitive total compensation package (salary + equity).
  • Retirement or pension plan, in line with local norms.
  • Health, dental, and vision insurance.
  • Generous PTO policy, in line with local norms.