You will be the person who turns a hardware listing and a software bundle into a running AI inference platform — from bare metal to serving production traffic. This is a hands-on role at the intersection of physical datacenter infrastructure and platform engineering. You will rack GPU servers, cable network fabrics, provision bare metal via PXE, deploy Kubernetes clusters, stand up monitoring and network telemetry stacks, and validate end-to-end inference pipelines — all in air-gapped, classified environments with no internet access.

You are the high side: everything the platform engineering team builds on the unclassified side — deployment tooling, signed software bundles, switch configurations, OS images — you execute on classified infrastructure. You own the full stack from physical hardware through running GPU workloads, including the cross-domain solution (CDS) receive pipeline that automates software delivery into the classified environment. When something breaks on-site, you fix it. When an update arrives through the data diode or on physical media, you apply it.

You are the bridge between xAI's engineering organization and the classified compute facilities where our infrastructure operates. This role requires significant time on-site at classified compute facilities. You will work closely with customer IT and security teams, cleared facility personnel, and xAI's uncleared platform engineering team (via approved communication channels).
Job Type: Full-time
Career Level: Mid Level
Education Level: No Education Listed