About the Role

Our Vision AI platform runs where the data is generated — on-premises, inside government facilities, and at the network edge — not in a hyperscaler cloud. That means the infrastructure has to be bulletproof: GPU clusters provisioned correctly, Kubernetes workloads scheduled efficiently across heterogeneous compute, storage performing at the throughput AI training and inference demand, and the network capable of handling high-bandwidth, low-latency sensor data at scale.

As our MLOps / AI Infrastructure Engineer, you will own all of it. You will rack, configure, and operate the on-premises compute and GPU infrastructure that powers the platform, build and maintain the Kubernetes clusters that orchestrate AI workloads, design the networking fabric that ties edge nodes to core compute, and implement the MLOps pipelines that take models from development to production.

You will work directly with our AI/ML engineers, the Lead Architect, and on-site client technical teams to ensure the platform runs reliably in environments that are often air-gapped, physically secured, and subject to strict government compliance requirements.
Job Type
Full-time
Career Level
Mid Level
Education Level
No Education Listed
Number of Employees
101-250 employees