Member of Technical Staff, Cloud Orchestration

Inferact
San Francisco, CA
Hybrid

About The Position

Inferact's mission is to grow vLLM as the world's AI inference engine and accelerate AI progress by making inference cheaper and faster. Founded by the creators and core maintainers of vLLM, we sit at the intersection of models and hardware, a position that took years to build.

About The Role

We're looking for a cloud orchestration engineer to build the operational backbone that keeps vLLM running reliably at massive scale. You'll design the systems for cluster management, deployment automation, and production monitoring that enable teams worldwide to serve AI models without friction. You'll ensure that vLLM deployments are observable, debuggable, and recoverable, turning operational complexity into infrastructure that just works.

Requirements

  • Bachelor's degree or equivalent experience in computer science, engineering, or similar.
  • Strong experience with Kubernetes and container orchestration at scale.
  • Experience designing and implementing custom Kubernetes operators.
  • Proficiency in Python, Rust, or Go, and with infrastructure-as-code tools (Terraform, Helm, etc.).
  • Experience managing GPU clusters and debugging hardware issues.
  • Ability to work across cloud platforms (AWS, GCP, Azure) and on-premise infrastructure.

Nice To Haves

  • Experience with ML-specific orchestration tools (Ray, Slurm).
  • Knowledge of GPU scheduling, multi-tenancy, and resource optimization.
  • Familiarity with vLLM deployment patterns and configuration.
  • Track record of improving operational reliability for ML systems.
  • Experience deploying inference systems on large-scale GPU clusters (1,000+ GPUs).

Responsibilities

  • Design systems for cluster management, deployment automation, and production monitoring.
  • Ensure vLLM deployments are observable, debuggable, and recoverable.

Benefits

  • Generous health, dental, and vision benefits
  • 401(k) company match