Software Engineer, Distributed Systems

OpenAI, San Francisco, CA

About The Position

We’re looking for a senior engineer to design and build the load balancer that will sit at the very front of our research inference stack, routing the world’s largest AI models with millisecond precision and bulletproof reliability. This system will serve research jobs where requests must stay “sticky” to the same model instance for hours or days and where even subtle errors can directly degrade model performance.

Requirements

  • Have deep experience designing and operating large-scale distributed systems, particularly load balancers, service gateways, or traffic routing layers.
  • Have 5+ years of experience designing for, and debugging in practice, the algorithmic and systems challenges of consistent hashing, sticky routing, and low-latency connection management (see the sketch after this list).
  • Have 5+ years of experience as a software engineer and systems architect working on high-scale, high-reliability infrastructure.
  • Have a strong debugging mindset and enjoy spending time in tracing, logs, and metrics to untangle distributed failures.
  • Are comfortable writing and reviewing production code in Rust or similar systems languages (C/C++, Java, Go, Zig, etc.).
  • Have operated in big tech or high-growth environments and are excited to apply that experience in a faster-moving setting.
  • Take ownership of problems end-to-end and are excited to build something foundational to how our models interact with the world.
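To make the consistent-hashing and sticky-routing requirement above concrete, here is a minimal sketch in Rust (the posting's named language), using only the standard library. The HashRing type, its replicas parameter, and the inference-N backend names are illustrative assumptions, not part of any actual OpenAI system; the point is simply that a hash ring keeps a given job ID pinned to the same backend and remaps only a small share of keys when a backend joins or leaves.

    // Minimal consistent-hashing ring sketch (illustrative only).
    use std::collections::hash_map::DefaultHasher;
    use std::collections::BTreeMap;
    use std::hash::{Hash, Hasher};

    /// Maps keys (e.g. job or session IDs) onto backend nodes so that a given
    /// key keeps routing to the same node ("stickiness"), and only a small
    /// fraction of keys move when nodes are added or removed.
    struct HashRing {
        /// Virtual-node positions on the ring -> backend name.
        ring: BTreeMap<u64, String>,
        /// Virtual nodes per physical backend, to even out load.
        replicas: usize,
    }

    impl HashRing {
        fn new(replicas: usize) -> Self {
            HashRing { ring: BTreeMap::new(), replicas }
        }

        fn hash<T: Hash>(value: &T) -> u64 {
            let mut h = DefaultHasher::new();
            value.hash(&mut h);
            h.finish()
        }

        fn add_node(&mut self, node: &str) {
            for i in 0..self.replicas {
                let point = Self::hash(&format!("{node}#{i}"));
                self.ring.insert(point, node.to_string());
            }
        }

        fn remove_node(&mut self, node: &str) {
            for i in 0..self.replicas {
                let point = Self::hash(&format!("{node}#{i}"));
                self.ring.remove(&point);
            }
        }

        /// Walk clockwise from the key's hash to the first virtual node,
        /// wrapping around to the start of the ring if necessary.
        fn node_for(&self, key: &str) -> Option<&String> {
            let h = Self::hash(&key);
            self.ring
                .range(h..)
                .next()
                .or_else(|| self.ring.iter().next())
                .map(|(_, node)| node)
        }
    }

    fn main() {
        let mut ring = HashRing::new(16);
        for node in ["inference-0", "inference-1", "inference-2"] {
            ring.add_node(node);
        }
        // The same job ID lands on the same backend until the topology changes.
        let job = "research-job-42";
        println!("{job} -> {:?}", ring.node_for(job));

        // Removing one backend only remaps the keys that lived on it.
        ring.remove_node("inference-1");
        println!("{job} -> {:?}", ring.node_for(job));
    }

The replicas parameter spreads each backend across many ring positions so load stays roughly even; production gateways such as Envoy apply the same idea in their ring-hash load-balancing policy.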

Nice To Haves

  • Experience with gateway or load balancing systems (e.g., Envoy, gRPC, custom LB implementations).
  • Familiarity with inference workloads (e.g., reinforcement learning, streaming inference, KV cache management, etc.).
  • Exposure to debugging and operational excellence practices in large production environments.

Responsibilities

  • Architect and build the gateway / network load balancer that fronts all research jobs, ensuring long-lived connections remain consistent and performant.
  • Design traffic stickiness and routing strategies that optimize for both reliability and throughput.
  • Instrument and debug complex distributed systems — with a focus on building world-class observability and debuggability tools (distributed tracing, logging, metrics).
  • Collaborate closely with researchers and ML engineers to understand how infrastructure decisions impact model performance and training dynamics.
  • Own the end-to-end system lifecycle, from design and implementation through deployment, operation, and scaling.
  • Work in an outcome-oriented environment where everyone contributes across layers of the stack, from infra plumbing to performance tuning.