Infrastructure and Reliability Engineer - Developer Platform

TypeSafe AI
San Francisco, CA
Onsite

About The Position

TypeSafe is a frontier model lab. We build reliable, general AI systems to power economically valuable automation. Our mission is to usher in a new era of Transformative Artificial Intelligence (TAI): technology with the power to drive a societal shift on the scale of the agricultural and industrial revolutions.

While others chase benchmarks and academic puzzles, we have been quietly rethinking the LLM stack from first principles, building a new kind of general frontier model designed for real-world reliability, decision-making, and autonomy in production. We are a small, fast-moving team from OpenAI, Google Brain, and Meta/FAIR, backed by top-tier investors. Since mid-2024, we have been engineering the foundation for what comes after the current "state of the art": a model that actually gets things done.

About The Role

As an ML infrastructure and reliability engineer, you will join the team responsible for building and maintaining TypeSafe's API platform for inference. These APIs will be user-facing and latency-sensitive, and (once we ship) will carry uptime, reliability, and backwards-compatibility requirements. The role is wide-ranging, and you will wear many hats.

Requirements

  • Responsible, ownership-inclined, and a team player – you believe there is no such thing as "other people's code", and will complain only the normal amount about pager duty
  • Experience building and operating backend services at scale with continuous delivery
  • Experience designing resilient systems and improving on-call experience on production systems
  • Excellent under pressure, especially when debugging and resolving outages
  • Collaborate well with others on technical and product design, advocating for what you need and adjusting specs in response to changing requirements and feedback
  • Mission aligned and excited to go all-in
  • Love being part of a team

Nice To Haves

  • Previously built big things
  • Detail-oriented to the point of helpful paranoia
  • 5+ years of professional software engineering experience (3+ years of infra/backend) with a team with rigorous engineering standards
  • Experience with Kubernetes and cloud providers, AWS in particular
  • Experience with the challenges of ML orchestration
  • Experience with LLMs’ capabilities and limitations from having implemented them in the past

Responsibilities

  • Create robust infrastructure for serving inference across multiple cloud providers
  • Work to ensure inference infrastructure and services are reliable and have low error rates
  • Create and maintain infrastructure for monitoring and alerting on requests, improving our debugging and operations stance
  • Ramp up oncall engineers on error handling in production via playbooks, mentorship, “fire drill” exercises, etc.

Benefits

  • Competitive salary and equity
  • 100% covered health insurance
  • Daily lunch and dinner
  • Visa sponsorship
  • 401(k) plans