Site Reliability Engineer (SRE)

Thinking Machines Lab
San Francisco, CA
Onsite

About The Position

Thinking Machines Lab's mission is to empower humanity through advancing collaborative general intelligence. We're building a future where everyone has access to the knowledge and tools to make AI work for their unique needs and goals. We are scientists, engineers, and builders who've created some of the most widely used AI products, including ChatGPT and Character.ai, open-weights models like Mistral, and popular open source projects like PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About Tinker

Tinker is our fine-tuning API that empowers researchers and developers to customize frontier AI to their needs, opening access to capabilities that have previously been concentrated in a handful of labs. We manage the infrastructure while giving Tinkerers full flexibility to train open-weights models with their own data and algorithms, for their own needs. Tinker is rapidly adding new customers, features, and novel use cases. We're hiring to grow the platform alongside the Tinker community.

About the Role

We're looking for a Site Reliability Engineer to drive the reliability of Tinker end to end. You'll work alongside the engineers building the platform, as well as research teams, to make every layer of the system more robust and resilient.

Requirements

  • Bachelor's degree or equivalent experience in computer science, engineering, or similar.
  • Experience in distributed systems, cloud infrastructure, or site reliability engineering.
  • Proficiency writing software to solve reliability problems, including building tooling and automation.
  • Experience with production incident response, postmortems, and systematic reliability improvement.
  • Strong communication skills and track record of coordination across engineering and research teams.

Nice To Haves

  • Deep experience operating production cloud services at scale (e.g., public cloud platforms, internal cloud services).
  • Background in distributed training frameworks and how infrastructure failures surface in training behavior.
  • Track record building checkpoint and recovery systems for long-running distributed jobs.
  • Expertise in Kubernetes at scale: deploying, operating, debugging, and tuning clusters handling heterogeneous GPU workloads.

Responsibilities

  • Define and own end-to-end reliability, from CI/CD flows to production observability and incident response.
  • Develop appropriate Service Level Objectives for distributed training systems, balancing job completion reliability and scheduling latency with development velocity.
  • Design and implement monitoring and observability across the full training path.
  • Drive incident response for Tinker platform issues, ensuring rapid recovery, thorough incident reviews, and systematic improvements that prevent recurrence.
  • Harden multi-tenant isolation and resource scheduling so that LoRA-based workload co-scheduling maximizes utilization without compromising reliability or data separation.
  • Collaborate with security teams to address production vulnerabilities.

Benefits

  • Generous health, dental, and vision benefits.
  • Unlimited PTO.
  • Paid parental leave.
  • Relocation support.