About The Position

The Scaling team designs, builds, and operates critical infrastructure that enables research at OpenAI. Our mission is simple: accelerate the progress of research towards AGI. We do this by building core systems that researchers rely on, ranging from low-level infrastructure components to research-facing custom applications. These systems must scale with the increasing complexity and size of our workloads while remaining reliable and easy to use.

We're looking for an experienced Site Reliability Engineer to own production-critical infrastructure end to end. This role centers on data-heavy, low-latency workloads, with an emphasis on operating large-scale ClickHouse clusters, high-throughput Kafka pipelines, and reliable integrations with Snowflake. You'll turn ambiguous operational problems into clear plans, ship pragmatic solutions quickly, and improve them through production feedback and iteration. We're specifically looking for someone who can independently define and raise operational standards across teams while remaining deeply hands-on in production systems.

Requirements

  • A track record of owning production infrastructure for data-heavy, low-latency systems end to end.
  • Strong hands-on experience operating ClickHouse, Kafka, and adjacent large-scale data systems.
  • Practical experience with Snowflake workflows and cross-system data architecture.
  • The ability to independently define operational standards (runbooks, incident process, rollout safety) and make them stick.
  • Strong operational experience with Kubernetes, Terraform, and cloud infrastructure.
  • Excellent communication and collaboration skills; you work effectively across engineering and research teams.
  • Rigor and organization under pressure, including during production incidents.
  • A deeply hands-on mindset: willing to debug incidents, tune systems, and implement fixes directly.

Responsibilities

  • Own infrastructure lifecycle management across provisioning, upgrades, scaling, and decommissioning (IaC-first).
  • Operate and scale ClickHouse clusters, including sharding, replication, capacity planning, performance tuning, and maintenance.
  • Operate Kafka as the ingestion backbone, improving throughput, lag, backpressure handling, and failure recovery.
  • Improve end-to-end latency and reliability for data-heavy serving and query workloads.
  • Build and maintain strong monitoring and alerting: SLIs/SLOs, dashboards, alert policies, and actionable runbooks.
  • Define, implement, and continuously improve incident response standards, on-call practices, and postmortem quality.
  • Own backup/restore and disaster recovery strategy, including regular recovery drills.
  • Plan and execute safe rollouts across multiple environments (dev/stage/prod), including canary and rollback strategies.
  • Partner day to day with software engineers, embedding reliability into design, implementation, and release processes.
  • Set the quality bar for operational readiness and runbook standards, and drive adoption across teams.
  • Improve CI/CD pipelines and DevEx for faster, safer, and more predictable releases.
  • Strengthen security posture across infrastructure and delivery systems (least privilege, secrets management, patching, supply-chain controls).