Staff Software Engineer, Node Infra

Anthropic
Seattle, WA
Hybrid

About The Position

Anthropic's Infrastructure organization is foundational to our mission of developing AI systems that are reliable, interpretable, and steerable. The systems we build determine how quickly we can train new models, how reliably we can run safety experiments, and how effectively we can scale Claude to millions of users — demonstrating that safe, reliable infrastructure and frontier capabilities can go hand in hand. Node Infra owns the full lifecycle of accelerator capacity at Anthropic. We ingest and provision compute from all major CSPs and our own datacenters, stand up and scale clusters from thousands to hundreds of thousands of hosts, and build the health-checking, diagnostics, and repair automation that keeps every GPU, TPU, and Trainium node in the fleet usable and ready to power Anthropic's frontier AI research.

Requirements

  • Deep expertise in distributed systems, reliability, and cloud platforms (e.g., Kubernetes, IaC, AWS/GCP/Azure)
  • Strong proficiency in at least one systems language (e.g., Rust, Go, or Python) and IaC proficiency with Terraform
  • Hands-on experience with machine learning accelerators (GPUs, TPUs, or Trainium)
  • Track record of leading complex, multi-quarter technical initiatives that span multiple teams or systems
  • Ability to build alignment across senior stakeholders and communicate effectively at all levels

Nice To Haves

  • 8+ years of software engineering experience, including time as a technical lead setting direction for a team
  • Experience managing large scale compute infrastructure at hyperscale (10K+ nodes), including capacity management and efficiency
  • Depth in one or more of: Kubernetes internals (scheduler, autoscaler, kubelet, Karpenter), cluster orchestration systems (Mesos, Borg-like), or node provisioning pipelines
  • Low-level systems experience: kernel, virtualization, device drivers, firmware, or hardware health/diagnostics daemons
  • Familiarity with high-performance networking (EFA, RDMA, InfiniBand) for distributed ML workloads
  • Demonstrated ownership of production reliability for high-throughput, latency-sensitive systems
  • Contributions to relevant open-source projects (Kubernetes, Linux kernel, container runtimes, etc.)
  • Skill in quickly understanding systems design tradeoffs and keeping track of rapidly evolving software systems

Responsibilities

  • Own the technical strategy and roadmap for node lifecycle management: ingestion, bring-up, health checking, and automated repair
  • Drive cross-team initiatives to build and scale AI clusters across multiple clouds and accelerator families
  • Design and operate the systems that detect, isolate, and remediate unhealthy hardware automatically, increasing fleet mean time between interruptions (MTBI) and minimizing stranded capacity
  • Define infrastructure architecture, ensuring the hardest problems get solved, whether by you directly or by working through others
  • Work closely with cloud providers and internal research/inference/product teams to shape long-term compute, data, and infrastructure strategy
  • Establish and evolve operational excellence practices (incident response, postmortem culture, on-call)
  • Support the growth of engineers around you through technical mentorship and coaching

Benefits

  • Competitive compensation
  • Generous vacation and parental leave
  • Flexible working hours