Engineering Manager, Inference Routing and Performance

Anthropic
San Francisco, CA
Hybrid

About The Position

About Anthropic

Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role

Every request that hits Claude, whether from claude.ai, the API, our cloud partners, or internal research, passes through a routing decision. Not a generic load-balancer round-robin, but a decision that accounts for what's already cached where, which accelerator the request runs best on, and what else is in flight across the fleet (a toy sketch of this kind of scoring appears at the end of this section). Get it right and you extract meaningfully more throughput from the same hardware. Get it wrong and you burn capacity, miss latency SLOs, or shed load that shouldn't have been shed.

The Inference Routing team owns this layer. We build the cluster-level routing and coordination plane for Anthropic's inference fleet: the system that sits between the API surface and the inference engines themselves, making fleet-wide efficiency decisions in real time. As Anthropic moves from "many independent inference replicas" toward "a single warehouse-scale computer running a coordinated program," Dystro, the system this team builds, is the coordination layer.

This is a deeply technical team. The engineers here design custom load-balancing algorithms, build quantitative models of system performance, debug latency spikes that cross kernel, network, and framework boundaries, and reason carefully about cache placement across thousands of accelerators. They work shoulder-to-shoulder with teams that write kernels and ML framework internals. The EM for this team doesn't need to write kernels, but they do need the systems depth to make architectural calls, evaluate deeply technical candidates, and spot when a proposed optimization will have second-order effects on the fleet.

You'll inherit a strong team of distributed-systems engineers, and you'll be accountable for two things that pull in different directions: shipping system-level performance improvements that measurably increase fleet throughput and efficiency, and running the team operationally so that deploys are safe, incidents are rare, and the teams who depend on Dystro can plan around you with confidence. The job is holding both.

Representative work: things the Inference Routing EM actually spends time on:

  • Deciding whether a proposed routing algorithm change is worth the deploy risk, given the modeled throughput gain and the blast radius if it regresses
  • Sequencing a quarter where KV-cache offload, a new coordination protocol, and two model launches all compete for the same engineers
  • Working through a persistent tail-latency regression with the team, walking down from fleet-level metrics to per-replica behavior to a root cause in the networking stack
  • Building the case (with numbers) to peer teams for why a cross-team protocol change unlocks the next efficiency win
  • Running the post-incident review after a cache-eviction bug caused a capacity event, and turning it into process changes that stick
  • Interviewing a candidate who has built schedulers at supercomputing scale, and deciding whether they'd be additive to a team that already goes deep
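As a concrete (and deliberately simplified) illustration of the routing decision described above, here is a minimal sketch of cache- and load-aware replica scoring in Python. Every name, field, and weight here is hypothetical; this is not Dystro's actual algorithm, only the shape of the problem.

    from dataclasses import dataclass

    @dataclass
    class Replica:
        # Hypothetical snapshot of one inference replica's state.
        name: str
        cached_prefix_tokens: int  # tokens of this prompt's prefix already in the replica's KV cache
        in_flight: int             # requests currently being served
        capacity: int              # concurrency this accelerator sustains before latency degrades
        hardware_fit: float        # 0..1, how well the request's shape suits this accelerator

    def score(r: Replica, prompt_tokens: int) -> float:
        # Higher is better: reward cache reuse and hardware fit, penalize load.
        # The weights are made up; a production router would derive them from
        # measured throughput and latency models.
        cache_hit = r.cached_prefix_tokens / max(prompt_tokens, 1)
        load = r.in_flight / max(r.capacity, 1)
        return 0.5 * cache_hit + 0.3 * r.hardware_fit - 0.2 * load

    def route(replicas: list[Replica], prompt_tokens: int) -> Replica:
        # The non-round-robin part: pick the replica the scoring model prefers.
        return max(replicas, key=lambda r: score(r, prompt_tokens))

Even this toy version exposes the tension the role manages: chasing cache hits concentrates traffic on warm replicas, while spreading load for latency throws cache reuse away.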

Requirements

  • Have 5+ years of engineering management experience, ideally with at least part of that leading teams on critical-path production infrastructure at scale
  • Have a deep systems background — load balancing, scheduling, cache-coherent distributed state, high-performance networking, or similar. You need enough depth to make architectural calls about routing and efficiency, and to evaluate candidates who go to the kernel and framework level
  • Have shipped performance improvements in large-scale systems and can explain, with numbers, what the impact was
  • Have run production infrastructure with real operational stakes: on-call, incident response, capacity events, deploy discipline
  • Are results-oriented with a bias toward impact, and comfortable working in a space where throughput, latency, stability, and feature velocity all pull in different directions
  • Build strong relationships across team boundaries — this is a seam role, and much of the job is making sure other teams can rely on yours
  • Are curious about machine learning systems. You don't need an ML research background, but you should want to learn how transformer inference actually works and how that shapes the systems problems
  • Hold at least a Bachelor's degree in a related field, or have equivalent experience

Nice To Haves

  • Experience with LLM inference serving — KV caching, continuous batching, request scheduling, prefill/decode disaggregation
  • Background in cluster schedulers, load balancers, service meshes, or coordination planes at scale
  • Familiarity with heterogeneous accelerator fleets (GPU/TPU/Trainium) and how hardware differences affect workload placement
  • Experience with GPU/accelerator programming, ML framework internals, or OS-level performance debugging — enough to follow and evaluate the technical work, not necessarily to do it daily
  • Led teams at supercomputing or hyperscaler infrastructure scale
  • Led teams through rapid-growth periods where hiring and onboarding competed with roadmap delivery

Responsibilities

Drive system-level performance

  • Own the technical roadmap for cluster-level inference efficiency: routing decisions, cache placement and eviction, cross-replica coordination, and the protocols that keep routing and inference engines in sync
  • Partner with the inference engine, kernels, and performance teams to identify fleet-level throughput and latency wins, then turn those into shipped improvements with measurable results
  • Build the team's habit of quantitative performance modeling: claim a win only when you can measure it, and know before you ship what the expected effect is (a back-of-envelope sketch follows this section)

Deliver reliably and operate cleanly

  • Set technical strategy for how routing evolves across heterogeneous hardware (GPUs, TPUs, Trainium) and across all our serving surfaces
  • Run the team's operational backbone (on-call rotation, incident response, postmortem review, deploy safety) so the team can ship aggressively without the system becoming fragile
  • Create clarity at a seam: Inference Routing sits between the API surface, the inference engines, and the cloud deployment teams. You'll make sure commitments are realistic, dependencies are understood, and nobody is surprised

Build and grow the team

  • Develop and retain a strong existing team, and hire against the bar described above: people who can go to the OS and framework level when the problem demands it, and who care about production reliability
  • Coach engineers through a roadmap where priorities shift with model launches, new hardware, and scaling demands. We pair a lot here; you'll help make that collaboration pattern productive
  • Pick up slack when it matters. This is a small team in a critical path; sometimes the EM is the one unblocking a stuck deploy or synthesizing a design debate
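The "know before you ship" habit above is the kind of thing a quick model makes concrete. Below is a hypothetical back-of-envelope sketch in Python; the function, its parameters, and all numbers are invented for illustration and are not how this team actually models wins.

    def expected_compute_savings(prefill_share: float,
                                 hit_rate_before: float,
                                 hit_rate_after: float) -> float:
        # Rough model: prefix-cache hits skip prefill work, so if prefill is
        # `prefill_share` of fleet compute, a higher hit rate converts directly
        # into freed capacity. Deliberately ignores second-order effects
        # (hotter replicas, eviction pressure) that a real model would include.
        return prefill_share * (hit_rate_after - hit_rate_before)

    # If prefill were 40% of fleet compute and a routing change were expected
    # to lift the prefix-cache hit rate from 30% to 45%:
    print(expected_compute_savings(0.40, 0.30, 0.45))  # ~0.06, about 6% of fleet compute

An estimate like this is what turns "is it worth the deploy risk?" from a debate into a comparison.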

Benefits

  • Competitive compensation and benefits
  • Optional equity donation matching
  • Generous vacation and parental leave
  • Flexible working hours
  • A lovely office space in which to collaborate with colleagues