Network Engineer, Capacity and Efficiency

Anthropic
San Francisco, CA
Hybrid

About The Position

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

The Capacity & Efficiency team sits inside Anthropic’s Compute organization and owns the cost, utilization, and attribution story for non-accelerator infrastructure — the network, compute, and storage backbone that moves petabytes between training clusters, inference fleets, and object storage across clouds and regions. Anthropic runs a private multi-cloud backbone built from dark fiber, optical transport, and CSP direct-connect products, layered over data center fabrics spanning tens of thousands of hosts. The scale is real, the spend is large, and the efficiency levers are still mostly unpulled. We work alongside the Systems Networking team (who build and operate the fabric) and the Observability team (who own the telemetry platform). This role lives at the intersection: you’ll use deep networking knowledge and rigorous measurement to figure out where and how bandwidth, latency, and dollars are being spent, find optimization opportunities, and land them.

We’re looking for a network engineer who thinks in metrics first. You understand spine-leaf fabrics, BGP, SDN overlays, and cloud interconnect products well enough to build them, and you’ll instrument them, model their cost-per-bit, and squeeze out the inefficiency while ensuring we can move bits to the right places as efficiently as possible. You’ll own the observability and efficiency surface for Anthropic’s network: from per-flow telemetry on backbone routers, to QoS policy on cross-region links carrying inference traffic, to cost attribution that tells a research team exactly what their checkpoint sync is costing.

This is a hands-on IC role. You’ll write code (Python, Go), build dashboards, model capacity, and ship config changes to production routers. You’ll also influence architecture: when the data says a traffic pattern is pathological, you’ll be in the room root-causing and fixing it. The work spans three areas: network telemetry and observability, traffic engineering, and cost modeling and attribution. We expect you to be strong in at least two and willing to grow into the third. If you're a telemetry-first engineer who's never built a chargeback model, or a traffic engineer who hasn't shipped eBPF probes, apply anyway and tell us which axis you want to grow on.

Requirements

  • Have 5+ years operating large-scale production networks — data center fabrics (spine-leaf, Clos), backbone/WAN, or hyperscaler-adjacent environments.
  • Are genuinely fluent across the stack: BGP (including policy and communities), ECMP, VXLAN/EVPN or equivalent overlays, QoS (DSCP, queuing, shaping), and L1/optical basics (DWDM, coherent, LAGs).
  • Know at least one major CSP’s networking model deeply — AWS (VPC, TGW, Direct Connect, Gateway Load Balancer) or GCP (Shared VPC, Interconnect, Cloud Router, Network Connectivity Center) — and understand how their overlays interact with physical underlays.
  • Have built or operated network telemetry at scale: streaming telemetry (gNMI/OpenConfig), flow export (sFlow, IPFIX, NetFlow), or eBPF-based host-side instrumentation. You can reason about sampling, cardinality, and storage tradeoffs.
  • Are comfortable writing Python or Go that you’ll ship to production: tooling, telemetry pipelines, infrastructure-as-code, and config management and automation for network devices.
  • Think quantitatively by default. You reach for a notebook or a Grafana query before you reach for an opinion, and you can turn messy counter data into a defensible cost model.
  • Communicate crisply. You can explain to a finance partner why a 10% egress reduction matters, and to a network engineer why a specific ECMP imbalance is costing real money.
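To give a concrete flavor of the "turn messy counter data into a defensible cost model" work above, here is a minimal, hypothetical sketch of converting raw interface byte counters into a peak-utilization estimate. The function, sample data, and link speed are illustrative only, not part of any real stack:

```python
def utilization(samples, link_bps):
    """Estimate peak link utilization from a monotonic byte counter.

    samples: list of (timestamp_seconds, rx_bytes) readings.
    Returns peak observed rate as a fraction of link capacity.
    """
    rates = []
    for (t0, b0), (t1, b1) in zip(samples, samples[1:]):
        if b1 < b0:  # counter wrapped or device reset; skip this interval
            continue
        rates.append((b1 - b0) * 8 / (t1 - t0))  # bits per second
    return max(rates) / link_bps if rates else 0.0

# Two 10-second intervals: 100 Mbit/s then 200 Mbit/s on a 1 Gbit/s link.
samples = [(0, 0), (10, 125_000_000), (20, 375_000_000)]
print(utilization(samples, 1_000_000_000))  # peak utilization 0.2
```

Real pipelines layer sampling correction, percentile aggregation, and per-queue breakdowns on top of this, but the counter-delta arithmetic is the same.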

Nice To Haves

  • SRE experience for large-scale network infrastructure — designing for reliability, defining SLOs/SLIs for network services, capacity planning with error budgets, and incident response for network-impacting outages at scale.
  • Background on a cloud provider's networking team or a cloud networking product team — building or operating the interconnect, backbone, or SDN control plane from the provider side, not just consuming it as a customer.
  • Familiarity with AI/ML infrastructure traffic patterns such as collective communication (all-reduce, all-gather), checkpoint/weight transfer, and inference serving, and with how these stress networks differently than traditional workloads in terms of burst behavior, flow synchronization, and bandwidth symmetry.
  • Experience with HPC fabrics like InfiniBand, RoCE v2, lossless Ethernet, or custom high-radix topologies and an understanding of how job placement, congestion management, and adaptive routing interact at scale.
  • Background in traffic engineering for large backbones and the operational judgment to know when TE is worth the complexity.
  • Hands-on time with multi-cloud connectivity: cross-cloud peering, private interconnect products, and the billing models that come with them.
  • Experience building cost/chargeback systems for shared infrastructure, or FinOps exposure in a large cloud environment.

Responsibilities

  • Build the network observability stack. Design and deploy telemetry pipelines — sFlow/IPFIX, gNMI streaming, eBPF host probes — that turn packet counters into per-flow, per-tenant, per-workload cost and utilization data. Own the SLIs for backbone and DCN fabric health.
  • Hunt for efficiency. Analyze inter-region traffic patterns, identify hot links and stranded capacity, and quantify the dollar impact. Build the models that tell us whether we should buy more capacity, or move the workload.
  • Own QoS and traffic engineering. Design and operate traffic classification, marking, and shaping across the backbone. Make sure bulk checkpoint transfers don’t starve latency-sensitive inference, and that we’re not paying premium cross-region rates for traffic that could take the cheap path.
  • Drive cost attribution. Tie network spend — egress, interconnect ports, transit, optical leases — back to the teams and workloads that generate it. Make network cost a first-class input to capacity planning and workload placement decisions.
  • Influence decisions you don't own. A large fraction of this role is convincing other teams to act on what your data shows: making the case to research that a traffic pattern needs to change, to finance that an interconnect tranche is worth buying, to Systems Networking that a QoS policy needs rewriting. You'll partner closely with Systems Networking on fabric architecture and Observability on telemetry platform integration, but the cost and efficiency wins will come from moving teams that don't report to you.
  • Automate. Extend our intent-based network configuration systems and write the tooling that turns your efficiency findings into safe, reviewable, and impactful changes.
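As a hypothetical sketch of the cost-attribution responsibility above (the rate table, tenant names, and flow-record schema are invented for illustration, not Anthropic's actual model):

```python
from collections import defaultdict

# Illustrative per-GB rates by link class (hypothetical numbers).
RATE_PER_GB = {"cross_region": 0.02, "interconnect": 0.01, "intra_region": 0.0}

def attribute_cost(flow_records):
    """Roll sampled flow records up into estimated per-tenant dollar cost.

    Each record is (tenant, link_class, bytes, sample_rate); multiplying
    by the sample rate recovers an estimate of the true byte count.
    """
    cost = defaultdict(float)
    for tenant, link_class, nbytes, sample_rate in flow_records:
        est_gb = nbytes * sample_rate / 1e9
        cost[tenant] += est_gb * RATE_PER_GB[link_class]
    return dict(cost)

flows = [
    ("research-checkpoints", "cross_region", 5e12, 1000),  # sampled 1:1000
    ("inference-serving", "interconnect", 2e12, 1000),
]
print(attribute_cost(flows))
```

The hard parts in practice are upstream of this arithmetic: mapping flows to tenants reliably, correcting for sampling bias, and keeping the rate table honest against actual invoices.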

Benefits

  • Competitive compensation and benefits
  • Optional equity donation matching
  • Generous vacation and parental leave
  • Flexible working hours
  • A lovely office space in which to collaborate with colleagues