Staff / Senior Software Engineer, Cloud Inference

Anthropic · Seattle, WA
Hybrid

About the Position

The Cloud Inference team scales and optimizes Claude to serve a massive audience of developers and enterprises across AWS, GCP, Azure, and future cloud service providers (CSPs). We own the end-to-end product of Claude on each cloud platform, from API integration and intelligent request routing to inference execution, capacity management, and day-to-day operations. Our engineers are extremely high leverage: we simultaneously drive multiple major revenue streams while optimizing one of Anthropic's most precious resources, compute.

As we expand to more cloud platforms, the complexity of managing inference efficiently across providers with different hardware, networking stacks, and operational models grows significantly. We need engineers who can navigate these platform differences, build robust abstractions that work across providers, and make smart infrastructure decisions that keep us cost-effective at massive scale. Your work will increase the scale at which our services operate, accelerate our ability to reliably launch new frontier models and innovative features to customers across all platforms, and ensure our LLMs meet rigorous safety, performance, and security standards.

Requirements

  • Have significant software engineering experience, with a strong background in high-performance, large-scale distributed systems serving millions of users
  • Have experience building or operating services on at least one major cloud platform (AWS, GCP, or Azure), with exposure to Kubernetes, Infrastructure as Code, or container orchestration
  • Have a strong interest in LLM inference
  • Thrive in cross-functional collaboration with both internal teams and external partners
  • Are a fast learner who can quickly ramp up on new technologies, hardware platforms, and provider ecosystems
  • Are highly autonomous and self-driven, taking ownership of problems end-to-end with a bias toward flexibility and high-impact work
  • Pick up slack, even when it falls outside your job description
  • Hold at least a Bachelor's degree in a related field or have equivalent experience

Nice To Haves

  • Direct experience working with CSP partner teams to scale infrastructure or products across multiple platforms, navigating differences in networking, security, privacy, billing, and managed service offerings
  • A background in building platform-agnostic tooling or abstraction layers that work across cloud providers
  • Hands-on experience with capacity management, cost optimization, or resource planning at scale across heterogeneous environments
  • Strong familiarity with LLM inference optimization, batching, caching, and serving strategies
  • Experience with machine learning infrastructure, including GPUs, TPUs, Trainium, or other AI accelerators
  • Background designing and building CI/CD systems that automate deployment and validation across cloud environments
  • Solid understanding of multi-region deployments, geographic routing, and global traffic management
  • Proficiency in Python or Rust

Responsibilities

  • Design and build infrastructure that serves Claude across multiple CSPs, accounting for differences in compute hardware, networking, APIs, and operational models
  • Collaborate with CSP partner engineering teams to resolve operational issues, influence provider roadmaps, and stand up end-to-end serving on new cloud platforms
  • Design and evolve CI/CD automation systems, including validation and deployment pipelines, that reliably ship new model versions to millions of users across cloud platforms without regressions
  • Design interfaces and tooling abstractions across CSPs that enable cost-effective inference management, scale across providers, and reduce per-platform complexity
  • Contribute to capacity planning and autoscaling strategies that dynamically match supply with demand across CSP validation and production workloads
  • Optimize inference cost and performance across providers—designing workload placement and routing systems that direct requests to the most cost-effective accelerator and region
  • Contribute to inference features that must work consistently across all platforms
  • Analyze observability data across providers to identify performance bottlenecks, cost anomalies, and regressions, and drive remediation based on real-world production workloads

Benefits

  • Competitive compensation and benefits
  • Optional equity donation matching
  • Generous vacation and parental leave
  • Flexible working hours
  • A lovely office space in which to collaborate with colleagues