Product Manager, Compute Platform

Anthropic · Seattle, WA
Hybrid

About The Position

As a Product Manager focused on Compute Platform, you'll partner with Infrastructure, Compute Operations, Engineering, Finance & Strategy, and Research to build the scheduling, orchestration, and capacity management systems that power Anthropic's compute infrastructure: the foundation on which every model training run, evaluation, and inference workload depends. Working with Infrastructure, you'll build the systems that determine how jobs are scheduled, prioritized, and allocated across Anthropic's growing fleet of GPU and accelerator clusters, ensuring the right workloads run on the right hardware at the right time. Your work directly impacts cluster utilization, cost efficiency, and researcher velocity: you'll define the semantic layer for job scheduling, establish resource guarantees, and make the trade-offs that keep our infrastructure running at peak capacity.

You'll drive the evolution of our compute platform to support increasingly diverse workloads, from large-scale training runs and fine-tuning jobs to real-time inference and batch evaluation, each with distinct scheduling requirements, priority levels, and resource profiles. You will define and own the strategy and roadmap across job scheduling primitives, capacity allocation policies, preemption and fairness frameworks, quota management, and the observability tooling that gives engineering and leadership confidence in how compute resources are being used.
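
For a concrete (and purely illustrative) sense of what a "semantic layer" of priority tiers and admission order can look like, here is a minimal sketch. All tier names, fields, and numbers are hypothetical, not Anthropic's actual scheduler:

```python
from dataclasses import dataclass
from enum import IntEnum

class Tier(IntEnum):
    """Hypothetical priority tiers; higher tiers are admitted first."""
    BEST_EFFORT = 0   # preemptible batch work, e.g., offline evals
    STANDARD = 1      # fine-tuning jobs, ad hoc experiments
    GUARANTEED = 2    # reserved capacity, e.g., multi-week training runs

@dataclass
class Job:
    name: str
    tier: Tier
    gpus: int  # requested accelerators

def admit(queue: list[Job], capacity_gpus: int) -> list[Job]:
    """Greedy admission sketch: fill capacity in tier order, FIFO within a tier.

    A real scheduler also handles preemption, gang constraints, and topology;
    this only illustrates how priority tiers shape the admission order.
    """
    admitted, free = [], capacity_gpus
    # Stable sort keeps FIFO order among jobs of equal tier.
    for job in sorted(queue, key=lambda j: -j.tier):
        if job.gpus <= free:
            admitted.append(job)
            free -= job.gpus
    return admitted

if __name__ == "__main__":
    queue = [
        Job("eval-batch", Tier.BEST_EFFORT, 64),
        Job("pretrain-run", Tier.GUARANTEED, 512),
        Job("finetune-a", Tier.STANDARD, 128),
    ]
    print([j.name for j in admit(queue, capacity_gpus=600)])
    # -> ['pretrain-run', 'eval-batch']  (finetune-a waits: only 88 GPUs free)
```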

Requirements

  • 7+ years of product management experience, with deep exposure to compute infrastructure, distributed systems, or scheduling/orchestration platforms
  • Experience taking technical infrastructure products from infancy to scale—you’ve built something from the ground up and grown it to serve demanding internal or external customers
  • Track record of building platform products that balance the needs of multiple users and stakeholders—you’re comfortable making prioritization trade-offs between utilization, latency, cost, and fairness, and communicating them clearly
  • Ability to internalize complex technical systems (job schedulers, cluster managers, resource orchestrators) and translate that understanding into a comprehensive product vision
  • Fluent across functions—you’re equally credible discussing scheduling algorithms with engineers, capacity economics with finance, and infrastructure strategy with leadership
  • Strong instinct for connecting technical decisions to business outcomes: every percentage point of cluster utilization has measurable impact (a back-of-the-envelope illustration follows this list)
  • Scrappy and resourceful—you do what it takes to get things done in a fast-moving environment
  • At least a Bachelor's degree in a related field, or equivalent experience
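
To make the utilization point above concrete, a back-of-the-envelope calculation; the fleet size and hourly rate below are entirely hypothetical:

```python
# Hypothetical figures for illustration only; not Anthropic's actual fleet or rates.
fleet_gpus = 10_000
cost_per_gpu_hour = 2.00          # assumed blended $/GPU-hour
hours_per_year = 24 * 365

annual_fleet_cost = fleet_gpus * cost_per_gpu_hour * hours_per_year
value_of_one_point = annual_fleet_cost * 0.01   # one percentage point of utilization

print(f"${annual_fleet_cost:,.0f} annual cost; "
      f"each utilization point is worth ~${value_of_one_point:,.0f}/year")
# -> $175,200,000 annual cost; each utilization point is worth ~$1,752,000/year
```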

Nice To Haves

  • Built or scaled job scheduling, resource orchestration, or workload management systems for large-scale compute clusters (e.g., Kubernetes, Slurm, Borg, YARN, or custom schedulers).
  • Deep familiarity with GPU/accelerator scheduling challenges, including gang-scheduling, topology-aware placement, preemption, and hardware affinity constraints.
  • Experience defining and enforcing SLAs and resource guarantees for compute workloads—including mechanisms to validate job prerequisites (data readiness, checkpoint availability, hardware compatibility) before scheduling to avoid wasted resources (a sketch of this pattern follows this list).
  • Capacity planning experience across cloud and on-premises infrastructure, including cost modeling, demand forecasting, and vendor management for compute procurement.
  • Scaled through hypergrowth in compute-intensive environments (AI/ML, HPC, large-scale cloud infrastructure).
  • Experience with observability and efficiency tooling for distributed infrastructure—building dashboards, automation, and governance workflows that drive utilization and cost accountability.
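
As a loose illustration of the prerequisite-validation idea above, the sketch below runs a set of checks before a job is allowed to occupy accelerators. Every field and check here is hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class JobSpec:
    """Hypothetical job description; fields are illustrative only."""
    dataset_ready: bool
    checkpoint_available: bool
    required_arch: str   # e.g., "H100"
    cluster_arch: str

# Each check returns an error string, or None if the prerequisite holds.
PREREQ_CHECKS: list[Callable[[JobSpec], str | None]] = [
    lambda j: None if j.dataset_ready else "input dataset not materialized",
    lambda j: None if j.checkpoint_available else "resume checkpoint missing",
    lambda j: None if j.required_arch == j.cluster_arch
              else f"needs {j.required_arch}, cluster has {j.cluster_arch}",
]

def validate_before_scheduling(job: JobSpec) -> list[str]:
    """Run all prerequisite checks up front so a job never holds
    accelerators it cannot actually use."""
    return [err for check in PREREQ_CHECKS if (err := check(job)) is not None]

failures = validate_before_scheduling(
    JobSpec(dataset_ready=True, checkpoint_available=False,
            required_arch="H100", cluster_arch="A100"))
print(failures)
# -> ['resume checkpoint missing', 'needs H100, cluster has A100']
```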

Responsibilities

  • Deeply understand the needs of internal customers across Research, Infrastructure, Product, and Finance—from researchers who need guaranteed resources for multi-week training runs to platform teams managing inference workloads with strict latency SLAs.
  • Define and iterate on the semantic layer for job scheduling: the abstractions, priority tiers, resource classes, and preemption policies that govern how work flows through our compute clusters.
  • Partner with engineering leads to design scheduling capabilities that maximize cluster utilization while honoring resource guarantees—ensuring jobs have the right prerequisites (data, checkpoints, hardware affinity) validated before launch to avoid wasted compute.
  • Drive product strategy and roadmap for compute capacity management, including quota systems, fairness policies, bin-packing optimizations, and gang-scheduling for distributed workloads (a toy gang-placement sketch follows this list).
  • Own the trade-off framework between utilization efficiency, job latency, cost, and reliability—making transparent prioritization decisions and communicating them clearly to senior leadership.
  • Collaborate with the Capacity Strategy & Operations team on capacity planning models, demand forecasting, and cost-to-serve analytics that inform infrastructure investment decisions.
  • Build and champion observability tools and dashboards that provide real-time visibility into cluster health, queue depth, scheduling efficiency, and resource waste.
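
To illustrate the gang-scheduling constraint referenced above: a distributed job must get all of its workers placed at once, or none at all, since a partially placed job wastes whatever it holds. The sketch below is a toy placement heuristic, not a real scheduler:

```python
def gang_place(free_gpus_per_node: dict[str, int], workers: int,
               gpus_per_worker: int) -> dict[str, int] | None:
    """Minimal gang-scheduling sketch: place ALL workers of a distributed
    job at once, or place none.

    Greedily fills the nodes with the most free GPUs first so the gang
    spans as few nodes as possible; real placement would also weigh
    interconnect topology and hardware affinity.
    """
    placement: dict[str, int] = {}
    remaining = workers
    for node, free in sorted(free_gpus_per_node.items(), key=lambda kv: -kv[1]):
        fit = min(free // gpus_per_worker, remaining)
        if fit:
            placement[node] = fit
            remaining -= fit
        if remaining == 0:
            return placement   # every worker placed: launch the gang
    return None                # can't place the whole gang: admit nothing

# 3 nodes with spare capacity; a 6-worker job needing 4 GPUs per worker.
print(gang_place({"n0": 8, "n1": 16, "n2": 4}, workers=6, gpus_per_worker=4))
# -> {'n1': 4, 'n0': 2}  (n1 hosts 4 workers, n0 hosts 2)
```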

Benefits

  • Competitive compensation and benefits
  • Optional equity donation matching
  • Generous vacation and parental leave
  • Flexible working hours
  • A lovely office space in which to collaborate with colleagues