Technical Program Manager, Cloud Inference

Anthropic
San Francisco, CA (Hybrid)

About The Position

We are seeking an experienced Technical Program Manager to support our critical cloud deployments. In this role you will be an execution owner, driving coordination across multiple internal engineering teams and supporting technical execution with our major cloud partners, including Amazon Bedrock, Google Vertex AI, and Microsoft Azure AI Foundry. Your primary focus will be ensuring tight coordination on engineering deliverables, both within our internal teams and with partner teams, enabling repeatable and efficient product development and launch pipelines for our AI models on third-party platforms. You will align a range of business and technical stakeholders to drive execution of technical roadmaps, with particular emphasis on optimizing our presence and performance on these platforms. This position offers the opportunity to make a significant impact on Anthropic's growth and success in the cloud AI market, while working at the forefront of AI development and innovation.

Requirements

  • Several years of experience in technical program management, with a track record of successfully delivering complex technical programs, preferably involving cloud platforms and AI technologies.
  • Strong understanding of cloud computing architectures, AI/ML deployment, and integration challenges.
  • Exceptional interpersonal and communication skills, enabling you to influence without authority and build cross-organizational support.
  • A high threshold for navigating ambiguity and ability to balance strategic priorities with rapid, high-quality execution.
  • Ability to thrive in fast-paced, scaling environments and bring order to chaos.
  • Passion for Anthropic's mission and a commitment to ensuring AI is developed safely.
  • Bachelor’s degree in a field relevant to the role, as demonstrated through coursework, training, or professional experience, or an equivalent combination of education, training, and/or experience.
  • Years of experience required will correlate with the internal job level for the position.

Nice To Haves

  • Direct experience with a hyperscaler's managed AI platform — Amazon Bedrock, Google Vertex AI, or Azure AI Foundry — including how partners list, launch, and onboard customers on it.
  • Background in ML inference, model serving infrastructure, or accelerator-based compute.
  • Experience owning a joint engineering roadmap or dependency tracking, driving incident follow-through, and converting open issues into a prioritized plan both sides commit to.
  • Experience with release engineering, deployment automation, or CI/CD for systems that ship to multiple targets or environments.

Responsibilities

  • Partner with engineering leaders to define, scope, and sequence major technical initiatives for cloud partnerships and AI model deployment, and own the plans, timelines, and resourcing to land them.
  • Own launch readiness for Claude models on partner cloud platforms: checklist, blocker tracking, joint go/no-go with the partner, and post-launch stability follow-through.
  • Act as the primary technical interface to cloud partner engineering orgs — owning the relationship, the shared roadmap, and day-to-day coordination on deployment, capacity, and incidents.
  • Drive cross-functional alignment across internal engineering, product, and go-to-market teams to land joint deliverables with the partner.
  • Provide clear and transparent reporting on program status, issues, and risks to executives and stakeholders.

Benefits

  • Competitive compensation and benefits
  • Optional equity donation matching
  • Generous vacation and parental leave
  • Flexible working hours