Engineering Manager, Agent Prompts & Evals

Anthropic · San Francisco, CA
Hybrid

About The Position

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the Role

Anthropic is looking for an Engineering Manager to lead the Agent Prompts & Evals team. This team owns the infrastructure that lets Anthropic ship model and prompt changes with confidence — the eval frameworks, system prompt pipelines, and regression-detection systems that every model launch depends on. When a new Claude model is ready to ship, this team is the one answering “is it actually better in our products?” When a product team wants to change how Claude behaves, this team owns the tooling that tells them whether they broke something. It’s a platform team whose platform is model behavior itself.

The team sits deliberately at the seam between product engineering and research. You’ll partner closely with other evals groups across the company on shared infrastructure and methodology, with product teams who are shipping features on top of Claude, and with the TPMs and research PMs driving model launches. The pace is set by the model release cadence, and the team operates as both a platform owner and a hands-on partner during launch periods.

You don’t need a research background, but you do need to want to learn how to measure things like “is Claude being too sycophantic” or “did web search get worse.” The best fit for this role is someone who has built strong platform or devtools teams before and is excited to apply that skillset to a domain where the thing you’re measuring is a language model.
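To make the charter concrete: a minimal, purely illustrative sketch of the kind of regression gate such a platform runs before a launch. Every name here (EvalResult, find_regressions, the threshold value, the eval names) is hypothetical, not Anthropic’s actual tooling.

```python
from dataclasses import dataclass

@dataclass
class EvalResult:
    eval_name: str
    baseline_score: float   # score of the current production model/prompt
    candidate_score: float  # score of the proposed model or prompt change

# Hypothetical tolerance: the largest per-eval drop a change may ship with.
REGRESSION_THRESHOLD = 0.02

def find_regressions(results: list[EvalResult]) -> list[EvalResult]:
    """Return the evals where the candidate scores meaningfully below baseline."""
    return [r for r in results
            if r.baseline_score - r.candidate_score > REGRESSION_THRESHOLD]

# Toy data standing in for real eval runs.
results = [
    EvalResult("web_search_quality", baseline_score=0.81, candidate_score=0.84),
    EvalResult("sycophancy_resistance", baseline_score=0.92, candidate_score=0.88),
]
for r in find_regressions(results):
    print(f"REGRESSION: {r.eval_name} dropped by "
          f"{r.baseline_score - r.candidate_score:.3f}")
```

In practice a gate like this would sit in CI and block a model or prompt change until each flagged regression is either fixed or explicitly accepted.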

Requirements

  • 8+ years in software engineering with 3+ years managing engineering teams, including experience leading a platform, infra, or developer-tooling team where your customers were other engineers
  • A track record of building “pits of success” — tooling and process that made it easy for other teams to do the right thing without needing to understand all the details
  • Comfort managing a team with a mixed charter: platform ownership, service-to-other-teams, and a launch-driven operational rhythm, all at once
  • Enough technical depth to engage on system design, review pipeline architecture, and be credible in debates with strong ICs — you don’t need to be writing code by hand every day, but you should be able to read it, review it, and be comfortable leveraging Claude to understand, design, and occasionally build.
  • A product mindset and willingness to wear multiple hats when the work calls for it
  • Demonstrated ability to build and maintain peer relationships with partner orgs that have different cultures and incentives — negotiating ownership, aligning roadmaps, and holding ground when it matters without being territorial about it
  • Experience recruiting and closing senior ICs in a competitive market

Nice To Haves

  • Prior exposure to LLM evals, ML experimentation platforms, or model quality work — even tangentially
  • Experience with A/B testing infrastructure, feature flagging, or gradual rollout systems
  • Background in devtools, CI/CD platforms, or testing infrastructure at scale
  • A history of managing teams that sit between two larger orgs and making that position an asset rather than a liability
  • Interest in AI safety and alignment — not required, but it makes the “why” of the work land harder

Responsibilities

  • Lead and grow a team of prompt engineers and platform software engineers
  • Own the product-side eval platform: the frameworks, dashboards, bulk runners, and CI integrations that product teams use to measure Claude’s behavior and catch regressions before they ship
  • Own system prompt infrastructure: versioning, deployment, rollback, and review tooling for the prompts that run in production across claude.ai, the API, and agentic surfaces (a toy sketch of this versioning-and-rollback pattern follows this list)
  • Be a steady hand through model launches — these are the team’s highest-stakes operational moments and the EM is the backstop when things get chaotic
  • Build durable collaboration with other evals groups across the company; this means real work on ownership boundaries, shared roadmaps, and avoiding tragedy-of-the-commons on shared eval infrastructure
  • Recruit, close, and retain engineers who want to work at the intersection of product engineering and model behavior
  • Shape where the team invests next: there are credible paths into frontier eval development, model launch automation, and deeper prompt engineering support, and part of the job is sequencing them
  • Push the team toward measuring things that are hard to measure — behavioral drift, prompt quality, harness parity — not just things that are easy
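As promised above, here is a toy model of versioned system prompt deployment with rollback, in the spirit of the infrastructure this team owns. The shapes and names (PromptRegistry, deploy, rollback) are assumptions for illustration, not Anthropic’s actual internals.

```python
from dataclasses import dataclass, field

@dataclass
class PromptRegistry:
    # surface (e.g. "claude.ai") -> full version history of its system prompt
    versions: dict[str, list[str]] = field(default_factory=dict)
    # surface -> index into its history that is currently live
    live: dict[str, int] = field(default_factory=dict)

    def deploy(self, surface: str, prompt: str) -> int:
        """Append a new prompt version and make it live; return its index."""
        history = self.versions.setdefault(surface, [])
        history.append(prompt)
        self.live[surface] = len(history) - 1
        return self.live[surface]

    def rollback(self, surface: str) -> int:
        """Revert a surface to its previous prompt version."""
        if self.live.get(surface, 0) == 0:
            raise ValueError(f"no earlier version to roll back to for {surface!r}")
        self.live[surface] -= 1
        return self.live[surface]

    def current(self, surface: str) -> str:
        return self.versions[surface][self.live[surface]]

registry = PromptRegistry()
registry.deploy("claude.ai", "v1 system prompt text")
registry.deploy("claude.ai", "v2 system prompt text")
registry.rollback("claude.ai")        # e.g. v2 regressed on an eval
print(registry.current("claude.ai"))  # -> "v1 system prompt text"
```

The design point is that every prompt change is an append to an auditable history rather than an in-place mutation, which is what makes fast rollback and review tooling possible.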

Benefits

  • Competitive compensation and benefits
  • Optional equity donation matching
  • Generous vacation and parental leave
  • Flexible working hours
  • A lovely office space in which to collaborate with colleagues