Research Product Manager, Model Behaviors

Anthropic
San Francisco, CA (Hybrid)

About The Position

About Anthropic

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role

As a Product Manager for Model Behaviors, you will partner with the Alignment Finetuning team to define and shape Claude's character, behaviors, and reinforcement signals: work that directly influences how millions of people experience AI. You will systematically identify high-priority behavioral improvements, coordinate across Research, Product, and Safeguards teams, and accelerate our ability to ship well-aligned models. The ideal candidate combines deep user empathy with the judgment to navigate nuanced behavior questions where there are no clear right answers.

Requirements

  • Have a deep passion and curiosity for AI and LLMs
  • Use AI regularly
  • Have 5+ years in product management leading scaled conversational AI products
  • Are a first-principles thinker with the ability to navigate and execute amidst ambiguity, flexing into different domains based on the business problem at hand and finding simple, easy-to-understand solutions
  • Have a track record of delivering products and features to end-users (consumer or end-user B2B focus)
  • Have strong user empathy and the ability to synthesize vague or contradictory feedback into actionable priorities
  • Have strong judgment and model taste, with the ability to make tradeoffs when there is no clear right answer
  • Have a strong grasp of ML concepts and are willing to go deep on technical solutions
  • Have intellectual curiosity without ego—comfortable asking questions and learning independently
  • Think creatively about the risks and benefits of new technologies, moving beyond past checklists and playbooks
  • Have a creative, hacker spirit and love solving puzzles
  • Hold at least a Bachelor's degree in a related field or have equivalent experience

Responsibilities

  • Define behavioral defaults and steerability constraints
  • Develop and maintain taxonomies of model behaviors across capabilities
  • Identify, triage, and prioritize behavior issues and opportunities, coordinating input from users and the Research, Product, and Safeguards teams
  • Amplify alignment research breakthroughs, translating them into product, process, and model improvements
  • Deeply understand user interaction patterns to identify behavior improvements that make Claude more helpful and safe
  • Contribute to evals that measure alignment progress
  • Identify and scale initiatives and tools that help researchers ship alignment improvements faster

Benefits

  • Competitive compensation and benefits
  • Optional equity donation matching
  • Generous vacation and parental leave
  • Flexible working hours
  • A lovely office space in which to collaborate with colleagues