About Anthropic

Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role

As a Product Manager for Model Behaviors, you will partner with the Alignment Finetuning team to define and shape Claude's character, behaviors, and reinforcement signals—work that directly influences how millions of people experience AI. You will systematically identify high-priority behavioral improvements, coordinate across Research, Product, and Safeguards teams, and accelerate our ability to ship well-aligned models. The ideal candidate combines deep user empathy with the judgment to navigate nuanced behavior questions where there are no clear right answers.
Job Type: Full-time
Career Level: Mid Level
Number of Employees: 1,001-5,000 employees