Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

We are looking for Research Scientists to develop and productionize advanced autonomy evaluations on our Frontier Red Team. Our goal is to develop and implement a gold standard of advanced autonomy evals to determine the AI Safety Level (ASL) of our models. This will have major implications for how we train, deploy, and secure our models, as detailed in our Responsible Scaling Policy (RSP).

We believe that developing autonomy evals is one of the best ways to study increasingly capable and agentic models. If you've thought particularly hard about how models might act agentically and about the risks that entails, and you've built an eval or experiment around it, we'd like to meet you.
Job Type: Full-time
Career Level: Mid Level
Education Level: Bachelor's degree