Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

We are looking for a research-oriented engineer to develop the methods that make our safety evaluations representative, robust, and informative. You'll work on questions like: How do we measure whether a model is safe? How do we create evaluations that reflect real-world usage rather than synthetic benchmarks? How do we know our graders are accurate?

This role sits at the intersection of applied ML research and engineering. You'll design experiments to improve how we evaluate model behavior, then ship those methods into pipelines that inform model training and deployment decisions. Your work will directly shape how Anthropic understands and improves the safety of our models across misuse, prompt injection, and user well-being.
Job Type: Full-time
Career Level: Mid Level
Number of Employees: 11-50 employees