Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

The Horizons team leads Anthropic's reinforcement learning (RL) research and development, playing a critical role in advancing our AI systems. We've contributed to every Claude release, with significant impact on the autonomy, coding, and reasoning capabilities of Anthropic's models.

We're hiring for the Cybersecurity RL team within Horizons. As a Research Engineer, you'll help to safely advance the capabilities of our models in secure coding, vulnerability remediation, and other areas of defensive cybersecurity. This role blends research and engineering, requiring you both to develop novel approaches and to realize them in code. Your work will include designing and implementing RL environments, conducting experiments and evaluations, delivering your work into production training runs, and collaborating with other researchers, engineers, and cybersecurity specialists across and outside Anthropic.

The role requires domain expertise in cybersecurity paired with interest or experience in training safe AI models. For example, you might be a white hat hacker who's curious about how LLMs could augment or transform your work, a security engineer interested in how AI could help harden systems at scale, or a detection and response professional wondering how models could enhance defensive workflows.
Job Type: Full-time
Career Level: Mid Level
Number of Employees: 1,001-5,000 employees