Anthropic is an AI safety and research company working to build reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our customers and society as a whole.

As a Safeguards Enforcement Lead focusing on frontier model abuse, you will serve as the dedicated program owner for one of the most consequential and fast-moving abuse vectors on our platform: actors who misuse Anthropic's models to train competing AI systems in violation of our usage policies and terms of service. Safety is core to our mission, and you'll own how we identify, investigate, and act against this category of harm, from developing the detection playbook to seeing enforcement cases through to resolution.
Job Type: Full-time
Career Level: Mid Level