About Anthropic

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the Team

The Alignment Special Ops team identifies and executes some of the most neglected, high-leverage projects across Anthropic’s Alignment org and beyond. We’re a small team with a broad mandate, and our work takes us across the entire company (and often, the broader safety research ecosystem). You will accelerate technical research, incubate new research efforts, and drive high-priority initiatives that don’t have a natural home elsewhere (e.g., the Anthropic Fellows Program).

About the Role

You’ll own 3–4 special projects at a time. These are generally ambiguous, cross-functional problems that need someone to define the goal and approach, build the plan, coordinate the team, and drive to a result. This role is in-person in San Francisco, CA.
Job Type: Full-time
Career Level: Mid Level