Technical Program Management, Alignment

Anthropic · San Francisco, CA
$210,000 - $290,000 · Onsite

About The Position

About Anthropic

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the Team

The Alignment Special Ops team identifies and executes some of the most neglected, high-leverage projects across Anthropic’s Alignment org and beyond. We’re a small team with a broad mandate, and our work takes us across the entire company (and often the broader safety research ecosystem). You will accelerate technical research, incubate new research efforts, and drive high-priority initiatives that don’t have a natural home elsewhere (e.g., the Anthropic Fellows Program).

About the Role

You’ll own 3–4 special projects at a time. These are generally ambiguous, cross-functional problems that need someone to define the goal and approach, build the plan, coordinate the team, and drive to a result. This role is in-person in San Francisco, CA.

Requirements

  • 5+ years in chief-of-staff, program management, operations, or similar roles in a research, technical, or fast-moving environment (e.g., consulting, startups)
  • Can take a loosely-scoped problem, define a goal, break it into concrete steps, and execute without waiting for direction
  • Have built and managed teams, programs, or functions from scratch
  • Write clearly and concisely; you default to a one-page doc over a five-page one
  • Are comfortable making decisions with incomplete information
  • Can hold the details of multiple workstreams simultaneously while context-switching between them
  • Are deeply motivated by Anthropic’s mission of ensuring the world safely manages the transition through transformative AI
  • Bachelor’s degree in a field relevant to the role, or an equivalent combination of education, training, and/or experience, as demonstrated through coursework, training, or professional experience

Nice To Haves

  • Experience working directly with researchers, especially in AI safety or machine learning
  • Familiarity with the AI safety research landscape, key organizations, and ongoing debates

Responsibilities

  • Scope, plan, and drive model evaluation projects end-to-end: understand the goal with researchers, coordinate contributors across internal teams and external partners, staff efforts, and produce deliverables on tight deadlines
  • Manage external research collaborators: onboarding, expectation-setting, contracts, and handling edge cases as they arise
  • Synthesize complex information into decision-relevant inputs for leadership so they can move quickly
  • Identify new projects the company should take on, make the case in writing, get buy-in, and execute
  • Run the Alignment team offsite and similar events, including managing logistics, agendas, and delegation

Benefits

  • Competitive compensation and benefits
  • Optional equity donation matching
  • Generous vacation and parental leave
  • Flexible working hours
  • A lovely office space in which to collaborate with colleagues