Research Manager, AI Safety

Cambridge Boston Alignment Initiative
Cambridge, MA
$100,000 - $145,000 · Hybrid

About The Position

The Cambridge Boston Alignment Initiative (CBAI) is a nonprofit research organization dedicated to ensuring a safe and beneficial transition to advanced AI systems through research and education. CBAI produces original research and accelerates AI safety research via fellowship programs. Following a successful 2025 launch, CBAI is scaling rapidly in 2026, expanding its fellowship cycles, cohort size, and team. The organization is seeking Research Managers to support cutting-edge work in areas such as interpretability, AI control, formal verification, evaluations, and AI governance & policy. The role involves both research management (70% FTE) and program management & development (30% FTE). CBAI is committed to advancing AI safety and mitigating catastrophic AI risks.

Requirements

  • Experience supporting complex intellectual work, such as teaching, managing technical teams, conducting research, consulting, coordinating academic programs, or providing substantive feedback.
  • Substantial analytical work experience (academia, policy analysis, consulting, or industry research) to recognize solid research plans, identify blockers, and suggest concrete next steps.
  • Skilled at providing constructive feedback that advances work.
  • Excellent communication skills, including explaining complex concepts clearly and giving constructive feedback effectively.
  • Communicate proactively when unsure or when noticing potential problems.
  • Genuine empathy, active listening skills, and a servant leadership approach.
  • Enjoy helping others succeed and take pride in their accomplishments as if they were your own.
  • Skilled at managing stakeholders and coordinating between multiple parties (researchers, advisors, administrators).
  • Mission-motivated, with strong alignment to CBAI's mission and familiarity with AGI safety and catastrophic AI risks.
  • Passion for contributing meaningfully to reducing AI catastrophic risks.
  • Organized and conscientious, able to keep complex projects on track, follow through reliably, and maintain clear communication.
  • Receptive to feedback and committed to continuously improving your approach.
  • Curious and adaptable, excited to learn about diverse research agendas and work with researchers from varied backgrounds.
  • Actively seek out tools and approaches to improve fellowship effectiveness.
  • Bachelor's degree or higher in Computer Science, Mathematics, Statistics, Economics, Public Policy, Political Science, or a related field.
  • Research experience demonstrating strong methodological knowledge.
  • Genuine interest in building a career in AI safety research.
  • U.S. work authorization required (OPT accepted).

Nice To Haves

  • Previous involvement in AI safety/alignment programs or similar field-building initiatives.
  • Published research in interpretability, AI control, or adjacent agendas.
  • Experience managing research programs or academic initiatives.

Responsibilities

  • Conduct frequent 1:1s with fellows, providing feedback on research progress, coaching them through challenges (e.g., debugging, literature reviews, data collection & analysis, methodology development), and supporting experiment design and hypothesis testing.
  • Provide feedback on fellows' research to foster a rigorous approach.
  • Connect fellows with relevant resources, literature, and opportunities.
  • Communicate with fellows' mentors to define clear research objectives and support research progression.
  • Contribute to the fellow selection process by reviewing and interviewing candidates.
  • Design reading group curriculum components and workshop programs.
  • Curate a speaker event series based on fellow profiles and recent studies.
  • Support special projects aligned with strengths (e.g., applicant selection, evaluation frameworks, mentor onboarding).
  • Meet weekly with program leadership to enhance feedback loops and improve the program.
  • Stay current on technical AI alignment or governance developments relevant to fellows' work.
  • Prepare weekly briefs on recent field developments for fellows.

Benefits

  • 5% 403(b) match contribution
  • Comprehensive health insurance
  • Generous PTO policy
  • Meals provided during weekdays
  • Employer-paid commuter benefits
  • Reimbursement for work-related technology and/or home office expenses
  • Opportunity to closely contribute to frontier research
  • Potential for co-authorship in collaborations