Research Manager, AI Safety

Cambridge Boston Alignment Initiative
Cambridge, MA (Hybrid)

About The Position

The Cambridge Boston Alignment Initiative (CBAI) is a nonprofit research organization dedicated to ensuring a safe and beneficial transition to advanced AI systems through research and education. CBAI produces original research and accelerates AI safety research via fellowship programs. Following a successful launch, CBAI is scaling rapidly in 2026, expanding its fellowship cycles, cohort size, and team. The Research Manager will collaborate with research fellows and mentors on cutting-edge work in areas such as interpretability, AI control, formal verification, evaluations, and AI governance & policy. The role requires technical research experience and/or governance & policy research experience. The position is primarily based in Cambridge, MA, with potential for hybrid flexibility and access to co-working spaces in Berkeley and NYC for specific circumstances. A start date of May 2026 is anticipated. CBAI is an Equal Opportunity Employer.

Requirements

  • Experience supporting complex intellectual work, such as teaching, managing technical teams, conducting research, consulting, coordinating academic programs, or providing substantive feedback.
  • Understanding of the research process from an internal perspective, with substantial analytical work experience (academia, policy analysis, consulting, or industry research).
  • Skill in providing constructive feedback that advances work, strengthens arguments, tightens methodology, and clarifies findings, even outside core expertise.
  • Excellent communication skills, including clear explanation of complex concepts and effective constructive feedback.
  • Proactive communication when unsure or noticing potential problems.
  • Interpersonal skills including genuine empathy, active listening, and a servant leadership approach.
  • Enjoys helping others succeed and takes pride in their accomplishments.
  • Skill in managing stakeholders and coordinating between multiple parties (researchers, advisors, administrators).
  • Mission-motivated with strong alignment to CBAI's mission and familiarity with AGI safety and catastrophic AI risks.
  • Passionate about contributing meaningfully to reducing catastrophic AI risks and accomplishing as much as possible toward that goal.
  • Organized and conscientious, with the ability to keep complex projects on track, follow through reliably, and maintain clear communication.
  • Receptive to feedback and committed to continuously improving their approach.
  • Curious and adaptable, excited to learn about diverse research agendas and work with researchers from varied backgrounds.
  • Actively seeking tools and approaches to improve fellowship effectiveness.
  • Bachelor's degree or higher in Computer Science, Mathematics, Statistics, Economics, Public Policy, Political Science, or a related field.
  • Research experience demonstrating strong methodological knowledge.
  • Genuine interest in building a career in AI safety research.
  • U.S. work authorization required (OPT accepted).

Nice To Haves

  • Previous involvement in AI safety/alignment programs or similar field-building initiatives.
  • Published research in interpretability, AI control, or adjacent agendas.
  • Experience managing research programs or academic initiatives.

Responsibilities

  • Conduct frequent one-on-ones with fellows, providing feedback on research progress, helping them overcome obstacles, coaching them through challenges such as debugging and navigating the literature, and supporting data collection, analysis, methodology development, and hypothesis testing.
  • Provide feedback on fellows' research to foster a rigorous approach.
  • Connect fellows with relevant resources, literature, and opportunities.
  • Communicate with fellows' mentors to define research objectives and support research progression.
  • Contribute to the fellow selection process by reviewing and interviewing candidates.
  • Design reading group curriculum components and workshop programs.
  • Curate a speaker event series based on fellow profiles and recent studies.
  • Support special projects aligned with strengths (e.g., applicant selection, evaluation frameworks, mentor onboarding).
  • Meet weekly with program leadership to enhance feedback loops and improve the program.
  • Stay current on technical AI alignment or governance developments relevant to fellows' work.
  • Prepare weekly briefs on recent developments in the field for fellows.

Benefits

  • 5% 403(b) match contribution
  • Comprehensive health insurance
  • Generous PTO policy
  • Meals provided during weekdays
  • Employer-paid commuter benefits
  • Reimbursement for work-related technology and/or home office expenses
  • Opportunity to closely contribute to frontier research
  • Potential co-authorship in collaborations