AI Behavior Researcher - Child Safety and Mental Health

Transluce
San Francisco, CA
Onsite

About The Position

Transluce is a fast-moving nonprofit research lab building the public tech stack for AI evaluation and oversight. We are pioneering research into how AI chatbots and companions impact mental health and child safety, and we’re improving outcomes for millions of sensitive AI interactions with vulnerable users.

We are looking for interdisciplinary researchers who can bridge domain expertise in mental health, child safety, or adjacent social science with enough technical fluency to actively develop and refine evaluation methods. You don't need to be a pure ML engineer, but you should be comfortable working inside existing evaluation pipelines: adapting user simulators, refining judge prompts and rubrics, and collaborating with external domain experts to validate what you're measuring.

As an early member of a highly collaborative team, you will learn and grow quickly, working directly with leading AI researchers, frontier AI labs, and prominent child safety and mental health experts.

Requirements

  • Quantitative research background in AI evaluation, HCI, psychology, social data science, public health, or a related field.
  • Enough ML and programming fluency to navigate and modify existing evaluation codebases, even if you wouldn't build infrastructure from scratch.
  • A track record of reliable, trustworthy results: meticulous execution, sound experimental design, and epistemic self-awareness and transparency.
  • Ability to balance the needs of AI researchers and domain experts, and to communicate effectively with both researchers and senior decision makers.
  • Hands-on experience working directly with domain experts to integrate their expertise into system design.

Responsibilities

  • Build and extend Transluce’s AI evaluation methods and oversight tools for mental health, child safety, and related topics.
  • Identify and measure important emerging AI behaviors that may impact vulnerable users.
  • Identify gaps in current evaluations and design methods to address them.
  • Adapt existing methods to cover new domains, monitor evolving safety issues and trends, and inform key decisions in industry and government.
  • Carry out human subjects research to inform and validate evaluation design.
  • Build and manage relationships with clinicians, social science researchers, affected users, civil society, and relevant safety researchers at frontier labs.

Benefits

  • Visa sponsorship for international candidates