Transluce is a fast-moving nonprofit research lab building the public tech stack for AI evaluation and oversight. We are pioneering research into how AI chatbots and companions impact mental health and child safety, and we're improving outcomes for millions of sensitive AI interactions with vulnerable users.

We are looking for interdisciplinary researchers who can bridge domain expertise in mental health, child safety, or adjacent social science with enough technical fluency to actively develop and refine evaluation methods. You don't need to be a pure ML engineer, but you should be comfortable working inside existing evaluation pipelines: adapting user simulators, refining judge prompts and rubrics, and collaborating with external domain experts to validate what you're measuring.

As an early member of a highly collaborative team, you will learn and grow quickly, working directly with leading AI researchers, frontier AI labs, and prominent child safety and mental health experts.
Job Type
Full-time
Career Level
Senior
Education Level
None specified