The Center for AI Safety (CAIS) is a leading research and advocacy organization focused on mitigating societal-scale risks from AI. Together with our sister organization, the Center for AI Safety Action Fund, we address AI's toughest challenges through technical research, field-building initiatives, and policy engagement.

As a Senior Research Scientist, you will lead and execute high-impact research that advances the safety and reliability of frontier AI systems, taking ownership of ambitious open problems and seeing them through to publication. We expect senior scientists to set a high bar for research quality and to push the team's thinking forward.

You'll design and run experiments on large language models, build the tooling needed to train and evaluate models at scale, and turn results into publishable research. You'll collaborate closely with CAIS researchers and with external academic and commercial partners, using our compute cluster to run large-scale training and evaluation. The work spans areas such as AI honesty, robustness, transparency, and trojan/backdoor behaviors, with the aim of reducing real-world risks from advanced AI systems.
Job Type
Full-time
Career Level
Senior
Education Level
Ph.D. or professional degree