Joining us as a Research Engineer, you'll be at the forefront of tackling one of the most critical challenges in AI today: safety and alignment. Your work will be pivotal in understanding and mitigating the risks of advanced AI, conducting foundational research to make our models safer, and solving the core technical problems of AI alignment, ensuring our models behave in accordance with human values and intentions. The Safety team is dedicated to pioneering and implementing techniques that make our models more robust, honest, and harmless. As a Research Engineer, you will bridge the gap between theoretical research and practical application, writing high-quality code to test hypotheses and integrating successful safety solutions directly into our products. Your research will not only protect millions of users but also contribute to the broader scientific community's understanding of how to build safe, beneficial AI.
Education Level: Ph.D. or professional degree
Number of Employees: 51-100 employees