We're building a talent-dense, high-agency AI safety team at Lila that will engage all core teams within the organization (science, model training, lab integration, etc.) to prepare for risks from scientific superintelligence. The team's initial focus will be to build and implement a bespoke safety strategy for Lila, tailored to its specific goals and deployment plans. This will involve technical safety strategy development, broader ecosystem engagement, safety-focused evaluations, safety systems to mitigate risks, and a safety research agenda that explores longer-term needs such as oversight of superintelligent scientific systems.

We're seeking a Technical Mitigations Lead to lead the build-out of safety systems at Lila for the safe deployment of our scientific capabilities to the world. Given the novelty of Lila's workflows, which integrate frontier-class language models with narrow scientific tools and lab-based automation, this role will require the design and deployment of technical safeguards beyond the current state of the art. We expect the person in this role to begin the initial mitigations build-out themselves, and then gradually build a team to support this function.
Job Type
Full-time
Career Level
Mid Level
Education Level
No Education Listed