Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority. As part of the Security & Privacy Research Team at Google DeepMind, you will play a key role in researching, designing, and building the technology that will help us work with and control advanced AI systems. Agents are increasingly used to handle long-horizon tasks. We already see this in coding, where advances are moving us from simple code autocomplete to advanced AI systems that write code, build, test, and debug all on their own. As we get closer to AGI, we need guarantees that help control and guide these agents, especially in scenarios where the agent’s capabilities may exceed those of the systems tasked with monitoring it. Go beyond traditional security assumptions to model how a highly capable agent could misuse its access, exfiltrate data, or establish a rogue deployment within complex production environments. Develop techniques for monitoring advanced agents. How can we detect emergent deception, collusion, or obscured long-term plans before they result in harmful actions? Create novel evaluation methodologies and benchmarks to measure the effectiveness of different control strategies against highly capable simulated adversaries.
Stand Out From the Crowd
Upload your resume and get instant feedback on how well it matches this job.
Job Type
Full-time
Career Level
Mid Level
Education Level
Ph.D. or professional degree
Number of Employees
5,001-10,000 employees