Snapshot
Artificial Intelligence could be one of humanity's most useful inventions. At Google DeepMind, we're a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.
About Us
The Agentic Red Team is a specialized, high-velocity unit within Google DeepMind Security. Our mission is to close the "Agentic Launch Gap": the critical window in which novel AI capabilities outpace traditional security reviews. Unlike traditional red teams, we operate with extreme agility, embedding directly with product teams as both a consulting partner and an exploitation arm. We rely on Google Core for foundational system-level protections, allowing us to focus exclusively on model- and agent-layer risks. Through rapid-response security engineering and the development of "Auto Red Teaming" techniques, we turn immediate findings into robust defensive strategies.
The Role
As the Security Lead for the Agentic Red Team, you will direct a specialized unit of AI Researchers and Offensive Security Engineers focused on adversarial AI and agentic exploitation. Operating as a technical player-coach, you will architect complex, multi-turn attack scenarios while managing cross-functional partnerships with Product Area leads and Google security to influence launch criteria. You will drive the evolution of AI safety by bridging manual exploration with automated regression pipelines, ensuring non-deterministic risks are identified, measured, and mitigated before deployment.
Job Type
Full-time
Career Level
Mid Level