The Agentic Red Team is a specialized, high-velocity unit within Google DeepMind Security. Our mission is to close the "Agentic Launch Gap"—the critical window where novel AI capabilities outpace traditional security reviews. Unlike traditional red teams that hand off reports and move on, we operate with extreme agility, embedding directly with product teams as both a consulting partner and an exploitation arm. We act as a "special forces" unit capable of jumping into high-priority launches, relying on Google Core for foundational system-level protections so we can focus exclusively on model- and agent-layer risks.

As a Senior Security Engineer on the Agentic Red Team, you will be the primary technical executor of our adversarial engagements. You will work "in the room" with product builders, identifying architectural flaws during the design phase, long before formal reviews begin. Your core focus will be performing complex, multi-turn attacks on production-level AI models, specifically targeting agentic behaviors such as tool usage and reasoning chains. You will not only find vulnerabilities but also help close the loop by contributing to "Auto Red Teaming" frameworks and defensive strategies, ensuring that your findings are codified into reusable guardrails for all Google agent developers.
Job Type
Full-time
Career Level
Senior