Senior Security Engineer, Agentic Red Team

DeepMind
6d$166,000 - $244,000

About The Position

The Agentic Red Team is a specialized, high-velocity unit within Google DeepMind Security. Our mission is to close the "Agentic Launch Gap"—the critical window where novel AI capabilities outpace traditional security reviews. Unlike traditional red teams that hand off reports and move on, we operate with extreme agility, embedding directly with product teams as both a consulting partner and an exploitation arm. We act as a "special forces" unit capable of jumping into high-priority launches, relying on Google Core for foundational system-level protections so we can focus exclusively on model and agent-layer risks. As a Senior Security Engineer on the Agentic Red Team, you will be the primary technical executor of our adversarial engagements. You will work "in the room" with product builders, identifying architectural flaws during the design phase long before formal reviews begin. Your core focus will be to perform complex, multi-turn attacks on production-level AI models, specifically targeting agentic behaviors like tool usage and reasoning chains. You will not only find vulnerabilities but also help close the loop by contributing to "Auto Red Teaming" frameworks and defensive strategies, ensuring that your findings are codified into reusable guardrails for all Google agent developers.

Requirements

  • Bachelor's degree in Computer Science, Information Security, or equivalent practical experience.
  • Experience in Red Teaming, Offensive Security, or Adversarial Machine Learning.
  • Strong coding skills in Python, Go, or C++ with experience building security tools or automation.
  • Technical understanding of LLM architectures, agentic workflows (e.g., chain-of-thought reasoning), and common AI vulnerability classes.

Nice To Haves

  • Hands-on experience developing exploits for GenAI models (e.g., prompt injection, adversarial examples, training data extraction).
  • Experience working in a consulting capacity with product teams or in a fast-paced "startup-like" environment.
  • Familiarity with AI safety benchmarks, evaluation frameworks, and fuzzing techniques.
  • Ability to translate complex probabilistic risks into actionable engineering fixes for developers.

Responsibilities

  • Execute Agile Red Teaming: Conduct rapid, high-impact security assessments on agentic services, focusing on vulnerabilities unique to GenAI such as prompt injection, tool-use escalation, and autonomous lateral movement.
  • Develop Advanced Exploits: Engineer and execute complex attack sequences that exploit non-deterministic model behaviors, agentic logic errors, and data poisoning vectors.
  • Build Automated Defenses: Write code to transform manual vulnerability discoveries into automated regression testing frameworks ("Auto Red Teaming") that prevent regression in future model versions.
  • Embed with Product Teams: Partner directly with developers during the design and build phases to provide immediate feedback, effectively shortening the feedback loop between offensive findings and defensive engineering.
  • Curate Threat Intelligence: Maintain and expand a library of agent-specific attack patterns and exploit primitives to establish robust release criteria for new models.

Benefits

  • bonus
  • equity
  • benefits
© 2024 Teal Labs, Inc
Privacy PolicyTerms of Service