AI Red Team Lead Engineer

U.S. BankMinneapolis, MN
Hybrid

About The Position

The AI Red Team Lead Engineer leads the execution and evolution of offensive security activities focused on AI/ML systems, platforms, and integrations, in addition to traditional enterprise attack surfaces. Owns the design, execution, and reporting of AI-focused red team operations, adversarial testing, and threat emulation exercises targeting models, data pipelines, AI-enabled applications, and supporting infrastructure. Acts as a senior technical authority and program lead for AI red teaming, partnering with teams to identify, validate, and communicate AI-related risks. Drives maturity through repeatable testing methodologies, automation, custom tooling, and clear articulation of business impact.

Requirements

  • Bachelor's degree, or equivalent work experience
  • Eight plus years of relevant experience
  • Thorough understanding of the applicable information security systems, policies, and procedures
  • Effective communication, presentation skills, leadership, problem-solving and analytical skills
  • Proven collaboration and influencing skills

Nice To Haves

  • Hands-on experience testing or securing AI/ML systems, including LLMs or other model classes
  • Knowledge of AI threat models and attack techniques including, but not limited to, prompt injection, model extraction, training data poisoning, inference abuse, hallucination exploitation
  • Familiarity with AI platforms and tooling (e.g., model APIs, orchestration frameworks, evaluation pipelines)
  • Significant red team experience, including adversary emulation and multi-stage attack chains
  • Proven skill developing proof-of-concept exploits and custom offensive tooling
  • Strong understanding of red team and offensive AI techniques and tooling
  • Expertise defeating or bypassing endpoint and AI-adjacent security controls (EDR/XDR, API protections, guardrails)
  • Experience with cloud, containerized, and AI-hosting environments
  • Proficiency in one or more languages (e.g., Python, PowerShell, Go, C/C++, Shell)
  • Ability to translate research into operational tooling
  • Exceptional written and verbal communication skills

Responsibilities

  • Lead AI Red Team operations, including adversarial testing of: Foundation and custom models (LLMs, vision, speech, decision systems), Model deployment environments (APIs, plugins, agents, RAG pipelines), Training, evaluation, and inference pipelines, Data ingestion, labeling, and governance controls
  • Design and execute AI-specific threat emulation aligned to real-world adversaries, misuse scenarios, and emerging attack techniques (e.g., prompt injection, data poisoning, model inversion, jailbreaks, supply chain risks).
  • Develop and maintain custom AI red team tooling, frameworks, and automation to scale testing and improve repeatability.
  • Perform security research into emerging AI attack techniques, model vulnerabilities, and defensive gaps.
  • Partner with detection, engineering, and governance teams to support purple-team and control validation activities.
  • Contribute to AI security standards, testing guidance, and program strategy.
  • Mentor and provide technical leadership to red team engineers.

Benefits

  • Healthcare (medical, dental, vision)
  • Basic term and optional term life insurance
  • Short-term and long-term disability
  • Pregnancy disability and parental leave
  • 401(k) and employer-funded retirement plan
  • Paid vacation (from two to five weeks depending on salary grade and tenure)
  • Up to 11 paid holiday opportunities
  • Adoption assistance
  • Sick and Safe Leave accruals of one hour for every 30 worked, up to 80 hours per calendar year unless otherwise provided by law
© 2026 Teal Labs, Inc
Privacy PolicyTerms of Service