About The Position

We are seeking a Senior AI Security Assurance Engineer to lead the offensive verification of our AI systems and pipelines. This role serves as the dedicated AI security lead within the Security Assurance organization, reporting directly to the Head of Security Assurance. While not a traditional covert red team position, it demands the same deep adversarial thinking and the ability to evaluate AI systems with the mindset of a determined skeptic. Your mission: verify and innovate.

You’ll be responsible for independently evaluating, challenging, and validating the security, safety, and integrity of all AI initiatives across the company, including AI embedded in products, internal AI use cases, training pipelines, model lifecycle management, and supporting infrastructure. This is not a compliance role; it’s a hands-on, experimental one.

At Zoom, Security Assurance encompasses Offensive Security (product, infrastructure, hardware, red team), PSIRT, Product Vulnerability Management, and Bug Bounty. You will operate across all of these domains as the organization’s primary authority on AI-related risk, capabilities, and implementation, and you will be the AI expert in efforts to develop scalable, intelligent systems that automate and amplify Security Assurance at Zoom. You’ll both break and build: challenging assumptions in our AI infrastructure, features, and tools while creating systems that continuously expose and mitigate critical risk.

The Security Assurance team at Zoom is an adversarial, high-leverage group focused on finding and reducing the company’s most critical security risks. We work from an attacker’s mindset and operate well beyond checklists, audits, and standard SDLC gates, targeting the vulnerabilities and systemic failures that escape existing controls. We apply deep technical rigor and clear risk judgment to drive concrete product and platform changes, and we value evidence over assumptions and curiosity over comfort. This team is for truth-seekers who want their work to measurably reduce risk at global scale.

Requirements

  • Have a deep understanding of generative AI systems (transformers, diffusion models, multi-agent frameworks) and their security failure modes.
  • Have experience building novel AI/ML methods or adapting them to real-world security problems.
  • Demonstrate proficiency in Python, ML frameworks (PyTorch, TensorFlow, Hugging Face, LangChain), and modern cloud/data environments.
  • Be skilled at uncovering the true behavior and limitations of AI and platform systems through experimentation, code review, and automated adversarial techniques.
  • Be skilled at setting direction, advising peers, and communicating high-impact risks to executives.
  • Be unafraid to challenge assumptions or expose uncomfortable truths in service of user and system safety.
  • Demonstrate experience in red teaming, exploit development, or vulnerability research.

Responsibilities

  • Leading adversarial verification of AI systems: Design and execute deep, unconstrained assessments of AI models, pipelines, and agents, testing guardrails, safety layers, and data boundaries through offensive experimentation.
  • Uncovering gaps between promise and practice: Identify where AI security, safety, or privacy controls fail under pressure. Surface the mismatch between claims and reality.
  • Assessing the full AI lifecycle: Evaluate data, training, and deployment pipelines for risks like model poisoning, prompt injection, or fine-tuning abuse.
  • Developing AI-powered security discovery systems: Research, prototype, and operationalize machine learning–driven approaches to automatically detect, predict, and prioritize vulnerabilities and behavioral deviations in Zoom’s products and platform.
  • Automating and scaling offensive operations: Build AI-based frameworks to scale red teaming, vulnerability discovery, and bug bounty triage. Use LLMs, anomaly detection, and pattern learning to enhance automation and coverage.
  • Adapting cutting-edge research: Integrate the latest findings from offensive security research, autonomous agents, and AI-driven vulnerability analysis into Zoom’s security assurance programs.
  • Shaping AI security methodologies: Build frameworks for continuous AI-driven adversarial testing, automated validation, and system monitoring that scale across teams and products.
  • Translating findings into impact: Communicate verified risks and systemic weaknesses clearly to engineering and leadership, pairing technical insight with strategic direction.
  • Staying ahead of the curve: Track evolving AI architectures, attack vectors, and defenses, turning new research into offensive and defensive capability.