About The Position

Apple’s Security Engineering & Architecture (SEAR) organization is responsible for the security of all Apple products. Passionate about safeguarding our users, we lead with offense, proactively uncovering and eliminating vulnerabilities before attackers ever get the chance. As AI systems become deeply integrated into operating systems, developer tools, and user experiences, they introduce entirely new attack surfaces vulnerable to prompt injection, agentic privilege escalation, data exfiltration, and AI-assisted exploitation at unprecedented scale. Think you have the creativity and determination to break these systems? Join us and help secure the next generation of intelligent platforms used by billions of people.

In this role, you will identify and exploit vulnerabilities in AI-powered features and agentic systems across Apple platforms. The AI systems themselves are the attack surface. You will help build offensive capabilities against autonomous systems and anticipate how adversaries may exploit AI-enabled systems in the wild. You will join a team of world-class offensive security researchers, and the work is critical: it directly shapes Apple’s security posture. You will conduct offensive research into AI-specific attack classes, including prompt injection, agentic data exfiltration and lateral movement, persistence mechanisms in AI workflows, and AI-assisted vulnerability discovery and exploitation.

Requirements

  • Solid grounding in common vulnerability classes (memory corruption, logic flaws, auth bypass)
  • Proven experience in security research, vulnerability discovery, or offensive security (e.g., browsers, 0-click, messaging systems, distributed systems, or AI platforms)
  • Strong understanding of modern AI/LLM systems and their failure modes (e.g., prompt injection, data exfiltration, model misuse)
  • Experience applying AI/ML tools (e.g., LLMs, agents) to automate or augment security research workflows

Nice To Haves

  • Experience attacking or defending agentic systems (multi-step AI workflows, tool-using agents, MCP-style integrations)
  • Familiarity with prompt injection techniques, obfuscation (e.g., encoding-based bypasses), and model manipulation strategies
  • Experience building or evaluating AI-driven vulnerability discovery pipelines
  • Understanding of browser-based AI integrations and risks (e.g., agentic browsing, data boundary violations)
  • Knowledge of capability-based security models or policy enforcement systems for AI agents
  • Experience with reverse engineering and low-level systems (IDA, Ghidra, LLDB)
  • Proficiency in one or more: Python, C/C++, Swift, Objective-C
  • Familiarity with Apple platforms (iOS, macOS) and their security architecture

Responsibilities

  • Identify and exploit vulnerabilities in AI-powered features and agentic systems across Apple platforms.
  • Build offensive capabilities against autonomous systems.
  • Anticipate how adversaries may exploit AI-enabled systems in the wild.
  • Conduct offensive research into AI-specific attack classes, including prompt injection, agentic data exfiltration and lateral movement, persistence mechanisms in AI workflows, and AI-assisted vulnerability discovery and exploitation.

What This Job Offers

Job Type

Full-time

Career Level

Senior

Education Level

No Education Listed

Number of Employees

5,001-10,000 employees
