About The Position

At Apple, we believe privacy is a fundamental human right. Our Security Engineering & Architecture (SEAR) organization is at the forefront of protecting billions of users worldwide, building security into every product, service, and experience we create. The SEAR ML Security Engineering team combines cutting-edge machine learning with world-class security engineering to defend against evolving threats at unprecedented scale. We're responsible for developing intelligent security systems for Apple Intelligence that protect Apple's ecosystem while preserving the privacy our users expect and deserve.

We're seeking a staff-level ML Security Research Scientist who operates at the intersection of applied research and production impact. You'll lead original security research on agentic ML systems deployed at scale: driving secure agentic design directly into shipping products, identifying real vulnerabilities in tool-using models, and designing adversarial evaluations that reflect actual attacker behavior. You'll work at the boundary between research, platform engineering, and product security, translating your findings into architectural decisions, launch requirements, and long-term hardening strategies that protect billions of users.

Description

This role focuses on applied security research for production ML systems, with an emphasis on agentic and tool-using models deployed at scale. You will lead research efforts that surface real security risks in shipped or near-shipped systems, grounded in real system behavior, and drive mitigations that integrate cleanly into Apple's ML platforms and products. Impact is measured by risk reduction in production, not theoretical results alone.

Requirements

  • Ph.D. or equivalent experience in machine learning, security, systems, or a related field
  • Demonstrated experience in applied ML security, adversarial ML, or systems security with real-world impact
  • Strong experimental and engineering skills, with an emphasis on reproducibility and operational relevance

Nice To Haves

  • Experience researching or securing LLM-based or tool-augmented ML systems
  • Ability to work fluidly across research, engineering, and security review processes
  • Track record of influencing production systems through research-driven insights
  • Publications in top venues are a plus, but production impact is the primary signal


What This Job Offers

Job Type

Full-time

Career Level

Staff Level

Education Level

Ph.D. or professional degree

Number of Employees

5,001-10,000 employees
