About The Position

Apple Services Engineering (ASE), the team behind iCloud and Media services and the infrastructure that powers them, is looking for a Senior AI Security Engineer to partner with engineering teams working on new products and features. You will collaborate with developers, site reliability engineers, and security teams to protect ASE services and design a secure foundation for services at Apple. Your work will span end-to-end security assurance activities, including security architecture, threat modeling, security testing, and risk management. You will work with partner teams in security engineering, privacy, and offensive security to keep Apple's services secure for our users. If you love diving into complex technical systems, sharing security improvements, and making security better, we want to talk with you!

Description

In this role, you will be the primary security team point of contact for several large engineering efforts. You will work with engineering teams throughout their development lifecycle, conduct security reviews, develop threat models, and use the insights from these engagements to build standard methodologies. You will help define, automate, and advocate for platform-wide security improvements, and you will partner with your colleagues to raise the security bar for all engineering teams at Apple.

As a senior technical lead responsible for the security of Apple's internet-facing services and backend infrastructure, you will be:

  • Innately curious, listening for nuances and digging into details to understand systems and their weaknesses
  • Able to identify areas that are ripe for improvement and establish appropriate security goals
  • Experienced and comfortable establishing relationships with teams to drive security improvements
  • Current on new security technologies, vulnerabilities, and methodologies
  • An excellent verbal and written communicator
  • Able to develop proof-of-concept systems to automate security recommendations, vulnerability discovery, and process workflows
  • Able to use data to drive security review efficiency and prioritize high-value security team engagement
  • Responsible for security decisions impacting millions of users

Requirements

  • 5 or more years conducting security reviews, threat modeling, tracking findings, and communicating risk to engineering and leadership
  • At least 3 years focused on AI/ML systems security, including hands-on experience with LLM application security, prompt injection defenses, AI model security controls, and securing the infrastructure these systems run on
  • Demonstrated expertise in securing agent-based systems and AI integrations, including experience with Model Context Protocol (MCP) server security, agent orchestration frameworks, and understanding of attack surfaces in agentic workflows (e.g., tool use vulnerabilities, context poisoning, unauthorized actions)
  • Deep knowledge of AI-specific threat modeling and risk assessment, including familiarity with frameworks like OWASP Top 10 for LLMs, MITRE ATLAS, and ability to identify threats unique to AI systems such as training data poisoning, model extraction, adversarial inputs, and supply chain risks in AI dependencies
  • Experience with securing API integrations and data flow controls in AI contexts, including knowledge of authentication/authorization patterns for AI services, data sanitization for LLM inputs/outputs, secrets management in agent systems, and implementing guardrails for AI-generated content and actions
  • Conversant in at least one programming language such as Python, Java, Go, or Swift

Nice To Haves

  • Bachelor's degree or equivalent experience
  • Bonus points for community contributions like public CVEs, bug bounty recognition, open source tools, blogs, etc.
  • Experience securing cloud infrastructure for AI/ML workloads, including container orchestration (Kubernetes), GPU-enabled compute environments, model serving infrastructure, and implementing security controls for AI training and inference pipelines (network segmentation, secrets management, runtime protection, resource isolation)