AIML - Staff ML Engineer, Responsible AI

Apple
San Francisco, CA

About The Position

Join Us in Shaping the Future of Generative AI at Apple! Are you passionate about making AI systems safer, more inclusive, and globally representative? Apple is seeking an expert Machine Learning Engineer to shape the future of responsible AI for the next generation of generative features. In this role, you will lead the responsible AI lifecycle end-to-end: assessing risks, defining policies, developing mitigation strategies, and driving continuous improvements. Your work will directly influence how we evaluate, align, and monitor the safety of large language and multimodal models. As part of Apple's Responsible AI group within the Human-Centered Machine Intelligence (HCMI) organization, you'll collaborate with cross-functional partners to minimize unintended consequences across people, systems, and society while elevating feature capabilities and the overall user experience. Together, we'll anticipate challenges, measure real-world impact, and deliver trusted, high-quality AI experiences to users around the globe. You'll also contribute to forward-looking research in fairness, robustness, uncertainty, and safety, pushing the boundaries of responsible AI at scale.

Description

Our team leads Responsible AI efforts for a global generative AI product in a highly cross-functional environment. The ideal candidate will define safety policies in collaboration with leadership, design, engineering, legal, and regulatory teams, ensuring alignment with product goals. They will architect mitigation and safety-alignment strategies for generative models and drive their integration into production. They will also develop models, tools, datasets, and evaluation methods to monitor generative models, diagnose failures, and improve their safety throughout the deployment lifecycle. We do all of this by incorporating human and automated feedback post-launch to continuously improve feature safety and user trust.
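
The description above mentions evaluation methods that monitor generative models and diagnose safety failures. As a purely illustrative sketch, not Apple's actual tooling, the Python snippet below computes per-category policy-violation rates over a batch of model responses; the taxonomy and the keyword-based classifier are hypothetical stand-ins for a real learned safety model.

```python
# Illustrative safety-evaluation pass over model outputs. All names here
# (POLICY_CATEGORIES, classify_response) are hypothetical; a production
# system would use a trained multi-label safety classifier.
from collections import Counter

POLICY_CATEGORIES = ["hate", "self_harm", "violence"]  # hypothetical taxonomy

def classify_response(text: str) -> list[str]:
    """Stand-in classifier: flags policy categories via keyword lookup."""
    keywords = {
        "hate": ["slur"],
        "self_harm": ["hurt myself"],
        "violence": ["attack"],
    }
    return [cat for cat, words in keywords.items()
            if any(w in text.lower() for w in words)]

def violation_rates(responses: list[str]) -> dict[str, float]:
    """Share of responses flagged per policy category."""
    counts = Counter(cat for r in responses for cat in classify_response(r))
    return {cat: counts[cat] / len(responses) for cat in POLICY_CATEGORIES}

if __name__ == "__main__":
    sample = ["How do I bake bread?", "I want to attack my neighbor."]
    print(violation_rates(sample))  # {'hate': 0.0, 'self_harm': 0.0, 'violence': 0.5}
```

Tracking rates like these per category, rather than a single aggregate score, is what makes it possible to diagnose which policy areas are regressing.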

Requirements

  • 3+ years of proven experience in machine learning, including work with generative models (Transformers, LLMs, VLMs), NLP, or computer vision
  • Proficiency in Python and data science libraries (e.g., Pandas) with strong skills in data analysis, visualization, and applied ML workflows
  • Excellent interpersonal skills and proven ability to translate sophisticated technical insights for cross‑functional partners, senior leadership, and executives
  • Strong analytical and independent problem-solving skills, with ability to navigate ambiguity
  • Experience designing and supporting human and automated evaluations, particularly with complex, nuanced, or multi‑labeled data (see the sketch after this list)
  • Hands‑on experience collecting and analyzing language, vision, or multimodal datasets
  • Background in failure analysis, quality engineering, or robustness testing for ML‑driven systems
  • Must be comfortable working with sensitive or potentially offensive content
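
As a loose illustration of the multi-label evaluation work the requirements describe, the sketch below measures per-item agreement between two human annotators with a Jaccard overlap in Pandas. The item IDs, annotator columns, and label sets are invented for the example.

```python
# Hypothetical multi-label annotation data: each annotator assigns a set of
# policy labels per item, so simple accuracy does not apply.
import pandas as pd

df = pd.DataFrame({
    "item_id": [1, 2, 3],
    "annotator_a": [{"hate"}, set(), {"violence", "self_harm"}],
    "annotator_b": [{"hate"}, {"violence"}, {"violence"}],
})

def jaccard(a: set, b: set) -> float:
    """Jaccard overlap between two label sets; 1.0 when both are empty."""
    return 1.0 if not (a | b) else len(a & b) / len(a | b)

df["agreement"] = [jaccard(a, b) for a, b in zip(df["annotator_a"], df["annotator_b"])]
print(df[["item_id", "agreement"]])
print("mean agreement:", df["agreement"].mean())
```

Low-agreement items are exactly the "complex, nuanced" cases worth routing back to annotators for guideline refinement.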

Nice To Haves

  • BS, MS, or PhD in Computer Science, Machine Learning, or related field, or equivalent experience
  • Proven success contributing in a highly cross‑functional environment
  • Experience shipping complex AI systems at global scale
  • Background in model explainability, uncertainty estimation, or interpretability
  • Curiosity and research interest in fairness, bias, and the societal impacts of generative AI
  • Passion for building innovative, high‑impact products that draw upon interdisciplinary skills

Responsibilities

  • Assess risks
  • Define policies
  • Develop mitigation strategies
  • Drive continuous improvements
  • Evaluate, align, and monitor the safety of large language and multimodal models
  • Collaborate with cross-functional partners to minimize unintended consequences across people, systems, and society while elevating feature capabilities and the overall user experience
  • Anticipate challenges
  • Measure real-world impact
  • Deliver trusted, high-quality AI experiences to users around the globe
  • Contribute to forward-looking research in fairness, robustness, uncertainty, and safety
  • Define safety policies in collaboration with leadership, design, engineering, legal, and regulatory teams, ensuring alignment with product goals
  • Architect mitigation and safety-alignment strategies for generative models and drive their integration into production
  • Develop models, tools, datasets, and evaluation methods to monitor generative models, diagnose failures, and improve their safety throughout the deployment lifecycle
  • Incorporate human and automated feedback post-launch to continuously improve feature safety and user trust (a monitoring sketch follows this list)
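
To make the last responsibility concrete, here is a minimal, hypothetical sketch of post-launch monitoring: it aggregates weekly safety flags by policy category and feedback source. The log schema is illustrative only, not any real production pipeline.

```python
# Hypothetical post-launch flag log: each row is one flagged response, with
# the feedback source ("human" or "automated") and the policy category.
import pandas as pd

flags = pd.DataFrame({
    "week": ["2024-W01", "2024-W01", "2024-W02", "2024-W02"],
    "source": ["human", "automated", "human", "automated"],
    "category": ["violence", "violence", "hate", "violence"],
})

# Weekly flag counts per (source, category): a simple trend table like this
# helps spot regressions and decide where mitigation effort should go next.
trend = (flags.groupby(["week", "source", "category"]).size()
              .unstack(["source", "category"], fill_value=0))
print(trend)
```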