ML Safety Research Engineer

AppleSan Francisco, CA
3d

About The Position

Our team, part of Apple Services Engineering, is looking for an ML Research Engineer to lead the design and continuous development of automated safety benchmarking methodologies. In this role, you will investigate how media-related agents behave, develop rigorous evaluation frameworks and techniques, and establish scientific standards for assessing risks they pose and safety performance. This role supports the development of scalable evaluation techniques that ensure our engineers have the right tools to assess candidate models and product features for responsible and safe performance. The capabilities you build will allow for the generation of benchmark datasets and evaluation methodologies for model and application outputs, at scale, to enable engineering teams to translate safety insights into actionable engineering and product improvements. This role blends deep technical expertise with strong analytical judgment to develop tools and capabilities for assessing and improving the behavior of advanced AI/ML models. You will work cross-functionally with Engineering and Project Managers, Product, and Governance teams to develop a suite of technologies to ensure that AI experiences are reliable, safe, and aligned with human expectations. The successful candidate will take a proactive approach to working independently and collaboratively on a wide range of projects. In this role, you will work alongside a small but impactful team, collaborating with ML and data scientists, software developers, project managers, and other teams at Apple to understand requirements and translate them into scalable, reliable, and efficient evaluation frameworks.

Requirements

  • Advanced degree (MS or PhD) in Computer Science, Software Engineering, or equivalent research/work experience
  • 1+ years of work experience either as a postdoc or in the industry
  • Strong research background in empirical evaluation, experimental design, or benchmarking
  • Strong proficiency in Python (pandas, NumPy, Jupyter, PyTorch, etc.)
  • Deep familiarity with software engineering workflows and developer tools
  • Experience working with or evaluating AI/ML models, preferably LLMs or program synthesis systems
  • Strong analytical and communication skills, including the ability to write clear reports
  • Proficiency in Python (pandas, NumPy, Jupyter, PyTorch, etc.)
  • Experience working with large datasets, annotation tools, and model evaluation pipelines
  • Familiarity with evaluations specific to responsible AI and safety, hallucination detection, and/or model alignment concerns
  • Ability to design taxonomies, categorization schemes, and structured labeling frameworks
  • Ability to interpret unstructured data (text, transcripts, user sessions) and derive meaningful insights
  • Strong ability to stitch together qualitative and quantitative insights into actionable guidance; strong ability to communicate complex architectures and systems to a variety of stakeholders
  • Education in Data Science, Linguistics, Cognitive Science, HCI, Psychology, Social Science, or a related field

Nice To Haves

  • Publications in AI/ML evaluation or related fields
  • Experience with automated testing frameworks
  • Experience constructing human-in-the-loop or multi-turn evaluation setups
  • Intermediate or Advanced Proficiency in Swift
  • Familiarity with RAG systems, reinforcement learning, agentic architectures, and model fine-tuning
  • Expertise in designing annotation guidelines and validation instruments and techniques
  • Background in human factors, social science, and/or safety assessment methodologies
© 2024 Teal Labs, Inc
Privacy PolicyTerms of Service