About The Position

Join a team at the forefront of defending Apple's ecosystem. We build the large-scale machine learning systems that protect millions of users from emerging threats and ensure the integrity of our products. We are looking for an experienced Applied ML Engineer with a proven track record of shipping production models. The ideal candidate is passionate about tackling complex safety and security challenges using state-of-the-art techniques. In this role, you will design, build, and deploy the critical machine learning systems that are foundational to the safety of Apple's products, all while upholding our deep commitment to user privacy.

Description

As an engineer on this team, you will own the full lifecycle of our abuse detection machine learning models. You will collaborate closely with researchers to understand the threat landscape and partner with software and product teams to deploy robust, scalable defenses. We believe the most effective security systems are built by engineers who can translate adversarial insights into production-ready code. Your work will directly contribute to the architecture of Apple's AI platform and protect users from real-world harm. Your day-to-day work is outlined in the Responsibilities section below.

Requirements

  • 2+ years of experience shipping machine learning models to production. You have owned the end-to-end lifecycle of a model, from development to deployment and maintenance.
  • Strong familiarity with research fundamentals, machine learning principles, and development methodologies for LLMs, foundation models, and diffusion models.
  • Proficient programming skills in Python and deep learning toolkits (e.g., JAX, PyTorch, TensorFlow).
  • Ability to work with sensitive and offensive content as part of building robust security and abuse detection systems.

Nice To Haves

  • BS, MS, or PhD in Computer Science, Machine Learning, or a related field, or equivalent qualifications acquired through other avenues.
  • Hands-on experience with fine-tuning or aligning large language models for security or safety applications.
  • Experience building large-scale data processing pipelines and ML infrastructure.
  • Experience driving technical projects and collaborating with large, diverse, cross-functional teams.

Responsibilities

  • Design, build, and deploy production-grade ML models to detect and mitigate abuse across multiple modalities (text, image, audio).
  • Own the full ML lifecycle: from prototyping and data analysis to deployment, monitoring, and the continuous improvement of models in production.
  • Drive the data strategy to continuously improve model performance by analyzing distribution gaps, contributing to synthetic data pipelines, and creating automated annotation systems.
  • Architect end-to-end systems for monitoring platform activity, detecting misuse, and triggering automated enforcement actions in real-time.
  • Collaborate with cross-functional partners in engineering, research, and product to define project requirements, establish technical direction, and deliver robust security solutions.