About The Position

The On-Device Machine Learning team at Apple enables the research-to-production lifecycle of innovative machine learning models that power magical user experiences on Apple's hardware and software platforms. Apple is the best place to do on-device machine learning, and this team sits at the heart of that area, working with research, software engineering, hardware engineering, and product teams. The team builds critical infrastructure: onboarding the latest machine learning architectures onto embedded devices, optimization toolkits that adapt these models to the target devices, machine learning compilers and runtimes that execute these models as efficiently as possible, and the benchmarking, analysis, and debugging toolchain needed to improve on new model iterations. This infrastructure underpins most of Apple's critical machine learning workflows across Camera, Siri, Health, Vision, and more, and as such is an integral part of Apple Intelligence.

Our group is seeking an ML Infrastructure Engineer with a focus on ML Insights and Forecasting. The role entails exploring new trends in ML architectures, getting them running with our on-device stack, and building infrastructure to enable regular coverage of these models. We are building the first end-to-end developer experience for ML development that, by taking advantage of Apple's vertical integration, allows developers to iterate on model authoring, optimization, transformation, execution, debugging, profiling, and analysis.

This role provides a great opportunity to bring the latest ML architectures and trends to our on-device inference stack. Work includes prototyping to get new ideas working, building infrastructure to enable regular coverage, and collaborating with inference stack teams to make any changes needed to enable new architectures and features as well as deliver full machine performance. The role further offers a learning platform to dig into the latest research on on-device machine learning, an exciting ML frontier! Possible example areas include model visualization, efficient inference algorithms, model compression, and/or ML compilers and runtimes.

Requirements

  • Master's or PhD in Computer Science or a relevant discipline.
  • Experience in system performance analysis and in optimizing ML models for edge inference.

Nice To Haves

  • Experience with standard ML architectures and models such as Transformers, CNNs, or Stable Diffusion.

Responsibilities

  • Explore the latest ML model architectures and prototype running them on device.
  • Build infrastructure to enable at-scale testing of new ML features.
  • Analyze achieved performance vs. roofline models on Apple's hardware (see the sketch after this list).
  • Analyze telemetry data to understand how users are using ML on device.
  • Identify gaps in today's ML inference stack and work with cross-functional teams to prioritize and address them.
  • Collaborate extensively with ML and hardware teams across Apple.
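
For context on the roofline analysis mentioned above, here is a minimal illustrative sketch in Python. The peak-compute and bandwidth figures are made-up placeholders, not specifications of any Apple hardware, and the function is a simplified model rather than an actual internal tool.

    # Minimal roofline sketch: attainable throughput is bounded by either peak
    # compute or memory bandwidth, depending on a kernel's arithmetic intensity.
    # The peak/bandwidth figures below are placeholders, not real hardware specs.

    PEAK_TFLOPS = 10.0    # hypothetical peak compute (TFLOP/s)
    PEAK_BW_GBS = 200.0   # hypothetical peak memory bandwidth (GB/s)

    def roofline_tflops(arithmetic_intensity_flops_per_byte: float) -> float:
        """Attainable TFLOP/s for a given arithmetic intensity (FLOPs per byte moved)."""
        # GB/s * FLOP/B gives GFLOP/s; divide by 1000 to convert to TFLOP/s.
        bandwidth_bound = PEAK_BW_GBS * arithmetic_intensity_flops_per_byte / 1000.0
        return min(PEAK_TFLOPS, bandwidth_bound)

    # Example: a matmul-like kernel (80 FLOP/byte) vs. a memory-bound
    # elementwise op (0.25 FLOP/byte), each with a measured throughput.
    for intensity, achieved in [(80.0, 7.5), (0.25, 0.04)]:
        ceiling = roofline_tflops(intensity)
        print(f"intensity={intensity:>6.2f} FLOP/B  ceiling={ceiling:.2f} TFLOP/s  "
              f"achieved={achieved:.2f} TFLOP/s  utilization={achieved / ceiling:.0%}")

Comparing measured throughput against this ceiling indicates whether a workload is compute-bound or memory-bound and how much headroom remains on the target device.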

What This Job Offers

  • Industry: Computer and Electronic Product Manufacturing
  • Education Level: Master's degree
  • Number of Employees: 5,001-10,000 employees
