About The Position

Apple’s Platform Acceleration & Compute Efficiency (PACE) team is a high-leverage group operating at the critical intersection of our ML organizations, underlying compute infrastructure, and core platform tooling. Our mission is to empower Apple’s software engineering teams with efficient, scalable compute. By eliminating operational friction and optimizing the broader machine learning ecosystem, we directly accelerate the pace of development across the company.

As foundation models become increasingly central to Apple’s user experiences, maximizing the efficiency of our ML compute is paramount. In this role, you will focus relentlessly on compute efficiency, ensuring that Apple’s models run as fast, as reliably, and as cost-effectively as possible. You will tackle massive optimization challenges, from maximizing hardware utilization across GPUs, TPUs, and custom Apple Silicon to shaping workload scheduling and capacity allocation for large-scale model serving.

We are seeking a Senior Architect with deep expertise in ML infrastructure to act as a linchpin for Apple’s foundational inference strategy. You will be instrumental in defining, establishing, and monitoring compute-efficiency metrics across the software engineering organization. By partnering closely with model developers and infrastructure providers, your work will directly reduce serving costs, shape core engineering decisions, and enable the highly efficient, scalable inference required to power Apple Intelligence for hundreds of millions of users.

Requirements

  • MS or PhD in a relevant field
  • Direct experience with foundation model serving, inference, and training at scale
  • Familiarity with PyTorch, JAX, cluster management (Slurm, Kubernetes), or GPU/TPU hardware
  • Prior experience in efficiency, FinOps, or capacity planning
  • Experience negotiating technical roadmaps with platform or infrastructure teams
  • Background in technical and financial decision-making (TCO modeling, cost optimization)

Responsibilities

  • Own and support ML compute management for Apple’s inference workloads (GPU, TPU, and custom silicon) to enable large-scale model serving.
  • Collaborate closely with Apple Intelligence and ML engineering teams to understand their roadmaps and resource pain points, and to develop and implement resource strategies.
  • Optimize Apple’s ML workloads by driving performance improvements, maximizing resource utilization, and reducing serving costs through deep root-cause analysis that shapes both engineering decisions and the end-customer experience.
  • Architect solutions for large-scale optimization problems, including capacity allocation, workload scheduling, and cost reduction, enabling Apple's AI-driven experiences.
  • Advocate on behalf of Apple’s ML engineers, bringing a consolidated view of ML platform and model inference requirements to Apple’s internal infrastructure platform providers and third-party public cloud providers.


What This Job Offers

  • Job Type: Full-time
  • Career Level: Senior
  • Education Level: Ph.D. or professional degree
  • Number of Employees: 5,001-10,000 employees
