About The Position

Apple is where individual imaginations gather together, committing to the values that lead to great work. Every new product we build, service we create, or Apple Store experience we deliver is the result of us making each other’s ideas stronger. That happens because each of us believes we can create something wonderful and share it with the world, changing lives for the better. It’s the diversity of our people and their thinking that inspires the innovation that runs through everything we do. When we bring everybody in, we can do the best work of our lives. Here, you’ll do more than join something — you’ll add something! As part of the Machine Learning Platform Technologies - ML Compute organization, you’ll connect the world’s best researchers with the world’s best AI infrastructure to take on the most challenging problems in machine learning and build the rock-solid foundation for some of Apple’s most innovative products. And this is Apple, so your team will innovate across the entire stack: hardware, software, algorithms — it’s all here.

Description

We are seeking a Distinguished Engineer to provide technical leadership in building and evolving next-generation AI infrastructure at Apple. In this role, you will shape the architecture and long-term technical strategy for large-scale training and inference systems, working at the intersection of AI research, systems, and cloud infrastructure. Your work will directly influence how frontier and production models are trained, deployed, and scaled across diverse accelerator platforms.

Requirements

  • Bachelor’s degree in Computer Science or a relevant technical field, or equivalent practical experience
  • 15 years of experience designing and building large-scale distributed systems
  • Proficiency in at least one backend language (e.g., Python, C++, Go, Rust)
  • Proficiency in cloud-native architectures and orchestration platforms (e.g., Kubernetes)
  • Hands-on experience working with ML accelerators such as GPUs and TPUs

Nice To Haves

  • Master’s degree or PhD in Computer Science or a related technical field
  • Experience supporting distributed training and/or inference workloads in production
  • Expertise in ML systems performance profiling, debugging, and optimization
  • Understanding of deep learning frameworks (e.g., PyTorch, JAX) and their underlying architectures