About The Position

We’re building the next generation of improvements to make Apple Intelligence more aware, more personal, and even more deeply integrated with the ecosystem of Apple products that millions of users know and love today. We’re enabling our intelligent systems with access to computer vision, additional sensing, and enhanced reasoning capabilities, all while upholding the strong privacy values our users can trust. We’re looking for software engineers to build user experiences that harness cutting-edge sensing and AI technology to bring new and delightful experiences to life.

Description

In this role, you will work across the stack, from low-level system software and frameworks to end-user experiences. Your work will be highly cross-functional, requiring collaboration with design, algorithms, software, services, privacy, security, performance, and hardware teams across Apple to define requirements and build end-to-end, user-facing features. You will write software using Swift, evaluate AI/ML algorithms, debug complex cross-device interactions, and contribute to a culture of shipping high-quality production code. You will also apply excellent UX intuition to identify and shape feature opportunities, iterate with design, and define the technical requirements that drive development.

Requirements

  • 5+ years in software development
  • Bachelor’s degree in a related field or equivalent experience
  • Strong programming skills in any programming language (preferably at least one of Swift, Objective-C, or C++) and a deep understanding of data structures, memory management, and concurrency
  • Systems thinking, including the ability to break down ambiguous problems and drive clarity on critical details
  • Demonstrated ability to translate user experience design into technical requirements
  • Solid cross-functional collaboration and technical communication skills

Nice To Haves

  • Shipping customer-facing features or products to production at scale
  • Developing for Apple platforms using Apple system frameworks like SwiftUI and ARKit
  • Building & shipping features using LLMs and/or Machine Learning algorithms, including on-device inference, data-driven validation, requirement definition, and collaboration with algorithm teams
  • Processing sensor data (e.g. image/video, audio, motion) and/or developing AR applications
  • Developing & validating personalization features, including working with sensitive data such as conversation transcripts or Health data
  • Developing under strict privacy and security constraints, including techniques like secure data processing, and/or privacy-by-design principles
  • Developing system software or frameworks, including API definition and performance optimization (particularly for resource constrained or real-time systems)