About The Position

In this role, you'll develop the next frontier of location intelligence in partnership with teams across sensing, Siri, Maps, and system frameworks. You'll work on problems that span research through production deployment (see Responsibilities below). An issue that affects 1% of a billion devices is a big issue, so you'll test rigorously, dogfood your work, and collect metrics across diverse user populations and edge cases.

Our software needs to provide a high level of intelligence while sipping battery, which makes this one of the most exciting engineering challenges in mobile computing. A dedication to users' privacy and security is core to how Apple does business. We want their devices to exhibit the high level of intelligence and proactivity that can only come from deep contextual understanding, and we don't want their sensitive data coming back to Apple or being exposed to third parties. Other companies solve similar problems in very different ways. Our way is more work. We believe it's worth it.

Requirements

  • 5+ years of experience developing commercial software, preferably systems-level or embedded software running on resource-constrained devices
  • Strong programming skills in C, C++, Objective-C, or Swift, with a solid foundation in algorithms, data structures, and computational complexity
  • Working knowledge of statistics and probability, including comfort with histograms, probability distributions, Bayesian inference, and hypothesis testing
  • Experience evaluating and optimizing system performance: memory footprint, CPU usage, power consumption, and I/O

Nice To Haves

  • Deep expertise in location technologies: GPS/GNSS positioning, WiFi-based localization, indoor positioning, sensor fusion for state estimation, or IMU-based dead reckoning. If you've built location estimators that fuse multiple sensor modalities, we especially want to hear from you.
  • Experience with machine learning for time-series data, spatial data, or behavioral prediction. On-device ML experience (model size optimization, quantization, power-efficient inference) is a strong plus.
  • Background in signal processing, Kalman filtering, particle filters, or other probabilistic state estimation techniques.
  • Experience with clustering algorithms (DBSCAN, hierarchical clustering, etc.) and unsupervised learning applied to spatial or temporal data.
  • Track record of shipping production systems that operate at scale under resource constraints (mobile, embedded, or edge computing environments).
  • Strong collaboration skills and ability to work effectively across teams with diverse expertise. At Apple, you'll partner closely with teams in sensing, connectivity, privacy, and application frameworks. You'll need to communicate clearly, plan collaboratively, and execute flexibly.
  • Experience with performance profiling tools (Instruments, dtrace, etc.) and systematic optimization of CPU, memory, and power usage.
  • Experience with large-scale data analysis for offline algorithm development, model validation, and performance evaluation across diverse user populations.

Responsibilities

  • Design and implement location state estimation algorithms that fuse multi-modal sensor data (GPS, WiFi positioning, accelerometer, altimeter, barometer) to build a rich understanding of user context and mobility patterns
  • Develop on-device machine learning models for place inference, route prediction, and behavioral forecasting that operate within strict power and memory constraints
  • Build data processing pipelines that aggregate, filter, and cluster real-world sensor data on mobile devices, balancing intelligence with resource constraints
  • Implement sophisticated algorithms for background location awareness and semantic understanding — then integrate them into production code running on hundreds of millions of devices
  • Collect and analyze real-world datasets to train models, validate performance, and iterate on algorithm design
  • Test rigorously. Dogfood your work. Collect metrics across diverse user populations and edge cases.
  • Optimize for the full system: CPU, memory, power consumption, and radio usage.