About The Position

We’re starting to see the incredible potential of multimodal foundation and large language models, and many applications in computer vision and machine learning that previously appeared infeasible are now within reach. We are looking for a highly motivated and skilled Machine Learning Integration Engineer to join our team in the Video Computer Vision group and help us realize that potential for real-time human understanding on Apple devices. The Video Computer Vision org has pioneered human-centric real-time features such as Face ID, FaceKit, and gaze and hand gesture control, which have changed the way millions of users interact with their devices. We balance research and product requirements to deliver pioneering experiences with Apple quality, innovating through the full stack and partnering with hardware, software, and AI teams to shape Apple's products and bring our vision to life.

Description

As part of the Video Computer Vision (VCV) team, you will support our ML development and continuously improve our features. Working closely with ML algorithms engineers, data scientists, and quality assurance teams, you’ll help deploy state-of-the-art computer vision technologies on Apple devices, balancing performance with the compute and power constraints of on-device inference.

Requirements

  • Bachelor's degree in Computer Science or a related discipline, and 2 years of relevant industry experience.
  • Strong foundational knowledge in Computer Science.
  • Extensive programming experience in Python, Swift, and C++.
  • Experience working with PyTorch.
  • Experience with machine learning model development lifecycle, including data preprocessing, model training, evaluation, and deployment.
  • Foundational understanding of ML algorithms and development pipelines, with the ability to work effectively with ML practitioners and integrate ML components into production systems.

Nice To Haves

  • Experience with CoreFoundation, RealityKit and CoreML.
  • Hands-on experience with CI/CD pipelines and practices.
  • Experience with live camera streaming applications: understanding of real-time video pipelines, image transformations, and rendering loops.
  • Familiarity with common computer vision techniques (e.g., object detection, segmentation, tracking, pose estimation), sequence models for real-time inference, and LLMs optimized for on-device performance.

Responsibilities

  • Support our ML development.
  • Continuously improve our features.
  • Help deploy state-of-the-art computer vision technologies on Apple devices, balancing performance with the compute and power constraints of on-device inference.