About The Position

As a technical leader within the team, you will own the end-to-end architecture, rapid prototyping, and productization of advanced machine learning-based auto-focus (AF) algorithms. You will navigate ambiguity independently to drive the long-term ML roadmap for AF, spearheading the design of novel learning-based systems integrated on Apple camera platforms that deliver a seamless auto-focus user experience in any scene condition, from bright light to low light. You will determine methods and procedures on complex projects, frequently representing your area while leading cross-functional work to deploy enhanced machine learning-based AF features. This includes coordinating the activities of sub-teams to create sophisticated architecture, training, and tooling for machine learning-based auto-focus development. You will partner deeply with the SoC architecture team to influence future silicon designs, the hardware team to evaluate new camera components impacting auto-focus, and the firmware team to optimize system-level flows for machine learning algorithms. The ideal candidate is a visionary problem-solver who thinks originally, resolves highly complex issues in creative ways, and is a proven mentor capable of inspiring innovation in others.

Requirements

  • MS in Computer Science, Machine Learning, Electrical Engineering, or a related field.
  • Experience defining datasets for training machine learning networks on low-level vision tasks, including dataset curation and data augmentation strategies for robust training.
  • Expertise in modern machine learning (ML) frameworks and libraries, specifically PyTorch or TensorFlow/TFLite/LiteRT.
  • Strong software engineering and architectural skills, with high proficiency coding in Python and C.

Nice To Haves

  • Experience with machine learning for practical low-level computer vision applications, including one or more of the following areas: auto-focus, stereo disparity/depth, depth estimation, defocus/blur estimation, optical flow estimation, sensor fusion.
  • Experience with defining datasets for training temporal networks.
  • Good knowledge of optics (Point Spread Functions, Depth-of-Field, etc.) and of image quality metrics critical to image sharpness evaluation (Modulation Transfer Function, Spatial Frequency Response, Acutance, Blur/Defocus Estimation, etc.).
  • Track record of pioneering innovation, demonstrated through publications in top-tier computer vision conferences (e.g., CVPR, ICCV, ECCV) and/or patents.

Responsibilities

  • Own the end-to-end architecture, rapid prototyping, and productization of advanced machine learning-based auto-focus algorithms.
  • Drive the long-term ML roadmap for AF, spearheading the design of novel learning-based systems integrated on Apple camera platforms.
  • Determine methods and procedures on complex projects, frequently representing your area while leading cross-functional work to deploy enhanced machine learning-based AF features.
  • Coordinate the activities of sub-teams to create sophisticated architecture, training, and tooling for machine learning-based auto-focus development.
  • Partner deeply with the SoC architecture team to influence future silicon designs, the hardware team to evaluate new camera components impacting auto-focus, and the firmware team to optimize system-level flows for machine learning algorithms.