About The Position

We are seeking a Computer Vision Engineer to join our Autonomy engineering team to advance the visual perception and vision-based autonomy capabilities of our UAVs and public safety products. You will design, implement, and optimize real-time computer vision algorithms that enable robust localization, mapping, and visual navigation in challenging environments. In this role, you will work across VIO, VSLAM, depth and reconstruction pipelines, and visual scene understanding, collaborating closely with software, autonomy, controls, and hardware teams to bring vision algorithms into production UAV systems.

Requirements

  • Bachelor’s, Master’s, or PhD in Computer Science, Robotics, Electrical Engineering, or a related field, with at least 3 years of industry experience. Note: we are considering candidates for mid-career, senior, and principal positions.
  • Strong programming skills in C++ and Python, with experience building real-time systems.
  • Experience developing computer vision or perception systems for robotics or UAVs, with a foundation in VSLAM, VIO, and/or related topics. Proficiency with standard frameworks and modern computer vision techniques.
  • Familiarity with implementing CV models or pipelines on embedded systems, GPUs, or hardware accelerators.
  • Hands-on experience with robotics or UAV testing, including data collection, system debugging, and field validation.

Nice To Haves

  • Deep knowledge of sensor fusion and tightly coupled vision–IMU systems.
  • Experience with machine learning–based perception, including training and optimizing deep models for edge hardware.
  • Background in vision-based navigation, visual servoing, or perception-driven autonomy.
  • Strong understanding of real-time systems, GPU optimization, or high-performance computer vision.
  • Familiarity with UAV safety, reliability, and regulatory considerations for autonomous systems.
  • Ability to help shape architecture, mentor engineers, and drive cross-functional technical decisions as needed.
  • Experience with ROS, PX4, MAVSDK, and/or similar robotics middleware.

Responsibilities

  • Research, design, and implement vision-based localization and mapping algorithms, including VIO and VSLAM.
  • Develop real-time computer vision pipelines for tracking, depth estimation, stereo/mono reconstruction, and dense/semi-dense mapping.
  • Architect and optimize vision-centric sensor fusion systems combining cameras, IMUs, LiDAR, radar, and other sensors for robustness in diverse environments.
  • Build perception algorithms enabling vision-based navigation, including feature tracking, obstacle detection, and perception-driven flight behaviors.
  • Develop computer vision and machine learning models for scene understanding, object detection, and dynamic obstacle identification.
  • Implement and optimize CV pipelines on embedded GPU or accelerator platforms, focusing on high performance and low latency.
  • Validate perception and autonomy performance through simulation, hardware-in-the-loop, and real-world flight testing.
  • Collaborate with cross-functional teams to ensure seamless integration with autonomy, controls, mechanical, and firmware systems.
  • Contribute to technical strategy, establish best practices, and help guide the evolution of the perception stack as the system scales and matures.

Benefits

  • Comprehensive medical, dental, and vision plans for our employees and their families
  • 401(k) plan
  • Maternity and paternity leave
  • Flexible Time Off (Exempt) / Paid time off (Non-Exempt)
  • Flexible work environment
  • ORCA pass (for those in the Puget Sound area)
  • Free parking (Seattle office)
  • Free snacks, drinks and espresso (Seattle office)
© 2024 Teal Labs, Inc.