Computer Vision Engineer

9 Mothers · Austin, TX
Posted 19h ago · Onsite

About The Position

We are seeking a Computer Vision Engineer to serve as the “eyes” of our autonomous c-sUAS platforms. You will design, implement, and optimize the entire perception pipeline, specializing in low-latency, high-frame-rate processing to track small, fast objects with zero margin for error. You should be comfortable building models and systems from the ground up: rather than simply plugging in off-the-shelf tools like YOLO, you will build models from scratch and demonstrate a fundamental understanding of the problem space.

Requirements

  • Programming: Strong background in Python and/or C++ (C++ for performance-critical code).
  • Libraries: Expertise in computer vision libraries such as OpenCV and the image-processing modules of PyTorch or TensorFlow.
  • Architecture: Hands‑on experience building embedded computer vision systems for real-time applications involving small, fast objects.
  • Geometric vision: Deep understanding of camera calibration, image rectification, and 3D geometry.
  • Compliance: This position requires access to export-controlled information under ITAR. Only U.S. persons are permitted to access such information. Must be willing to submit to a background check.

Nice To Haves

  • Prior defense startup experience
  • Security clearance or ability to obtain one
  • Passion for building robots or engineering projects as a hobby

Responsibilities

  • Architect and implement the entire embedded computer vision pipeline in Python and C++, reaching for C++ where performance demands it.
  • Build and deploy real-time object detection models (e.g., YOLO-class architectures) and speed-optimized CNNs to achieve world-class detection and tracking.
  • Optimize code for low-latency, high-frame-rate performance on resource-constrained embedded systems, particularly NVIDIA Jetson platforms and their camera pipelines.
  • Apply geometric vision principles (calibration, rectification, and 3D geometry) to translate 2D camera footage into accurate 3D coordinates for fire control and guidance systems (see the sketch after this list).
  • Collaborate closely with robotics and AI teams to ensure perception data is reliable for autonomous decision‑making and precise countermeasure guidance.
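
As a rough illustration of the geometric-vision work described above, the sketch below back-projects a single 2D detection into a 3D point in the camera frame using OpenCV. It assumes a calibrated pinhole camera (intrinsic matrix and distortion coefficients) and an externally supplied range estimate; the calibration values and the pixel_to_camera_frame helper are hypothetical and only illustrate the kind of math involved, not this team's actual pipeline.

    import numpy as np
    import cv2

    def pixel_to_camera_frame(u, v, depth_m, camera_matrix, dist_coeffs):
        """Back-project a detected pixel (u, v) to a 3D point in the camera frame.

        Assumes a calibrated pinhole camera and an externally supplied range
        (depth_m, metres along the optical axis), e.g. from stereo or a rangefinder.
        """
        # Undistort and normalize: yields (x, y) on the z = 1 plane.
        pts = np.array([[[u, v]]], dtype=np.float64)
        x_n, y_n = cv2.undistortPoints(pts, camera_matrix, dist_coeffs)[0, 0]

        # Scale the normalized ray by the range to recover metric coordinates.
        return np.array([x_n * depth_m, y_n * depth_m, depth_m])

    # Hypothetical calibration values, for illustration only.
    K = np.array([[900.0,   0.0, 640.0],
                  [  0.0, 900.0, 360.0],
                  [  0.0,   0.0,   1.0]])
    dist = np.zeros(5)  # assume negligible lens distortion

    print(pixel_to_camera_frame(700.0, 300.0, depth_m=25.0,
                                camera_matrix=K, dist_coeffs=dist))

Going from the camera frame to a world or launcher frame would additionally require the camera extrinsics (pose), which are omitted here for brevity.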

Benefits

  • Competitive salary + early equity
  • Opportunity to build systems the Department of Defense actively needs
  • New lab equipped with Jetsons, scopes, and 3D printers
  • Direct influence on product and technology roadmap