About The Position

As an Embedded Perception Engineer, you will own how our autonomous systems perceive and understand the world — in real time, in operational environments, and under demanding maritime conditions. This role goes beyond model training. You will architect, deploy, and optimize perception pipelines that run reliably on embedded edge hardware. You’ll work across vision models, sensor fusion, and high-performance inference to deliver robust situational awareness to autonomous surface vessels executing real-world missions. If you thrive in dynamic environments, enjoy squeezing maximum performance from embedded AI systems, and want to see your work field-tested and mission-proven, this role offers immediate and meaningful impact.

Requirements

  • Bachelor’s degree or higher in Computer Science, Electrical Engineering, Robotics, Machine Learning, or a related field
  • 3+ years developing and deploying computer vision or perception systems in C++ and/or Python on Linux
  • Experience with modern deep learning frameworks (PyTorch, TensorFlow)
  • Hands-on experience with large vision models (ViT, CLIP, SAM, etc.)
  • Experience implementing sensor fusion across heterogeneous modalities
  • Proficiency with NVIDIA Triton, TensorRT, ONNX Runtime, or equivalent inference tooling
  • Ability to quickly navigate and understand complex codebases
  • Strong systems-thinking mindset
  • U.S. Citizenship and eligibility for U.S. security clearance

Nice To Haves

  • Experience deploying perception systems on embedded platforms (e.g., NVIDIA Jetson, FPGAs, or similar)
  • Familiarity with maritime or aerial sensor environments
  • Experience optimizing models for edge inference
  • Knowledge of autonomy frameworks (ROS2, MOOS-IvP)
  • Experience with safety-critical or real-time systems
  • Familiarity with military systems and defense acquisition processes

Responsibilities

  • Own end-to-end perception pipelines deployed to operational systems
  • Deliver high-reliability solutions aligned with mission requirements
  • Design, train, and optimize perception models for maritime environments
  • Develop object detection, classification, and tracking systems
  • Work with large vision models (ViT, CLIP, SAM, or similar)
  • Fuse multi-modal sensor data (camera, radar, lidar) into unified perception outputs
  • Collaborate with hardware teams to select and integrate sensors
  • Develop and validate low-level sensor drivers for high-integrity data pipelines
  • Deploy models using NVIDIA Triton Inference Server
  • Optimize inference for latency and throughput on edge hardware
  • Apply quantization, pruning, and other optimization techniques
  • Integrate perception outputs into autonomy stacks alongside mission software and platform teams
  • Support troubleshooting of perception performance in operational environments
  • Define metrics and evaluation frameworks to monitor deployed system health
  • Investigate perception failures and drive resolution across teams
  • Partner with operators and customers to translate field challenges into engineering solutions

Benefits

  • 100% employer-paid Health, Dental, and Vision insurance for you and your family
  • Life Insurance (Employer Paid)
  • Ability to participate in the company’s 401(k) program (with employer matching)
  • Unlimited PTO policy with an enforced 2-week minimum
  • Equity Package
  • Work / Home Office Stipend
  • Global Entry
  • 16 Week Paid Parental Leave
  • Monthly Health and Wellness Stipend