Perception Engineer

Aurelius Systems
San Francisco, CA
Hybrid

About The Position

We are seeking a skilled Perception Engineer to join our software team and work on our end‑to‑end perception and sensor‑fusion stack. You will develop, train, and deploy vision‑ and sensor‑based models; manage data pipelines; and enable real‑time detection and tracking for our laser‑based defense system. This role bridges data science, software engineering, and robotics to deliver reliable, high‑throughput perception on edge hardware.

Requirements

  • 2–6+ years in computer vision, sensor fusion, or robotics perception roles
  • Strong C++ skills and deep ML engineering experience
  • Hands‑on with ML frameworks (TensorFlow, PyTorch) and real‑time inference engines (TensorRT, OpenVINO)
  • Experience with computer vision, detection, and tracking in real-time, real-world conditions
  • Familiarity with ROS2, Docker, and CI/CD for ML pipelines
  • Experience with multi‑sensor calibration and data synchronization
  • BS, MS, or PhD in CS, EE, Robotics, or equivalent
  • "U.S. Person" status as defined by U.S. law (U.S. citizens, legal permanent residents, or certain protected classes of asylees and refugees)

Nice To Haves

  • Edge‑AI optimization (quantization, pruning)
  • Experience with FPGA or embedded GPU platforms
  • Background in defense or safety‑critical systems
  • Familiarity with cybersecurity guidelines and secure coding practices

Responsibilities

  • Design, train, validate, and fine-tune machine‑learning and deep‑learning models (e.g., YOLO, RT-DETR, CNNs) for object detection, classification, and segmentation.
  • Integrate and fuse data from multi‑modal sensors (RGB, thermal, LiDAR/ToF, IMU, encoders) to produce robust, real‑time Regions of Interest (ROIs).
  • Research, implement, and, as needed, develop high‑ and low‑level image-processing techniques, such as deconvolution, low‑SNR detection, and motion isolation.
  • Collaborate with hardware teams to integrate and troubleshoot sensors (global‑shutter and rolling‑shutter cameras, thermal imagers, LiDAR/ToF modules, IMUs) over GigE Vision, USB3 Vision, CAN, SPI, and I²C protocols; develop and debug embedded firmware in C/C++ (or Rust) for microcontrollers (STM32, NXP, TI) and FPGAs using VHDL/Verilog within RTOS environments (FreeRTOS, Zephyr).
  • Build scalable data ingestion, labeling, augmentation, and storage pipelines (simulated and field data), ensuring labeling accuracy across 100k+ frames.
  • Optimize inference frameworks for edge deployment (GPU/FPGA), achieving ≥500 Hz end‑to‑end throughput.
  • Develop dashboards and telemetry for drift analysis, hardware health monitoring, performance metrics, and automated retraining triggers.
  • Author clear technical docs; mentor junior engineers on best practices in vision, sensor‑fusion, and embedded firmware engineering.
  • Determine development needs by analyzing the technical and physical constraints of our goals.

Benefits

  • Competitive salary + equity
  • UnitedHealthcare medical, dental, and vision coverage
  • 18 days of flexible PTO + 5 sick days
  • Travel to field test events and range days
  • Covered daily lunches and office snacks + drinks
  • E-bike / scooter stipend (up to $500)