Perception Engineer

Aurelius Systems
San Francisco, CA
Onsite

About The Position

Aurelius Systems is a VC-backed defense tech startup building autonomous, edge-deployed directed energy systems for counter-UAS. We build laser weapons to shoot down drones. We are a small team of engineers, former US military operators, and subject matter experts scaling America's directed energy dominance. We are seeking a skilled Perception Engineer to join our software team and work on our end-to-end perception and sensor-fusion stack. You will develop, train, and deploy vision and sensor-based models; manage data pipelines; and enable real-time detection and tracking for our laser-based defense system. This role bridges data science, software engineering, and robotics to deliver reliable, high-throughput perception performance on edge hardware.

Requirements

  • 2–6+ years in computer vision, sensor fusion, or robotics perception roles
  • Strong C++ skills with deep ML engineering experience
  • Hands-on with ML frameworks (TensorFlow, PyTorch) and real-time inference engines (TensorRT, OpenVINO)
  • Computer vision, tracking, and detection in real-time, real-world conditions
  • Familiarity with ROS2, Docker, and CI/CD for ML pipelines
  • Experience with multi-sensor calibration and data synchronization
  • BS, MS, or PhD in CS, EE, Robotics, or equivalent. Track record matters more than degree.
  • Extreme bias for action. You ship working perception on real hardware, not slideware
  • You debug from first principles, not intuition alone
  • Comfortable with ambiguity and fast iteration in a startup environment
  • Clear communicator across software, hardware, and operator-facing surfaces
  • Self-directed. You identify what needs to happen next and do it without being told
  • This role requires access to export-controlled information or items that require "U.S. Person" status.

Nice To Haves

  • Edge-AI optimization (quantization, pruning)
  • Experience with FPGA or embedded GPU platforms
  • Background in defense or safety-critical systems
  • Familiarity with cybersecurity guidelines and secure coding practices

Responsibilities

  • Design, train, validate and fine-tune machine-learning and deep-learning models (e.g., YOLO, RT-DETR, CNNs) for object detection, classification, and segmentation.
  • Integrate and fuse data from multi-modal sensors (RGB, thermal, LiDAR/ToF, IMU, encoders) to produce robust, real-time Regions of Interest (ROIs).
  • Research, implement, and, as needed, develop high- and low-level image-processing techniques, such as deconvolution, low-SNR detection, and motion isolation.
  • Collaborate with hardware teams to integrate and troubleshoot sensors (global-shutter and rolling-shutter cameras, thermal imagers, LiDAR/ToF modules, IMUs) over GigE Vision, USB3 Vision, CAN, SPI, and I²C protocols; develop and debug embedded firmware in C/C++ (or Rust) for microcontrollers (STM32, NXP, TI) and FPGAs using VHDL/Verilog within RTOS environments (FreeRTOS, Zephyr).
  • Build scalable data ingestion, labeling, augmentation, and storage pipelines (simulated and field data), ensuring labeling accuracy across 100k+ frames.
  • Optimize inference frameworks for edge deployment (GPU/FPGA), achieving ≥500 Hz end-to-end throughput.
  • Develop dashboards and telemetry for drift analysis, hardware health monitoring, performance metrics, and automated retraining triggers.
  • Author clear technical docs; mentor junior engineers on best practices in vision, sensor-fusion, and embedded firmware engineering.
  • Determine development needs by directly analyzing the technical and physical limitations of our goals.

Benefits

  • Competitive salary + equity
  • UnitedHealthcare medical, dental, and vision coverage
  • 18 days of flexible PTO + 5 sick days
  • Travel to field test events and range days
  • Covered daily lunches and office snacks + drinks
  • E-bike / scooter stipend (up to $500)
  • Direct access to leadership and real ownership over your work