Staff Perception Engineer (US Based)

Andromeda Robotics, San Francisco, CA
Onsite

About The Position

This role balances architectural leadership with strategic, hands-on work. You will lead the design and implementation of Abi's perception stack, translating raw sensor data into the semantic understanding that supports autonomous navigation and conversational AI. This dual challenge demands a systems thinker who can architect scalable perception pipelines and also implement foundational components to prove the architecture's effectiveness. You will bridge multiple domains, including sensor fusion, computer vision, audio processing, and ML deployment, while optimising for real-time performance on our embedded NVIDIA Jetson AGX Orin platform.

As the founding perception hire, you will collaborate closely with the autonomy, conversational AI, gestures, controls, audio engineering, and ML teams. You will also work with product owners to translate user needs into technical requirements, and with the broader engineering team to ensure your perception outputs enable downstream systems to flourish.

Requirements

  • Bachelor’s or Master’s degree in Computer Science, Robotics, Electrical Engineering, or related field
  • 5-7+ years of experience building and shipping perception systems for robotics or autonomous vehicles
  • Deep expertise in computer vision (object detection, tracking, 3D reconstruction, camera calibration)
  • Strong fundamentals in sensor fusion (Kalman filtering, probabilistic estimation, multi-modal integration)
  • Real-time embedded systems experience (CUDA, TensorRT, ROS2)
  • Proven architectural skills with hands-on coding ability in C++ and Python
  • Pragmatic mindset balancing off-the-shelf and custom solutions

Nice To Haves

  • Audio processing background, including beamforming and source localisation
  • Experience with face recognition systems and liveness detection
  • Expertise with depth sensors such as stereo vision, structured light, or LiDAR
  • Skills in human pose estimation and social gaze prediction
  • Experience optimising ML models for edge deployment
  • A PhD in a relevant field (computer vision, robotics, or signal processing)

Responsibilities

  • Architect the Perception Stack: Design and own the full system from raw sensors through semantic understanding, including interface contracts, compute and latency budgeting, and robust architectural decisions.
  • Lead Cross-Functional Design: Facilitate design workshops with autonomy, conversational AI, audio engineering, ML, and hardware teams to align on resourcing and interface definitions.
  • Implement Strategically: Develop core perception capabilities like face recognition, speaker diarisation, person detection and tracking, and sensor fusion to validate and instantiate the architecture.
  • Own Production Systems: Build with production-readiness in mind, including graceful degradation, monitoring, debugging tools, and deployment pipelines.
  • Make Technical Decisions: Drive build vs. buy, algorithm choices, and embedded deployment optimisations, focusing on performance and resource constraints.
  • Scale the Team: Support recruitment and mentoring as the perception team grows.