Robot Perception Engineer

Rhoda AI
Palo Alto, CA

About The Position

At Rhoda AI, we're building the full-stack foundation for the next generation of humanoid robots — from high-performance, software-defined hardware to the foundation models and video world models that control it. Our robots are designed to be generalists, capable of operating in complex, real-world environments and handling scenarios unseen in training. We work at the intersection of large-scale learning, robotics, and systems, with a research team that includes researchers from Stanford, Berkeley, Harvard, and beyond. We're not building a feature; we're building a new computing platform for physical work — and with over $400M raised, we're investing aggressively in the R&D, hardware development, and manufacturing scale-up to make that a reality.

We're looking for a Robot Perception Engineer to develop and maintain the perception systems that give our humanoid robots a real-time understanding of the world around them. You'll own the software that transforms raw sensor data into reliable, actionable perception — from sensor integration and calibration to safety-certified object detection and streaming data to our foundation models and teleoperation stack.

Requirements

  • 4+ years of experience in robotics perception, computer vision, or a closely related field
  • Strong software engineering fundamentals in C++, Python, or Rust
  • Hands-on experience with sensor integration and calibration (cameras, LiDAR, IMUs, depth sensors)
  • Experience building real-time perception pipelines on embedded or edge compute platforms
  • Familiarity with streaming and networking systems for low-latency sensor data transport
  • Experience with ROS/ROS2 or similar robotics middleware in production or research contexts
  • Ability to debug across the full stack — from driver-level sensor issues to perception behavior on live hardware

Nice To Haves

  • Experience with safety-certified or safety-critical perception systems (e.g., ISO 26262, IEC 61508, or similar)
  • Background in multi-sensor fusion and calibration for robust perception under real-world conditions
  • Familiarity with teleoperation systems and the latency and reliability constraints they impose
  • Experience deploying and optimizing perception models on edge or onboard compute (quantization, TensorRT, etc.)
  • Familiarity with humanoid or legged robot platforms and the unique perception challenges they present
  • Prior work on early-stage hardware programs (prototype or pre-production robots)

Responsibilities

  • Develop and maintain real-time perception pipelines — integrating and calibrating sensors (cameras, LiDAR, IMUs, depth sensors) with our humanoid robot platforms
  • Build and optimize object detection, tracking, and scene understanding systems for safe operation in complex, unstructured environments
  • Own the streaming infrastructure that delivers sensor data reliably to foundation models, teleoperation systems, and onboard control stacks
  • Work closely with the AI/ML team to bridge perception outputs and learned policy inputs — ensuring low-latency, high-reliability data delivery
  • Implement and validate safety-certified perception components, including object detection and collision avoidance, to meet real-world deployment requirements
  • Support bring-up and field testing of new camera hardware and sensor configurations
  • Contribute to system reliability, fault detection, and recovery logic for robust real-world perception
