Robotics Research Engineer

Physical Intelligence · San Francisco, CA

About The Position

Physical Intelligence is bringing general-purpose AI into the physical world. We are a team of engineers, scientists, roboticists, and company builders developing foundation models and learning algorithms to power the robots of today and the physically actuated devices of the future.

In this role, you will work at the intersection of hardware, software, and large-scale model training to develop effective autonomous robot policies. You’ll have the opportunity to work across the full stack behind state-of-the-art vision-language-action models: from designing robotic systems and data collection pipelines that produce high-quality training data, to developing learning algorithms that turn that data into capable, reliable policies. You’ll help shape the datasets, infrastructure, and research directions that define how these systems are built.

Requirements

  • Experience training machine learning models for robot control, ideally with policies that have been deployed and validated on real robots.
  • Hands-on experience with the full robotics stack, including controls, robot runtime software, perception, state estimation, SLAM, and basic hardware bring-up and debugging.
  • Strong software engineering and infrastructure skills, including building data pipelines, training systems, evaluation frameworks, and tools for rapid iteration.
  • The ability to move seamlessly between research and implementation: designing experiments, training models, debugging failures, and improving system performance end to end.
  • Comfort working hands-on with robotic hardware.

Responsibilities

  • Build autonomous robot policies that operate robustly in the real world.
  • Work across the full stack of robot learning, from hardware and data collection to training, evaluation, and deployment.
  • Create new data collection methods and pipelines to generate the high-quality data that powers state-of-the-art robot models.
  • Develop and refine vision-language-action models and learning algorithms for general-purpose manipulation and control.
  • Curate and shape large-scale datasets, task distributions, and training recipes for robot pretraining and adaptation.
  • Run fast, rigorous experiments to identify bottlenecks, uncover failure modes, and improve policy performance.
  • Collaborate closely with researchers and engineers across robotics, infrastructure, and ML systems.
  • Help define the technical roadmap for general-purpose physical intelligence.