Robotics Research Internship - Locomotion & Planning (Summer 2026)

Field AI | Irvine, CA
$45 - $60 per hour | Onsite

About The Position

Field AI is building the future of autonomy, from rugged terrain to real-world deployment. We're on a mission to develop intelligent, adaptable robotic systems that operate beyond simulation and thrive in unpredictable environments.

We are offering a Summer 2026 internship focused on learning-based locomotion and planning for PhD students interested in advancing the capabilities of autonomous legged robots. As a research intern, you will work at the intersection of reinforcement learning, locomotion control, and learned planning, developing integrated systems that enable robots to move and navigate intelligently through complex, unstructured environments.

You will collaborate closely with Field AI research scientists and engineers to design experiments, develop locomotion and planning systems, and validate ideas in simulation and on real hardware. This internship emphasizes building tightly integrated learning-based systems that connect low-level locomotion with high-level planning, translating research into practical, deployable capabilities for real-world robotics.

Requirements

  • Current PhD student in Robotics, Computer Science, Mechanical Engineering, AI/ML, or a closely related field.
  • Research experience in reinforcement learning for continuous control, locomotion, or learning-based planning.
  • Strong foundation in contact dynamics, control theory, and kinematics.
  • Proficiency in Python and/or C++, with experience using robotics or ML tooling.
  • Familiarity with physics-based simulators such as Isaac Gym, Isaac Lab, MuJoCo, or PyBullet.
  • Experience designing experiments and evaluating results on robotic systems (simulation or hardware).
  • Curiosity, initiative, and a strong interest in building autonomous systems that operate in the real world.

Nice To Haves

  • Hands-on experience with legged robot platforms (quadrupeds, wheeled-quadrupeds, bipedal systems, or exoskeletons).
  • Experience with sim-to-real transfer for locomotion or planning policies.
  • Background in learning-based planning, motion planning, or terrain-adaptive control.
  • Familiarity with ROS or ROS2.
  • Publications, preprints, or open-source contributions in locomotion, RL, planning, or control.
  • Experience deploying neural network controllers on resource-constrained or real-time robotic platforms.
  • Interest in bridging cutting-edge research with practical, field-ready robotic systems.

Responsibilities

  • Advance RL-Based Locomotion and Learned Planning Research: Design, implement, and evaluate reinforcement learning pipelines that tightly integrate locomotion control with learning-based planning. Explore how learned planners can inform and adapt locomotion behaviors across varied terrain and dynamic conditions. Contribute to research projects from early-stage ideas through simulation experiments and on-robot validation.
  • Bridge Locomotion and Planning Across the Sim-to-Real Gap: Develop and refine sim-to-real transfer strategies, including domain randomization, system identification, and adaptive methods, for integrated locomotion-planning systems. Build and leverage GPU-accelerated simulation environments (Isaac Gym, Isaac Lab, MuJoCo) for scalable training and evaluation. Test and iterate on policies using real legged robot platforms in unstructured environments.
  • Build Systems That Connect Research to Deployment: Translate research concepts into working robotic systems tested on real hardware. Develop experimental setups and tooling to support data collection, evaluation, and reproducibility. Help ensure locomotion and planning systems are robust, field-relevant, and ready for iterative improvement.
  • Collaborate Across the Full Robotics Stack: Work closely with systems engineers, perception experts, and embedded teams to close the loop between learning and execution. Incorporate real-world telemetry and field data to refine models and improve generalization. Engage with researchers and engineers across the team to align experiments with broader autonomy goals.
  • Rapidly Iterate and Learn: Prototype quickly, run experiments in simulation and on hardware, and analyze results rigorously. Balance exploratory research with concrete deliverables over the course of the internship. Debug system-level issues spanning simulation, software, hardware, and learning.