Machine Learning Engineer (Robotics / Embodied AI)

Avatar Robotics · San Francisco, CA
Onsite

About The Position

We're hiring a Machine Learning Engineer to lead our progression from teleoperation to autonomy. You'll use cutting-edge Vision-Language-Action (VLA) models and imitation learning to make our teleoperated robots increasingly autonomous. This role spans the full ML lifecycle: data capture and organization, model training and evaluation, deployment to edge devices, and continuous improvement based on real-world performance. You'll work at the intersection of robotics, computer vision, and foundation models, turning thousands of hours of human demonstrations into autonomous capabilities.

Avatar Robotics is building flexible robot fleets to revolutionize industrial work across the country. We're on a mission to make every tedious and dangerous warehouse and factory job virtual, safe, and semi-autonomous. With proven AI approaches and long-distance teleoperation, you'll join a team deploying a physical work solution that's scalable now, not later. We envision a world where millions of machines make our goods and consumables more affordable and accessible than ever, while critical workers operate these robot fleets from the comfort of their homes. At Avatar Robotics, you'll help create the workforce of the future in one of the largest markets: a $1T+ manual labor market in the US alone. We're a small but powerful team in the early innings of deploying thousands of units into facilities worldwide.

Requirements

  • 3+ years of hands-on ML experience, preferably in robotics, computer vision, or embodied AI.
  • Strong foundation in deep learning frameworks (PyTorch, JAX, TensorFlow) and training large models.
  • Experience with imitation learning, behavior cloning, or reinforcement learning in physical systems.
  • Proficiency with computer vision techniques (object detection, segmentation, point cloud processing).
  • Understanding of robotics fundamentals (kinematics, control theory, sensor fusion).
  • Strong Python skills and experience with ML libraries.

Nice To Haves

  • Experience with Vision-Language-Action models, foundation models, or deploying models to edge devices.

Responsibilities

  • Design and implement data collection pipelines for synchronized sensor streams and task annotations from teleoperated robots.
  • Build data processing systems for cleaning, labeling, and organizing multi-modal data (RGB-D, LiDAR, proprioceptive feedback).
  • Develop and train Vision -Language -Action models, imitation learning policies, and behavior cloning systems.
  • Deploy trained models to edge compute devices with real-time inference constraints.
  • Collaborate with Robotics and Teleop teams to define autonomous capabilities and integrate models into the control stack.