Robotics Manipulation R&D Intern

CreateMe Technologies, Inc.
Newark, CA
Hybrid

About The Position

CreateMe is an AI robotics company pioneering automated manufacturing for soft materials, starting with apparel. Built on the belief that the Future of Fashion is Bonded™, we've developed a unified platform that combines advanced robotics, proprietary adhesive bonding, and Physical AI to deliver a new standard of precision, consistency, and speed. By replacing stitch-based construction with digitally applied adhesives and automated material handling, our platform enables localized, on-demand production that reduces waste, shortens supply chains, and improves recyclability by design. Core technologies include Pixel™ micro-adhesive bonding, the MeRA™ robotic assembly system, and Thermo(re)set™ reversible adhesive science. With more than 95 patents across robotics, adhesives, and Physical AI, we're defining the infrastructure for bonded manufacturing globally.

As a Robotics Manipulation R&D Intern, you'll work shoulder-to-shoulder with our robotics team on one of the hardest open problems in Physical AI: learning-based manipulation of highly deformable fabrics. This is a hands-on research role centered on vision-language-action (VLA) models and modern robot learning pipelines, spanning data collection, training, and real-robot rollouts. You'll work on meaningful slices of our learning stack: validating data collection pipelines, training and tuning policies, running them on real robots, and helping stand up our simulation pipeline for fabric manipulation.

Requirements

  • Currently pursuing a Master's or PhD in Robotics, Computer Science, Machine Learning, EE/ME, or a related technical field.
  • Strong foundation in modern robot learning — imitation learning and/or VLA models (π0 / π0.5, OpenVLA, RT-2, or similar). You should be comfortable reading recent papers in this space and reasoning about architecture and training tradeoffs.
  • Hands-on experience training and fine-tuning deep learning models in PyTorch, including familiarity with practical issues: data loaders, mixed precision, distributed training, debugging training instability, evaluation harnesses.
  • Strong Python skills. Comfortable working in a real codebase, not just notebooks.
  • Experience working with real-world robot data or deploying learned models on physical robots (course projects, lab work, or prior internships all count).
  • Familiarity with ROS/ROS2 and standard robotics tooling.
  • Solid fundamentals in linear algebra, probability, and basic kinematics/dynamics.
  • Candidates must have full authorization to work in the United States.

Nice To Haves

  • Direct research experience with deformable object manipulation (fabrics, textiles, cables, soft matter) or with VLA fine-tuning on real-robot datasets.
  • Experience with Isaac Sim / Isaac Lab, MuJoCo, or other modern sim environments — especially with deformable / cloth physics.
  • Experience with imitation learning frameworks (LeRobot, Diffusion Policy, ACT) and teleop stacks.
  • Experience with egocentric / human demonstration data and models that consume it (e.g., video encoders, pose-conditioned policies, human-to-robot retargeting).
  • Experience building task progress / reward / success-detection models from video.
  • Familiarity with sim-to-real techniques (domain randomization, system ID, co-training).
  • Publications or preprints at CoRL, ICRA, RSS, NeurIPS, CVPR, or similar venues.
  • A product-oriented mindset: willingness to cut scope, instrument experiments, and ship something that actually runs on real hardware.

Responsibilities

  • Validate end-to-end pipelines that take human-collected fabric manipulation data through to trained policies.
  • Train VLA / imitation learning models (e.g., π0 / π0.5, ACT, diffusion policies) and roll them out on real robot hardware for fabric manipulation tasks (e.g., pick-and-place of fabric).
  • Iterate on data quality, model architecture, and training recipes to push success rates upward.
  • Train and evaluate models for task progress estimation from human manipulation data.
  • Explore architectures that can consume egocentric video and pose data and output progress / phase signals usable by downstream policies or evaluation harnesses.
  • Help stand up and validate a simulation pipeline for fabric manipulation — importing assets, tuning cloth physics to usable fidelity, setting up bimanual robot scenes, and training policies via imitation learning with exploratory RL work on top.
  • Contribute to sim-to-real transfer experiments.
  • Run disciplined experiments with proper tracking, dataset versioning, and clear ablations.
  • Communicate findings crisply so the team can make decisions quickly.
  • Partner with mechanical, hardware, and data teammates to unblock data issues, shape end-effector iterations, and translate findings into the team's broader roadmap.

Benefits

  • Pay rate is hourly
  • Eligible for overtime
  • Some work-from-home flexibility