Matter is building the AI-native autonomy stack for physical manufacturing in the United States. We operate as a contract manufacturer, deploying software and autonomy in our own factories, which gives us something most AI companies lack: a live production environment as a training ground. Our long-term vision is to become the infrastructure layer for American manufacturing, the way AWS became the infrastructure layer for software.

We are hiring a Research Scientist to lead the development and deployment of Vision-Language-Action (VLA) models for robotic manipulation in live manufacturing work cells. This is not a lab role: you will train models, close the Sim2Real loop, and deploy them on physical robots running production programs. Matter’s Sim2Real pipeline spans NVIDIA Isaac Sim, physics-accurate virtual builds of our modular assembly equipment, and data collected entirely from real factory operations. You will sit at the center of this flywheel, improving models with every production run.

WHY MATTER

Most VLA research is validated in a lab or on a tabletop. At Matter, your models run on a production factory floor, handling real parts for real customers. The feedback loop is immediate and grounded, and the training data is yours because the factory is yours. No one else in this space combines these advantages at our stage.
Job Type
Full-time
Career Level
Mid Level
Education Level
Ph.D. or professional degree