About The Position

Matter is building the AI-native autonomy stack for physical manufacturing. We own and operate factories as controlled learning environments, collecting data from every stage of production to train and validate AI systems that run on real hardware.

We are hiring an AI Research Engineer to lead model training and adaptation for MatterOS, spanning pre-training data, domain-specific fine-tuning, and post-training alignment for manufacturing tasks. You will work across the full model development lifecycle: curating the manufacturing corpus and designing synthetic data pipelines, fine-tuning general-purpose VLMs and VLAs for factory-specific tasks, and building the evaluation frameworks that define what "good" looks like in a production context.

Requirements

  • Strong background in machine learning research with practical experience training large models (VLMs, LLMs, or robot policies)
  • Hands-on experience with pre-training data pipelines, synthetic data generation, or multi-modal training at scale
  • Familiarity with fine-tuning and alignment techniques: LoRA, RLHF, DPO, instruction tuning
  • Proficiency in PyTorch; experience with NVIDIA Isaac Sim, MuJoCo, or other physics-based simulation engines
  • Ability to design evaluation frameworks that go beyond standard benchmarks to measure task-specific real-world performance
  • Systems thinking: you understand that training data quality and domain alignment matter more than model scale in constrained physical applications

Nice To Haves

  • PhD or graduate degree in machine learning, robotics, or a related field
  • Experience with embodied AI, robot learning, or sim-to-real transfer
  • Background in industrial or manufacturing domains
  • Experience with distributed training on GPU clusters (SLURM, DeepSpeed, FSDP)

Responsibilities

  • Design and manage synthetic data generation pipelines using physics simulation (NVIDIA Isaac Sim, MuJoCo): domain randomization, procedural scene generation, and sim-to-real transfer validation
  • Curate and annotate the manufacturing training corpus: CAD data, process plans, equipment programs, assembly sequences, sensor streams, and quality inspection records
  • Fine-tune and adapt pre-trained VLMs, VLAs, and LLMs to manufacturing-specific tasks: DFM co-pilot, fault detection, assembly instruction generation, and robotic policy learning
  • Implement post-training alignment methods (RLHF, DPO, or preference optimization) to align model behavior with manufacturing task success criteria
  • Build domain-specific evaluation benchmarks: manipulation accuracy, instruction-following fidelity, process compliance, and failure mode detection
  • Collaborate with the VLA Research Scientist and AI Infrastructure Engineer to close the training loop between simulation, real factory data, and deployed models