Founding Robot Learning Research Engineer

Avomind
$100,000 - $180,000

About The Position

Our client is a vertically integrated robotics company building dexterous factory automation, backed by Jeff Dean, Pieter Abbeel, and senior leaders from OpenAI. This mission demands tight integration across robot learning, hardware, factory operations, and a consumer-facing brand. The founding team includes the creator of the DROID dataset, a 15-year veteran of building intelligent factory automation from scratch, an operator with 40 years of experience building factories from the ground up, and an SVP from a $10B+ global consumer brand.

Our client's thesis: the biggest bottleneck in robot learning is data, and the best way to solve it is to generate it as a byproduct of revenue. They operate their own factory in Vietnam, where workers use handheld devices shaped like the robots' hands to generate morphology-matched demonstration data at scale through real production work. That data trains the robots that will automate the factory, collapsing experimentation, data collection, model training, and deployment into one self-sustaining in-house loop. Their systems run in a factory, not a lab, under real constraints: throughput, uptime, reliability, and fine-grained manipulation of actual materials.

Location: US and Vietnam. Some hires will be US-based with targeted Vietnam trips; others will be based there full-time. We trust you to figure out what your work needs and get it done.

US Office: The city is still being decided, and founding-team input is part of that process.

Stage: Early-stage robot learning research and system development.

Reports to: Co-founder / CTO.

Core mandate: Own the research, development, and scaling of robot learning systems. Your job is to build the learning pipeline that turns the factory data engine into dexterous robots. This is a founding research engineering role: you'll need to think rigorously about hard problems, figure out what the right systems are, and then build them.
This role spans four areas, all live simultaneously from day one:

  • Research and problem decomposition: Break hard manipulation challenges into testable hypotheses and resolve them through rapid experiments.
  • Core model development: End-to-end ownership of architectures, training pipelines, and evaluation systems for dexterous automation.
  • Data collection infrastructure: Build the pipelines that make scalable, continuous data collection possible at a working factory.
  • Robot and sensor systems: Develop software for robot, camera, and sensor systems that keeps data flowing cleanly.

You'll build these systems yourself with full ownership in a lean founding environment, bringing in help where you need it as things scale. As the infrastructure stabilizes, the work shifts toward what it's ultimately about: building better models, advancing learning paradigms, and pushing the frontier of dexterous automation. If you want deep ownership of a hard problem from day one, this is the role.

Requirements

  • PhD or equivalent depth in robot learning through research or hands-on systems building.
  • Strong track record training large-scale VLAs, VLMs, diffusion models, or world models from pretraining through finetuning.
  • A track record of shipping a working policy on real hardware: data collection, training, and real-world execution.
  • Deep hands-on experience with real robot hardware: bring-up, ROS/ROS2, joint control, inverse kinematics, and building the stack from scratch, not just using it.
  • Experience building inside and expanding upon robot simulators.
  • Aggressive use of AI-assisted development to maximize output across a wide stack.
  • Low ego, evidence-driven, and comfortable with ambiguity and incomplete infrastructure.
  • Thrives in small, tight-knit teams: collaborative, friendly, and easy to work with.

Responsibilities

  • Research and problem decomposition: Hypothesis-driven development across simulation, real robot hardware, and ML systems. Break hard problems into testable hypotheses and resolve them through rapid experiments. Isolate variables in a complex, simultaneously-live stack where hardware, software, training data, and models interact.
  • Core model development: VLAs, VLMs, diffusion models, world models, and reward models. Supervised, unsupervised, and RL-based paradigms. PyTorch/JAX with distributed training and inference optimization. Large-scale pretraining, post-training, and finetuning of foundation models. Design evaluation benchmarks grounded in real factory task performance. Maximize research progress per dollar through efficient training, lean inference, and smart compute allocation.
  • Data collection infrastructure: Real-time multi-modal capture (vision, force, proprioception) with tight time synchronization. High-throughput disk writes, standardized dataset formats at scale, cloud transfer pipelines, and full versioning for reproducible learning.
  • Robot and sensor systems: ROS/ROS2, robot bring-up and commissioning, robot control stack, teleoperation software, and hardware interfacing across cameras, force sensors, SLAM devices, and positional encoders. Keeping heterogeneous hardware running continuously under industrial uptime constraints is the core challenge.
  • Hardware collaboration: Work closely with the Founding Hardware team to ensure learning systems and hardware co-evolve.

Benefits

  • Short Vietnam Deployments: Fully covered (housing, flights, visa).
  • Full Vietnam Relocation: Visa support, relocation flights, and flexibility for family needs.

What This Job Offers

Job Type

Full-time

Career Level

Entry Level

Education Level

Ph.D. or professional degree

Number of Employees

1-10 employees