About The Position

Mecka AI is building the data infrastructure layer for robotics and embodied AI. We design and operate global systems for data capture, data labeling, and hardware-enabled workflows used by leading AI labs and robotics companies. Our datasets power models that learn from the physical world — enabling robots to understand, reason, and act in real environments. As robotics systems evolve, combining real-world data with simulation-driven learning is critical to unlocking robust, generalizable behavior.

We’re hiring a Computer Vision Researcher with a focus on simulation-driven learning for robotics. This role sits at the intersection of vision, simulation, and control. You’ll use real-world data to inform simulated environments, and apply techniques like reinforcement learning and contact modeling to improve motion — particularly for hands, manipulation, and lower-body movement. You’ll work closely with data, engineering, and customer teams to bridge the gap from captured data → simulation → deployable behavior.

Requirements

  • MSc or PhD in robotics, computer vision, machine learning, or a related field
  • Strong experience with simulation environments (e.g., Isaac Gym, MuJoCo, or similar)
  • Experience applying reinforcement learning to control or robotics problems
  • Strong programming skills in Python (C++ is a plus)
  • Solid understanding of vision, state estimation, and/or perception systems
  • Deeply curious about how robots learn and move
  • Comfortable working across research and engineering boundaries
  • Able to move from idea → experiment → iteration quickly
  • Excited by messy, real-world problems — not just clean benchmarks
  • Motivated to build systems that actually get used

Nice To Haves

  • Experience working on dexterous manipulation, hands, or locomotion
  • Experience modeling contact-rich interactions in simulation
  • Experience working on sim-to-real transfer
  • Familiarity with vision-language-action (VLA) or multimodal systems
  • Experience working with large-scale real-world datasets

Responsibilities

  • Build and iterate on simulation environments for robotic learning
  • Use real-world datasets to inform and improve simulated environments
  • Apply reinforcement learning (RL) to learn contact-rich behaviors and motion policies
  • Focus on improving dexterous manipulation and lower-body motion
  • Develop pipelines that translate captured video and sensor data into usable simulation inputs
  • Work on perception systems that support simulation fidelity (pose, state estimation, object understanding)
  • Align real-world data distributions with simulation environments
  • Model physical interactions (contact, force, constraints) in simulation
  • Improve smoothness, stability, and realism of learned motion
  • Help bridge sim-to-real gaps for manipulation and locomotion
  • Design experiments to evaluate model performance in simulation and real-world settings
  • Analyze failure modes and iterate on data, models, and environments
  • Work with customers to validate whether data + simulation outputs meet their needs
  • Work closely with data teams (capture + labeling pipelines), engineering teams (infrastructure + deployment), and external customers (robotics / AI labs)
  • Translate research ideas into practical, usable systems

Benefits

  • Work on core problems in simulation-driven robotics learning
  • Help define how real-world data and simulation interact at scale
  • Partner with leading AI labs and robotics companies
  • High ownership and direct impact on product and research direction
  • Opportunity to push forward how robots learn manipulation and movement