About The Position

As Lead Research Scientist, you will define and drive Centific’s research agenda in Vision AI, multimodal foundation models, simulation-first learning, agentic AI, and embodied intelligence. You will lead a small team of researchers, engineers, and interns while contributing directly to model design, large-scale training, benchmarking, and external scientific visibility. This role is for someone who has gone beyond applying existing models and has materially advanced architectures, training methods, datasets, or evaluation frameworks in AI, robotics, vision, autonomous driving, or multimodal learning.

Requirements

  • Ph.D. in Computer Science, Robotics, Machine Learning, Computer Vision, Autonomous Systems, or a related field.
  • Strong publication record in top venues such as CVPR, ICCV, ECCV, NeurIPS, ICLR, ICML, CoRL, RSS, or leading autonomous driving/robotics venues.
  • 5+ years of research experience in academia, industry, or advanced R&D environments.
  • Demonstrated experience building or advancing large-scale foundation models, novel architectures, or training methods in multimodal AI, vision, robotics, autonomous driving, embodied AI, world models, or simulation-based learning.
  • Deep expertise in PyTorch and/or JAX, GPU training, distributed experimentation, and large-scale model development.
  • Proven ability to lead ambitious technical programs and mentor junior researchers.

Nice To Haves

  • Publications or patents in multimodal foundation models, dexterous robotics, autonomous driving, spatial intelligence, simulation-based learning, manipulation, or embodied AI.
  • Strong experience in Vision AI, including perception, tracking, grounding, 3D scene understanding, video understanding, sensor fusion, or multimodal reasoning.
  • Familiarity with agentic AI systems, tool-using agents, planning frameworks, and memory-based architectures; experience with agentic memory, knowledge graphs, or long-horizon reasoning systems is a plus.
  • Experience with Isaac Sim, MuJoCo, OpenUSD/Omniverse, Open3D, PyTorch3D, NeRF/3DGS, or related simulation and 3D stacks.
  • Familiarity with imitation learning, reinforcement learning, planning, MPC, control, teleoperation data pipelines, or policy learning for robotics and autonomous systems.
  • Experience with Ray, Kubernetes, Triton, TensorRT, Docker, W&B, or large-scale training and deployment infrastructure.
  • Background in trustworthy AI, robotics safety, evaluation, or explainability for autonomous systems.

Responsibilities

  • Lead high-impact research in multimodal foundation models, world models, embodied AI, vision-language-action systems, and agentic AI.
  • Develop new approaches for perception, temporal reasoning, spatial intelligence, affordance understanding, autonomous decision-making, and sim2real transfer.
  • Advance challenging robotics capabilities including dexterous manipulation, contact-rich interaction, bimanual coordination, long-horizon task execution, navigation in dynamic environments, and robust action under uncertainty.
  • Contribute to large-scale model building, including multimodal pretraining, distributed training, fine-tuning, distillation, and evaluation of models for vision, robotics, and autonomous systems.
  • Help shape research relevant to autonomous driving and mobile autonomy, including scene understanding, multimodal sensor reasoning, planning-aware perception, and edge-case robustness.
  • Guide integration of research with simulation and digital twin platforms such as Isaac Sim, Isaac Lab, MuJoCo, Omniverse, or related environments.
  • Establish rigorous benchmarks and reproducible evaluation frameworks for robustness, safety, generalization, manipulation success, policy performance, and real-world deployment readiness.
  • Mentor Ph.D. interns and engineers, and help build a strong research culture grounded in rigor, speed, originality, and scientific excellence.