Research Engineer / Scientist (SLAM)

World Labs
San Francisco, CA
$250,000 - $350,000

About The Position

About World Labs: We build foundational world models that can perceive, generate, reason, and interact with the 3D world — unlocking AI's full potential through spatial intelligence by transforming seeing into doing, perceiving into reasoning, and imagining into creating. We believe spatial intelligence will unlock new forms of storytelling, creativity, design, simulation, and immersive experiences across both virtual and physical worlds. We bring together a world-class team, united by a shared curiosity, passion, and deep backgrounds in technology — from AI research to systems engineering to product design — creating a tight feedback loop between our cutting-edge research and products that empower our users.

Role Overview

We’re looking for a SLAM Specialist to design, implement, and advance state-of-the-art simultaneous localization and mapping systems that enable accurate, robust spatial understanding from real-world sensor data. This role is focused on modern SLAM techniques — both classical and learning-based — with an emphasis on scalable state estimation, sensor fusion, and long-term mapping in complex, dynamic environments.

This is a hands-on, research-driven role for someone who enjoys working at the intersection of robotics, computer vision, and probabilistic inference. You’ll collaborate closely with research scientists, ML engineers, and systems teams to translate cutting-edge SLAM ideas into production-ready capabilities that form the backbone of our world modeling stack.

Requirements

  • 6+ years of experience working on SLAM, state estimation, robotics perception, or related areas.
  • Strong foundation in probabilistic estimation, optimization, and geometric vision (e.g., bundle adjustment, factor graphs, Kalman filtering); a small illustrative sketch follows this list.
  • Deep experience with one or more SLAM paradigms (visual, visual-inertial, lidar, multi-sensor, or hybrid systems).
  • Proficiency in Python and/or C++, with hands-on experience building research or production-grade SLAM systems.
  • Experience with numerical optimization libraries and/or robotics frameworks.
  • Familiarity with learning-based perception or representation learning and how it can augment classical SLAM pipelines.
  • Strong understanding of real-world sensor characteristics, calibration, synchronization, and noise modeling.
  • Proven ability to work in ambiguous, fast-moving environments and drive projects from concept through deployment.
  • A strong sense of ownership and engineering rigor: you care deeply about correctness, stability, and measurable improvements.
  • Enjoy collaborating with a small, high-caliber team and raising the technical bar through thoughtful design, experimentation, and code quality.
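
For candidates unfamiliar with the factor-graph framing mentioned above, here is a minimal, illustrative pose-graph sketch in Python. It assumes GTSAM as the optimization library and uses a toy square trajectory with a single loop closure; neither the library choice nor the data is prescribed by the role, and production systems here are substantially more involved.

    # Minimal pose-graph SLAM sketch using GTSAM's Python bindings (pip install gtsam).
    # GTSAM, the integer pose keys, and the square trajectory are illustrative choices only.
    import numpy as np
    import gtsam

    graph = gtsam.NonlinearFactorGraph()

    # Noise models: a prior on the first pose plus odometry / loop-closure constraints.
    prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
    odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))

    # Anchor the first pose at the origin.
    graph.add(gtsam.PriorFactorPose2(1, gtsam.Pose2(0.0, 0.0, 0.0), prior_noise))

    # Odometry factors: the robot drives a square, turning 90 degrees at each corner.
    graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(2.0, 0.0, np.pi / 2), odom_noise))
    graph.add(gtsam.BetweenFactorPose2(2, 3, gtsam.Pose2(2.0, 0.0, np.pi / 2), odom_noise))
    graph.add(gtsam.BetweenFactorPose2(3, 4, gtsam.Pose2(2.0, 0.0, np.pi / 2), odom_noise))

    # A loop-closure factor relating pose 4 back to pose 1 corrects accumulated drift.
    graph.add(gtsam.BetweenFactorPose2(4, 1, gtsam.Pose2(2.0, 0.0, np.pi / 2), odom_noise))

    # Deliberately noisy initial estimates, as raw odometry would provide.
    initial = gtsam.Values()
    initial.insert(1, gtsam.Pose2(0.1, -0.1, 0.05))
    initial.insert(2, gtsam.Pose2(2.2, 0.1, np.pi / 2 + 0.1))
    initial.insert(3, gtsam.Pose2(2.1, 2.1, np.pi - 0.1))
    initial.insert(4, gtsam.Pose2(-0.1, 2.0, -np.pi / 2 + 0.05))

    # Levenberg-Marquardt refines all poses jointly (global optimization over the graph).
    result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
    for k in range(1, 5):
        print(k, result.atPose2(k))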

Responsibilities

  • Design and implement modern SLAM systems for real-world environments, including visual, visual-inertial, lidar, or multi-sensor configurations.
  • Develop robust localization and mapping pipelines, including pose estimation, map management, loop closure, and global optimization.
  • Research and prototype learning-based or hybrid SLAM approaches that combine classical geometry with modern machine learning methods.
  • Build and maintain scalable state estimation frameworks, including factor graph optimization, filtering, and smoothing techniques.
  • Develop sensor fusion strategies that integrate cameras, IMUs, depth sensors, lidar, or other modalities to improve robustness and accuracy.
  • Analyze failure modes in real-world SLAM deployments (e.g., perceptual aliasing, dynamic scenes, drift) and design principled solutions.
  • Create evaluation frameworks, benchmarks, and metrics to measure SLAM accuracy, robustness, and performance across large datasets; a small metric sketch follows this list.
  • Optimize performance across the stack, including real-time constraints, memory usage, and compute efficiency, for large-scale and production systems.
  • Collaborate with reconstruction, simulation, and infrastructure teams to ensure SLAM outputs integrate cleanly with downstream world modeling and rendering pipelines.
  • Contribute to technical direction by proposing new research ideas, mentoring teammates, and helping define best practices for localization and mapping across the organization.
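
As an illustration of the evaluation work described above, here is a small, self-contained sketch of an absolute trajectory error (ATE) metric in Python with NumPy. The function names (align_umeyama, ate_rmse), the Nx3 time-synced trajectory format, and the synthetic data are hypothetical choices for the example, not part of any existing evaluation framework.

    # ATE sketch: rigidly align an estimated trajectory to ground truth, then report RMSE.
    # Assumes both trajectories are time-synchronized Nx3 arrays of positions (meters).
    import numpy as np

    def align_umeyama(est: np.ndarray, gt: np.ndarray) -> np.ndarray:
        """Rigidly align est to gt (rotation + translation, no scale) via SVD."""
        mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
        cov = (gt - mu_g).T @ (est - mu_e) / est.shape[0]
        U, _, Vt = np.linalg.svd(cov)
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against reflections
        R = U @ S @ Vt
        t = mu_g - R @ mu_e
        return est @ R.T + t

    def ate_rmse(est: np.ndarray, gt: np.ndarray) -> float:
        """Absolute trajectory error (RMSE) after rigid alignment."""
        aligned = align_umeyama(est, gt)
        return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))

    if __name__ == "__main__":
        # Toy ground truth (a shallow helix) and a noisy, offset estimate of it.
        s = np.linspace(0, 2 * np.pi, 200)
        gt = np.stack([np.cos(s), np.sin(s), 0.01 * s], axis=1)
        est = gt + np.random.normal(scale=0.02, size=gt.shape) + 0.5
        print(f"ATE RMSE: {ate_rmse(est, gt):.3f} m")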