SLAM Specialist

World Labs · San Francisco, CA

About The Position

At World Labs, we're building Large World Models: AI systems that understand, reason about, and interact with the physical world. Our work sits at the frontier of spatial intelligence, robotics, and multimodal AI, with the goal of enabling machines to perceive and operate in complex real-world environments. We're assembling a global team of researchers, engineers, and builders to push beyond today's limitations in artificial intelligence. If you're excited to work on foundational technology that will redefine how machines understand the world, and how people interact with AI, this role is for you.

About World Labs

World Labs is an AI research and development company focused on creating spatially intelligent systems that can model, reason, and act in the real world. We believe the next generation of AI will not live only in text or pixels, but in three-dimensional, dynamic environments, and we are building the core models to make that possible. Our team brings together expertise across machine learning, robotics, computer vision, simulation, and systems engineering. We operate with the urgency of a startup and the ambition of a research lab, tackling long-horizon problems that demand creativity, rigor, and resilience. Everything we do is in service of building the most capable world models possible, and using them to empower people, industries, and society.

Role Overview

We're looking for a SLAM Specialist to design, implement, and advance state-of-the-art simultaneous localization and mapping (SLAM) systems that enable accurate, robust spatial understanding from real-world sensor data. This role focuses on modern SLAM techniques, both classical and learning-based, with an emphasis on scalable state estimation, sensor fusion, and long-term mapping in complex, dynamic environments. This is a hands-on, research-driven role for someone who enjoys working at the intersection of robotics, computer vision, and probabilistic inference. You'll collaborate closely with research scientists, ML engineers, and systems teams to translate cutting-edge SLAM ideas into production-ready capabilities that form the backbone of our world modeling stack.

Requirements

  • 6+ years of experience working on SLAM, state estimation, robotics perception, or related areas.
  • Strong foundation in probabilistic estimation, optimization, and geometric vision (e.g., bundle adjustment, factor graphs, Kalman filtering).
  • Deep experience with one or more SLAM paradigms (visual, visual-inertial, lidar, multi-sensor, or hybrid systems).
  • Proficiency in Python and/or C++, with hands-on experience building research or production-grade SLAM systems.
  • Experience with numerical optimization libraries and/or robotics frameworks.
  • Familiarity with learning-based perception or representation learning and how it can augment classical SLAM pipelines.
  • Strong understanding of real-world sensor characteristics, calibration, synchronization, and noise modeling.
  • Proven ability to work in ambiguous, fast-moving environments and drive projects from concept through deployment.
  • A strong sense of ownership and engineering rigor: you care deeply about correctness, stability, and measurable improvements.
  • Enthusiasm for collaborating with a small, high-caliber team and for raising the technical bar through thoughtful design, experimentation, and code quality.

Responsibilities

  • Design and implement modern SLAM systems for real-world environments, including visual, visual-inertial, lidar, or multi-sensor configurations.
  • Develop robust localization and mapping pipelines, including pose estimation, map management, loop closure, and global optimization.
  • Research and prototype learning-based or hybrid SLAM approaches that combine classical geometry with modern machine learning methods.
  • Build and maintain scalable state estimation frameworks, including factor graph optimization, filtering, and smoothing techniques.
  • Develop sensor fusion strategies that integrate cameras, IMUs, depth sensors, lidar, or other modalities to improve robustness and accuracy.
  • Analyze failure modes in real-world SLAM deployments (e.g., perceptual aliasing, dynamic scenes, drift) and design principled solutions.
  • Create evaluation frameworks, benchmarks, and metrics to measure SLAM accuracy, robustness, and performance across large datasets.
  • Optimize performance across the stack, including real-time constraints, memory usage, and compute efficiency, for large-scale and production systems.
  • Collaborate with reconstruction, simulation, and infrastructure teams to ensure SLAM outputs integrate cleanly with downstream world modeling and rendering pipelines.
  • Contribute to technical direction by proposing new research ideas, mentoring teammates, and helping define best practices for localization and mapping across the organization.

Benefits

  • Base salary plus equity awards and annual performance bonus