Senior Autonomy Engineer

Brain Corp, San Diego, CA
Onsite

About The Position

As a Senior Autonomy Engineer on our R&D team, you'll help define the next generation of software that lets robots perceive, learn, and act in unstructured indoor environments. You'll work across the modern autonomy stack — from learned perception and prediction to mapping and motion planning — and ship capabilities that generalize across our fleet. We're looking for someone equally comfortable reading a fresh arXiv paper, writing production C++/Python, and debugging behavior on a real robot in the lab. You'll help set technical direction, raise the bar on engineering quality, and mentor others on the team.

Requirements

  • Master's degree or Ph.D. in Computer Science, Robotics, Electrical Engineering, or a related field, or equivalent demonstrated experience.
  • 5+ years of relevant industry or research experience building autonomy, perception, or ML systems.
  • Strong fluency in Python and C++ in a Linux environment.
  • Demonstrated track record of taking research ideas and papers into deployed implementations.
  • Depth in one or more of: machine learning (supervised, self-supervised, imitation, RL), SLAM and state estimation, motion planning, or 3D perception.
  • Hands-on experience with PyTorch (and/or JAX), modern training pipelines, and contemporary architectures (transformers, diffusion models, vision-language-action models).
  • Experience designing robotic systems with ROS 2 (or comparable middleware) and contemporary simulation tools such as Isaac Sim, MuJoCo, or Gazebo.
  • Comfort with the full ML lifecycle: data curation and labeling strategy, large-scale training, offline and online evaluation, and continuous deployment to production hardware.
  • Solid systems and software architecture instincts; pragmatism about when a learned approach beats a classical one and vice versa.
  • Familiarity with modern engineering practices: CI/CD, code review, observability, and iterative delivery (e.g., Agile, Scrum).

Nice To Haves

  • Contributions to open-source robotics or ML projects, publications at top venues (CoRL, RSS, ICRA, NeurIPS, CVPR, ICML), or experience with on-device acceleration (TensorRT, ONNX, custom CUDA kernels).

Responsibilities

  • Design, train, and deploy machine learning systems for perception, SLAM, prediction, and motion planning that enable safe navigation around people and obstacles.
  • Build and improve learned components — including transformer-based perception, vision-language models for scene understanding, diffusion or imitation-learning policies, and neural/Gaussian-splatting approaches to mapping — and integrate them with classical estimation and planning where it makes sense.
  • Translate state-of-the-art research (papers, open-source releases, conference talks) into production-quality implementations on the robot.
  • Develop data engines and evaluation infrastructure that turn fleet logs into training data, regression tests, and shipped improvements.
  • Own features end-to-end: from prototype, through sim and on-robot validation, to fleet rollout, with measurement of real-world impact.
  • Improve runtime performance of perception, mapping, and planning on embedded GPU/accelerator hardware (quantization, distillation, kernel work where warranted).
  • Contribute to internal frameworks, simulation tooling, and developer experience that compound team velocity.
  • Provide guidance and mentorship to engineers across robotics, ML, and the software systems that support them.

Benefits

  • Named a top workplace by The San Diego Union-Tribune and USA Today in 2025.