About The Position

The Behavior Understanding and Evaluation team at Motional is responsible for defining how we measure and validate autonomous vehicle behavior at scale. To prepare for large-scale driverless deployment, manual review and static metric thresholds are no longer sufficient. We aim to build automated, statistically rigorous systems using cutting-edge machine learning techniques to understand and evaluate our vehicles' performance, both in real-world deployment and within simulated environments. Determining whether a simulation "Passed" or "Failed" is a particular challenge given the multi-modal complexity of real-world human driving.

This is a research-forward engineering role focused on helping us build a Next-Generation Semantic Validator: a production-grade machine learning evaluation system that learns the distribution of valid human driving behavior and uses it as a "Safety Ruler" for autonomous vehicle releases. This internship is based in our Boston office and requires in-office days each week.

Requirements

  • Currently pursuing a PhD in Computer Science, Robotics, Machine Learning, Statistics, or a related field.
  • Strong foundation in scientific and statistical methodologies.
  • Expertise in Machine Learning and Deep Learning, specifically modern Sequence Modeling (Transformers, Self-Attention, and Cross-Attention applied to time-series or trajectory data).
  • Hands-on experience with Generative AI paradigms (e.g., treating motion as a generative next-token prediction task, or using Diffusion Models).
  • Strong grasp of Probabilistic ML and uncertainty quantification to distinguish between "rare but safe" behaviors and out-of-distribution failures.
  • Strong software engineering skills in Python and standard deep learning frameworks (PyTorch or TensorFlow).

Nice To Haves

  • Domain knowledge in Autonomous Vehicles, specifically Motion Planning, Kinematics, or Behavior Prediction.
  • Familiarity with Conformal Prediction or other distribution-free uncertainty quantification techniques.
  • Experience with Inverse Reinforcement Learning (IRL) or inferring intent from human driving.
  • Experience with JAX / Flax and composable function transformations (jit, vmap) for high-performance computing.
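For context on the jit/vmap composition mentioned above, here is a minimal sketch (function names and data are illustrative, not part of any Motional codebase) that lifts a single-trajectory kinematics computation to a compiled, batched one:

```python
import jax
import jax.numpy as jnp

def trajectory_speeds(traj, dt=0.1):
    # traj: (T, 2) array of x/y positions sampled every dt seconds;
    # finite-difference speed (m/s) for each of the T-1 steps
    deltas = jnp.diff(traj, axis=0)
    return jnp.linalg.norm(deltas, axis=-1) / dt

# vmap lifts the per-trajectory function over a leading batch axis;
# jit compiles the composed function with XLA
batched_speeds = jax.jit(jax.vmap(trajectory_speeds))

# 8 hypothetical straight-line trajectories moving 0.5 m per step in x
trajs = jnp.zeros((8, 50, 2)).at[:, :, 0].set(jnp.arange(50) * 0.5)
speeds = batched_speeds(trajs)  # shape (8, 49), each entry 5.0 m/s
```

The same pattern (write the single-example function, then compose `vmap` and `jit`) is what makes JAX attractive for evaluating large batches of simulated trajectories.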

Responsibilities

  • Develop Advanced Models: Leverage Transformer-based Generative Models (e.g., Trajectory Transformer or MotionLM architectures) to learn the 'grammar' of valid human driving, conditioning on ground truth from both the past and the future. This enables rapid assessment of large-scale simulations that would otherwise require a human in the loop.
  • Establish Statistical Safety Guarantees: Pioneer the definition and implementation of key evaluation metrics using techniques such as Conformal Prediction to establish rigorous, dynamic safety envelopes around predicted trajectories.
  • Benchmark Methods: Own the benchmarking initiative comparing traditional geometric methods (e.g., single-trajectory comparison) against cutting-edge generative and ML-based approaches to demonstrate a reduction in "False Fails."
  • Collaborate: Partner cross-functionally with Behaviors, Actions, Simulation, System Engineering, and Research to share insights on multi-modal truth and probabilistic safety.
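To make the Conformal Prediction responsibility concrete, here is a minimal split-conformal sketch in NumPy: calibrate a distribution-free error threshold on held-out trajectory errors, then flag new trajectories that fall outside the resulting safety envelope. The error model and numbers are hypothetical assumptions for illustration only:

```python
import numpy as np

def conformal_radius(cal_errors, alpha=0.1):
    # Split conformal prediction: the finite-sample-corrected quantile of the
    # calibration nonconformity scores guarantees >= 1 - alpha coverage
    n = len(cal_errors)
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(cal_errors, min(q, 1.0), method="higher")

rng = np.random.default_rng(0)
# hypothetical per-timestep displacement errors (meters) between predicted
# and observed trajectories on a held-out calibration set
cal_errors = rng.gamma(shape=2.0, scale=0.3, size=1000)

radius = conformal_radius(cal_errors, alpha=0.1)
# a new trajectory is flagged only if its error exceeds the calibrated envelope
new_error = 0.4
flagged = new_error > radius
```

The appeal of this construction is that the coverage guarantee holds regardless of the underlying error distribution, which is exactly the kind of statistical rigor the role calls for when separating "rare but safe" behaviors from true failures.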