In this role, you will develop vision-language-action (VLA) models for our onboard Behavior & Planning stack, with the goal of improving safe and robust decision-making in complex, long-tail driving scenarios. You will build multimodal models that connect scene understanding, contextual reasoning, and planning-relevant representations for real-world autonomous driving. The role focuses on advancing state-of-the-art VLAs for autonomy, spanning model development, large-scale training, fine-tuning, evaluation, and onboard optimization. You will partner closely with teams across behavior, planning, perception, systems, and infrastructure to translate research advances into practical capabilities deployed on our vehicles. If you are excited about building and deploying cutting-edge VLA systems for real-world robotics, we'd love to hear from you.
Job Type
Full-time
Career Level
Senior
Education Level
Ph.D. or professional degree
Number of Employees
501-1,000 employees