About The Position

Motional's CORE team is responsible for our vehicle's Compute and Onboard Runtime Environment. We are creating a world-leading AI compute platform for autonomous vehicles: the system that executes the software and neural networks that make our vehicles autonomous. The Next-Gen Technologies team is part of CORE. We work at the intersection of software engineering, machine learning, sensors, and hardware compute platforms to evolve Motional's on-board vehicle architecture. If you are a software engineer who loves the idea of working on embedded AI hardware and software systems to create the next generation of autonomous vehicles, we would love to talk with you.

Requirements

  • Experience with machine learning accelerators (GPUs, NPUs, TPUs) and their programming environments, such as CUDA, TensorRT, or similar technologies.
  • Strong experience with modern C++ development in a Linux environment.
  • Experience with parallel and high-performance computing.
  • Comfortable with experimentation and evaluating different options.
  • A degree in Software Engineering, Computer Science, Electrical or Electronic Engineering, or similar technical field of study, or equivalent practical experience.

Nice To Haves

  • Experience with PyTorch, TensorFlow, ONNX, and/or other ML frameworks.
  • Experience with embedded systems development for ARM-based system-on-chip architectures.
  • Experience working in an MLOps or DevOps environment.
  • Passion for self-driving technology and its potential for positive impact on the world.

Responsibilities

  • Help improve the compute performance of current and next-generation autonomous driving systems through full lifecycle development.
  • Focus on ML model deployment, integration of multiple ML models, and ML model optimization on embedded compute platforms.
  • Analyze ML workload performance on various hardware processors and optimize ML models.
  • Design, develop, test, integrate, and optimize software and tools on various ML compute architectures.
  • Collaborate with deep learning experts to enable algorithms on GPU, NPU, and other ML accelerator architectures.
  • Optimize GPU/NPU resource utilization and the sharing of GPU/NPU access across multiple programs.
  • Lead design efforts to determine system needs and improve the ML software stack.
  • Advise peers and management on technical matters.


What This Job Offers

  • Job Type: Full-time
  • Career Level: Senior
  • Industry: Transportation Equipment Manufacturing
  • Education Level: Bachelor's degree
  • Number of Employees: 1,001-5,000 employees
