ML Compiler Engineer

Rivian · Palo Alto, CA

About The Position

Rivian is on a mission to keep the world adventurous forever. This goes for the emissions-free Electric Adventure Vehicles we build, and the curious, courageous souls we seek to attract. As a company, we constantly challenge what’s possible, never simply accepting what has always been done. We reframe old problems, seek new solutions, and operate comfortably in areas that are unknown. Our backgrounds are diverse, but our team shares a love of the outdoors and a desire to protect it for future generations.

We are looking for an ML Compiler Engineer with deep expertise in compiling deep learning models for hardware acceleration in autonomous systems. In this position, you will work closely with the Software, Hardware, System, and SoC groups to develop the infrastructure for compiling state-of-the-art machine learning models used in ADAS systems so that they execute efficiently on the SoC. You will research state-of-the-art perception models and develop optimization pipelines for quantized versions of these models, customized to deliver real-time performance and energy efficiency on next-generation autonomy hardware. This compiler enables hardware-software co-design and will yield efficient building blocks for state-of-the-art machine learning models. You will collaborate with cross-functional teams to understand workloads, enable them to run on hardware, and help define future enhancements to both the hardware and the models.

Requirements

  • Ph.D. or M.S. in Computer Engineering, Electrical Engineering, Computer Science, or related field with a focus on ML compilers, embedded systems, or hardware-aware AI.
  • Hands-on experience with quantized model deployment, ML compilation stacks, and code generation for embedded or heterogeneous compute systems.
  • Strong understanding of computer vision models (e.g., object detection, segmentation) and their optimization for edge inference.
  • Proficiency in deep learning frameworks (e.g., PyTorch, TensorFlow) and their low-level IRs or export formats (e.g., ONNX).
  • Solid programming skills in C++ and Python.

Nice To Haves

  • Prior experience working with hardware-software co-design, especially for autonomous or robotics platforms.
  • Deep knowledge of numerical precision trade-offs, quantization-aware training (QAT), and dynamic/static quantization flows.
  • Familiarity with embedded real-time constraints and hardware profiling/debugging tools.
  • Familiarity with rearchitecting models to best suit hardware capabilities.

Responsibilities

  • Research state-of-the-art perception models in collaboration with the ADAS software teams.
  • Lead the development of optimizations for mapping quantized perception models (e.g., CNNs, Transformers, LLMs) to embedded and heterogeneous hardware platforms.
  • Design and implement hardware-aware optimizations, including quantization strategies, model compression, memory-efficient representations, and operator fusion, targeted to custom accelerators.
  • Collaborate with hardware teams to co-optimize model architecture and compute pipeline under real-time constraints (latency, throughput, power).
  • Benchmark and analyze system performance across platforms and iterate to achieve optimal deployment efficiency.
  • Partner with perception, systems, and autonomy teams to align model optimization efforts with hardware roadmap and real-world autonomy requirements.