Are you passionate about pushing the limits of real-time large language model inference? Join NVIDIA’s TensorRT Edge-LLM team and help shape the next generation of edge AI for automotive and robotics. We build the software stack that enables Large Language, Vision-Language, and Vision-Language-Action (LLM/VLM/VLA) models to run efficiently on embedded and edge platforms, delivering cutting-edge generative AI experiences directly on-device.

What you’ll be doing:

- Develop and evolve a state-of-the-art inference framework in modern C++ that extends TensorRT with autoregressive model serving capabilities, including speculative decoding, LoRA, MoE, and KV cache management.
- Design and implement compiler and runtime optimizations tailored for transformer-based models running on constrained, real-time platforms.
- Collaborate with teams across CUDA, kernel libraries, compilers, and robotics to deliver high-performance, production-ready solutions.
- Contribute to CUDA kernel and operator development for critical transformer components such as attention, GEMM, and MoE.
- Benchmark, profile, and optimize inference performance across diverse embedded and automotive environments.
- Stay ahead of the rapidly evolving LLM/VLM ecosystem and bring emerging techniques into product-grade software.
Job Type: Full-time
Career Level: Mid Level