We are now looking for a Senior Machine Learning Engineer for Quantized Inference! NVIDIA is seeking machine learning engineers to accelerate the discovery and deployment of efficient inference recipes for LLMs. A recipe defines which operators are transformed into low-precision or sparsified variants, unlocking throughput and latency gains without regressing accuracy or verbosity. Recipes may incorporate techniques such as rotations, block scaling to attenuate outlier impact, or improved calibration data drawn from SFT/RL pipelines.

Pushing the frontier of inference efficiency requires a holistic view of the workload. The candidate will navigate the full design space: identifying which layers are sensitive to quantization relative to their inference cost, diagnosing why specific recipes fail, and adapting training techniques such as quantization-aware distillation or targeted fine-tuning to recover accuracy where needed.

Our team develops quantized and sparse recipes that ship and run at scale across NVIDIA's LLM product portfolio. Our recipes directly determine the cost and latency of serving models to millions of users. We collaborate with inference framework teams (vLLM, TRT-LLM) to ensure recipes translate into real throughput gains, and with post-training teams to source calibration data and co-design quantization-aware training curricula.
Job Type
Full-time
Career Level
Senior