We are now looking for a Senior Deep Learning Architect for LLM Inference! NVIDIA is at the forefront of the generative AI revolution. Our Inference Benchmarking (IB) team focuses on advanced inference server performance for Large Language Models (LLMs). If you're passionate about pushing the boundaries of GPU hardware and software performance, and terms like prefill phase, generation phase, paged attention, MoE, Tensor Parallel, Llama, Mixtral, and HuggingFace are familiar to you, then this could be a great role for you!