About The Position

We are looking for a Senior Deep Learning Software Engineer, LLM Performance: an experienced engineer passionate about analyzing and improving the performance of LLM inference. NVIDIA is rapidly growing its research and development for deep learning inference and is seeking excellent software engineers at all levels of expertise to join our team.

Companies around the world use NVIDIA GPUs to power a revolution in deep learning, enabling breakthroughs in areas like LLMs, generative AI, recommenders, and vision that have put DL into every software solution. Join the team that builds the software enabling the performance optimization, deployment, and serving of these DL solutions. We specialize in developing GPU-accelerated deep learning software such as TensorRT, DL benchmarking tools, and performant solutions for deploying and serving these models.

In this role, you will collaborate with the deep learning community to implement the latest algorithms for public release in TensorRT-LLM, vLLM, SGLang, and LLM benchmarks; identify performance opportunities and optimize state-of-the-art LLM models across the spectrum of NVIDIA accelerators, from datacenter GPUs to edge SoCs; implement LLM inference, serving, and deployment algorithms and optimizations using TensorRT-LLM, vLLM, SGLang, Triton, and CUDA kernels; and work with a diverse set of teams spanning performance modeling, performance analysis, kernel development, and inference software development.

Requirements

  • Bachelor's, Master's, PhD, or equivalent experience in a relevant field (Computer Engineering, Computer Science, EECS, AI).
  • At least 8 years of relevant software development experience.
  • Excellent Python/C/C++ programming, software design, and software engineering skills.
  • Experience with a DL framework such as PyTorch, JAX, or TensorFlow.

Nice To Haves

  • Prior experience with an LLM framework or a DL compiler in inference, deployment, algorithms, or implementation.
  • Prior experience with performance modeling, profiling, debugging, and code optimization of DL/HPC/high-performance applications.
  • Architectural knowledge of CPUs and GPUs.
  • GPU programming experience (CUDA or OpenCL).

Responsibilities

  • Performance optimization, analysis, and tuning of LLM, VLM and GenAI models for DL inference, serving and deployment in NVIDIA/OSS LLM frameworks.
  • Scale performance of LLM models across different architectures and types of NVIDIA accelerators.
  • Optimize for maximum throughput, minimum latency, and throughput under latency constraints.
  • Contribute features and code to NVIDIA/OSS LLM frameworks, inference benchmarking frameworks, TensorRT, and Triton.
  • Work with cross-functional teams across generative AI, automotive, image understanding, and speech understanding to develop innovative solutions.

Benefits

  • You will also be eligible for equity and benefits.


What This Job Offers

  • Job Type: Full-time
  • Career Level: Senior
  • Education Level: Ph.D. or professional degree
  • Number of Employees: 5,001-10,000 employees

© 2024 Teal Labs, Inc