About The Position

NVIDIA is looking for a Senior Deep Learning Software Engineer, TensorRT Performance: an experienced engineer passionate about analyzing and improving the performance of NVIDIA’s inference ecosystem. We are rapidly growing our research and development for deep learning inference and are seeking excellent software engineers at all levels of expertise to join our team. Companies around the world use NVIDIA GPUs to power a revolution in deep learning, enabling breakthroughs in areas like generative AI, recommenders, and vision that have put DL into every software solution.

Join the team that builds the software enabling the performance optimization, deployment, and serving of these DL inference solutions. We specialize in GPU-accelerated deep learning inference software such as TensorRT, DL benchmarking software, and performant solutions for deploying and serving these models. You will collaborate with the deep learning community to integrate TensorRT into OSS frameworks like TensorRT-EdgeLLM and PyTorch; identify performance opportunities and optimize SoTA models across the spectrum of NVIDIA accelerators, from datacenter GPUs to edge SoCs; implement graph compiler algorithms, frontend operators, and code generators across NVIDIA’s inference ecosystem; and collaborate with a diverse set of teams on workflow improvements, performance modeling, performance analysis, kernel development, and inference software development.

Requirements

  • Bachelor’s, Master’s, PhD, or equivalent experience in a relevant field (Computer Science, Computer Engineering, EECS, AI).
  • At least 3 years of relevant software development experience.
  • Strong C++ and Python programming and software engineering skills.
  • Experience with DL frameworks (e.g. PyTorch, JAX, TensorFlow, ONNX) and inference libraries (e.g. TensorRT, TensorRT-LLM, vLLM, SGLang, FlashInfer).
  • Experience with performance analysis and performance optimization.

Nice To Haves

  • Strong foundation and architectural knowledge of GPUs.
  • Deep understanding of modern deep learning models and workloads (e.g. Transformers, Recommenders, ASR, TTS, Visual Understanding).
  • Proficiency in one of the deep learning domain-specific languages (e.g. CUDA, TileIR, CuTe DSL, CUTLASS, Triton).
  • Prior contributions to major LLM inference frameworks (e.g. vLLM) or prior experience with graph compilers in deep learning inference (e.g. TorchDynamo/TorchInductor).
  • Prior experience optimizing performance for low-latency, resource-constrained systems or embedded AI pipelines (e.g. Jetson systems or other edge AI accelerators).

Responsibilities

  • Establish groundbreaking performance benchmarking methodologies and analysis workflows, and identify performance issues and opportunities across NVIDIA’s inference ecosystem (e.g. TensorRT/TensorRT-EdgeLLM/Torch-TensorRT).
  • Contribute features and code to NVIDIA/OSS inference frameworks including but not limited to TensorRT/TensorRT-EdgeLLM/Torch-TensorRT.
  • Develop new model pipelines with optimized performance for NVIDIA’s inference ecosystem, in areas including but not limited to quantization, scheduling, memory management, and distributed inference, to set the gold standard for generative AI performance.
  • Collaborate with teams inside and outside of NVIDIA across generative AI, automotive, robotics, image understanding, and speech understanding to set directions and develop innovative inference solutions.
  • Scale performance of deep learning models across different architectures and types of NVIDIA accelerators.

Benefits

  • You will also be eligible for equity and benefits.