About The Position

We are looking for a Senior Deep Learning Algorithms Engineer with hands-on experience optimizing and deploying Large Language Models (LLMs) and Vision-Language Models (VLMs) in production environments. In this role, you will focus on making deep learning models run efficiently and fast at inference time across diverse GPU platforms, collaborating with research scientists, software engineers, and hardware specialists to bring cutting-edge AI models from prototype to production.

As NVIDIA expands its datacenter business, our team plays a central role in getting the most out of our rapidly growing datacenter deployments and in establishing a data-driven approach to hardware design and system software development. We collaborate with a broad cross-section of teams at NVIDIA, ranging from DL research teams to CUDA kernel and DL framework development teams to silicon architecture teams. As our team grows and we pursue long-term opportunities, our skill-set needs are expanding as well. We have some of the most forward-thinking and hardworking people on the planet working with us. If you're creative and autonomous, we want to hear from you!

Requirements

  • Master’s or PhD in Computer Science, Electrical Engineering, Computer Engineering, or a related field (or equivalent experience).
  • 4+ years of professional experience in deep learning or applied machine learning.
  • Strong foundation in deep learning algorithms, including hands-on experience with LLMs and VLMs.
  • Deep understanding of transformer architectures, attention mechanisms, and inference bottlenecks.
  • Proficiency in building and deploying models with PyTorch or TensorFlow in production-grade environments.
  • Solid programming skills in Python and C++.

Nice To Haves

  • Proven experience deploying LLMs or VLMs at scale in real-world applications.
  • Hands-on experience with model optimization and serving frameworks such as TensorRT, TensorRT-LLM, vLLM, or SGLang.

Responsibilities

  • Optimize deep learning models for low-latency, high-throughput inference.
  • Convert and deploy models using frameworks such as TensorRT and TensorRT-LLM.
  • Understand, analyze, profile, and optimize performance of deep learning workloads on state-of-the-art hardware and software platforms.
  • Collaborate with internal and external researchers to ensure seamless integration of models from training to deployment.

Benefits

  • You will be eligible for equity and benefits.