NVIDIA · Posted 4 months ago
$148,000 - $287,500/Yr
Senior
Santa Clara, CA
Computer and Electronic Product Manufacturing

NVIDIA is seeking senior engineers focused on performance analysis and optimization to help us squeeze every last clock cycle out of Deep Learning workloads. If you are unafraid to work across all layers of the hardware/software stack, from GPU architecture to Deep Learning frameworks, to achieve peak performance, we want to hear from you! This role offers an opportunity to directly impact the hardware and software roadmap at a fast-growing technology company leading the AI revolution.

Responsibilities:

  • Implement language and multimodal model inference as part of NVIDIA Inference Microservices (NIMs).
  • Contribute new features, fix bugs, and deliver production code to TensorRT-LLM (TRT-LLM), NVIDIA's open-source LLM inference library.
  • Profile and analyze bottlenecks across the full inference stack to push the boundaries of inference performance.
  • Benchmark state-of-the-art inference offerings across various DL models and perform competitive analysis of the NVIDIA SW/HW stack.
  • Collaborate closely with other SW/HW co-design teams to enable the creation of the next generation of AI-powered services.

Qualifications:

  • PhD in CS, EE, or CSEE, or equivalent experience.
  • 3+ years of experience.
  • Strong background in deep learning and neural networks, particularly inference.
  • Experience with performance profiling, analysis, and optimization, especially for GPU-based applications.
  • Proficiency in C++ and in PyTorch or equivalent frameworks.
  • Deep understanding of computer architecture and familiarity with the fundamentals of GPU architecture.
  • Proven experience with processor and system-level performance optimization.
  • Deep understanding of modern LLM architectures.
  • Strong fundamentals in algorithms.
  • GPU programming experience (CUDA or OpenCL) is a strong plus.
  • Equity and benefits.