About The Position

The Deep Learning performance engineering team at NVIDIA is hiring software engineers at all experience levels to build and optimize the libraries and tools that enable Deep Learning researchers and engineers to design, develop, and deploy efficient AI applications. We are an ambitious and diverse team that builds optimizations directly into mainstream open-source Deep Learning frameworks such as PyTorch and JAX, boosting performance at every level of NVIDIA's AI stack. Our team has a wide collaborative footprint, working not only with multiple teams across NVIDIA but also with the broader open-source community to deliver state-of-the-art Deep Learning performance on the best AI platform in the world.

Requirements

  • BS or equivalent experience in Computer Science, Electrical Engineering, or a related field.
  • 3+ years of experience in C++ and Python programming.
  • Strong background, experience, or coursework in parallel systems programming, preferably on GPUs.
  • Knowledge of Computer Architecture, Code Optimization, and/or Operating Systems.
  • Proven experience in developing large software projects.
  • Excellent verbal and written communication skills.

Nice To Haves

  • Experience with PyTorch, JAX, or other DL frameworks.
  • Experience with performance analysis, profiling, and code optimization techniques, especially with multi-GPU or multi-node systems.
  • Knowledge of modern LLM architectures, attention mechanisms, and/or low-level DL libraries such as cuBLAS, cuDNN, and cuSOLVER.
  • Experience writing GPU kernels using CUDA, OpenAI Triton, CuTeDSL, Pallas, or similar libraries.
  • Contributions to the open-source community and/or experience working with multidisciplinary teams.

Responsibilities

  • Build and support Transformer Engine, the open-source library for accelerating the training of Large Language Models.
  • Collaborate on systems research that improves Deep Learning model performance, such as training in extremely low precision and new parallelism methods.
  • Implement, benchmark, and optimize new Deep Learning models such as LLMs straight out of groundbreaking research to scale efficiently on NVIDIA GPUs and systems.
  • Build and contribute to NVIDIA submissions on community benchmarks such as MLPerf.
  • Engage with the open-source community as well as support enterprise customers and partners by delivering the benefits of NVIDIA’s latest hardware and software innovations.
  • Influence the design of new hardware generations and core platform software components for NVIDIA hardware and systems.

Benefits

  • You will be eligible for equity and benefits.