About The Position

NVIDIA's invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. Today, we are increasingly known as “the AI computing company”. We're looking for a Senior Performance Compiler Engineer to join our team and work on the open-source Triton compiler project. In this role, you will work with new GPU technologies and apply compiler techniques to improve AI performance on NVIDIA GPUs. Your work will enable breakthroughs in large language models, agents, and other high-impact AI applications, accelerating both training and inference. You will be immersed in a diverse, supportive environment where everyone is inspired to do their best work, pushing the limits of what's possible.

Requirements

  • Bachelor's, Master's, or Ph.D. degree (or equivalent experience) in Computer Science, Computer Engineering, Applied Math, or a related field.
  • 8+ years of relevant industry experience in software development.
  • Demonstrated strong C++ programming and software design skills, with an emphasis on performance analysis and debugging.
  • Experience in parallel programming, including CUDA/OpenCL GPU programming or other parallel models such as OpenMP.
  • Solid understanding of computer architecture and hands-on experience with assembly-level programming.

Nice To Haves

  • Experience in tuning BLAS or deep learning library kernels.
  • Background in numerics and linear algebra.
  • Experience with machine learning compilers like TVM or MLIR.
  • Contributions to open-source projects, especially in the AI/ML, compiler, or high-performance computing space.
  • Familiarity with the latest research in AI algorithms and numerics.

Responsibilities

  • Investigating the latest and future NVIDIA GPU hardware architectures and programming models.
  • Working on the frontier of AI by understanding advanced algorithms (like attention sinks and MoEs) and numerics (like block-scaled floating point) to identify new opportunities for optimization.
  • Designing and implementing compiler technology using MLIR to optimize high-level kernel descriptions (written in Triton's Python DSL), with a focus on generating efficient, low-level GPU code.
  • Using inline PTX, where needed, to hand-tune critical code paths and extract peak performance from the hardware.
  • Engaging in a dynamic, iterative process of optimization—sometimes starting with the kernel, sometimes with the compiler—to find the most efficient path to peak performance.
  • Collaborating with teams across NVIDIA, including hardware architects and the CUDA compiler team, to influence future products and ensure we are always operating at maximum efficiency.

Benefits

  • With competitive salaries and a generous benefits package, we are widely considered one of the technology world’s most desirable employers.
  • You will also be eligible for equity.