About The Position

NVIDIA's GPUs are at the core of modern AI infrastructure, from training large-scale models to running inference in production. That infrastructure depends on software as much as hardware, and compiler engineering is a big part of what makes it work. We are looking for an outstanding AI Research Engineer / Applied Scientist focused on compilers and low-level optimization to join the team and develop groundbreaking technologies in machine learning compilers and AI systems. We build innovative AI compiler solutions that work with NVIDIA's software stack to provide comprehensive acceleration for modern machine learning models. NVIDIA is the world leader in accelerated computing, pioneering solutions to challenges no one else can solve. Our work in AI and digital twins is transforming the world's largest industries and profoundly impacting society.

Requirements

  • M.S. or Ph.D. in Computer Engineering, Computer Science, or a related technical field (or equivalent experience).
  • 5+ years of experience building AI/ML systems.
  • Strong software engineering skills in Python and at least one systems language (C++ preferred).
  • Hands-on experience training/fine-tuning large models (Transformers, PEFT/LoRA, distributed training).
  • Solid understanding of machine learning fundamentals and experimentation best practices.
  • Experience with reinforcement learning (e.g., policy gradients, actor-critic, offline RL, bandit-style optimization).
  • Knowledge of prompt-engineering techniques.
  • Ability to work across research and engineering, from prototype to production.

Nice To Haves

  • Distributed training/inference at scale.
  • Experience working with the NVIDIA NeMo framework.
  • Understanding of GPU performance, experience with benchmarking suites and performance profiling tools.
  • Formal methods or static analysis familiarity for correctness guarantees.
  • CUDA programming experience.

Responsibilities

  • Help trailblaze company efforts in applying AI within conventional compilation pipelines.
  • Design and implement AI-based technology addressing core problems of low-level GPU programming.
  • Build training pipelines for supervised fine-tuning and reinforcement learning (RL/RLHF-style or policy optimization variants).
  • Define model inputs/outputs over low-level compiler representations.
  • Develop evaluation frameworks to measure code quality, runtime, compile-time overhead, and correctness.
  • Apply intelligent, domain- and task-specific prompt engineering.
  • Collaborate with compiler engineers to integrate learned policies into production toolchains.
  • Prototype and iterate on model architectures, prompts, and fine-tuning strategies for scheduling and allocation tasks.
  • Create datasets from compiler traces, optimization passes, and target-specific performance signals.
  • Apply RL techniques to optimize for downstream objectives (performance, spill reduction, instruction-level parallelism, etc.) and run rigorous experiments, ablations, and benchmarking across workloads and hardware targets.

Benefits

  • Competitive salaries.
  • Generous benefits package.
  • Equity.

What This Job Offers

Job Type

Full-time

Career Level

Senior

Education Level

Ph.D. or professional degree

Number of Employees

5,001-10,000 employees