About The Position

NVIDIA is building advanced compiler technologies to accelerate AI workloads, and we are looking for an engineer focused on performance validation, analysis, and tracking. Do you want to help drive the performance of next-generation compilers? Are you excited by how GPU performance powers breakthroughs in deep learning, autonomous systems, and high-performance computing? We are seeking a talented Deep Learning Compiler & Tools Engineer focused on CUDA Tile (Performance & Infrastructure) to join our team.

In this role, you will work at the intersection of deep learning compilers, GPU systems, and automation infrastructure, ensuring that performance improvements are measurable, scalable, and continuously validated over time. You will collaborate closely with compiler developers, infrastructure providers, and hardware teams to build systems that track, analyze, and improve performance across rapidly evolving AI workloads. If you're passionate about performance, systems, and building infrastructure that drives real-world impact, we want to hear from you.

Requirements

  • BS, MS, or PhD (or equivalent experience) in Computer Science, Computer Engineering, Electrical Engineering, Mathematics, or related field
  • 5+ years of software engineering experience, including experience in performance engineering, benchmarking, or systems optimization
  • Strong programming skills in Python (C++ is a plus)
  • Experience with CI/CD systems and automation frameworks
  • Familiarity with hardware-aware performance analysis (GPUs, accelerators, or similar systems)
  • Experience working with deep learning frameworks such as PyTorch, TensorFlow, JAX, or TensorRT
  • Background in data analysis, profiling, and regression tracking
  • Ability to debug complex system-level issues across software and hardware layers

Nice To Haves

  • Experience with GPU performance analysis and optimization
  • Understanding of compiler internals (LLVM, MLIR, CUDA compilation flow)
  • Experience building performance dashboards and large-scale telemetry systems
  • Familiarity with hardware/software co-design or low-level performance tuning
  • Experience with distributed testing infrastructure or large-scale benchmarking systems

Responsibilities

  • Design and develop performance testing frameworks for deep learning compilers and workloads
  • Build and maintain automated pipelines (CI/CD) to continuously track performance across models, hardware, and compiler changes
  • Implement benchmarking systems to measure latency, throughput, and efficiency of AI and HPC workloads
  • Analyze performance trends over time and identify regressions, bottlenecks, and optimization opportunities
  • Partner with compiler and architecture teams to debug and resolve performance issues
  • Develop tools and dashboards for performance visualization, reporting, and insights
  • Enable scalable testing across diverse GPU systems and environments
  • Improve infrastructure to ensure reliable, reproducible, and high-signal performance data

Benefits

  • Equity
  • Benefits