About The Position

NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It's a unique legacy of innovation that's fueled by great technology—and amazing people. Today, we're tapping into the unlimited potential of AI to define the next era of computing: an era in which our GPU acts as the brains of computers, robots, and self-driving cars that can understand the world. Doing what's never been done before takes vision, innovation, and the world's best talent. As an NVIDIAN, you'll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.

Help design and ship an Always‑On, low‑overhead GPU profiling service that runs in production, scales across cluster environments, and delivers actionable insights for ML workloads. You will lead the architecture and hands‑on delivery across system software, drivers, and CUDA to make profiling continuously available and reliable. What you'll be doing is listed under Responsibilities below.

Requirements

  • BS or MS in Computer Engineering, Computer Science, or a related field, or equivalent experience.
  • 8+ years of system-level C/C++ development, including concurrency, memory management, and performance engineering; experience with system software design, operating-systems fundamentals, computer architecture, performance analysis, and delivering production-quality software.
  • Expertise with profiling/tracing stacks for CPU/GPU (e.g., CUPTI, Nsight, performance counters, event correlation) and debugging concurrent systems.
  • Deep hands‑on CUDA and GPU architecture knowledge (runtime/driver APIs, CUDA streams/graphs, kernel behavior).
  • Proven experience designing and shipping production-quality system software or drivers with strict reliability, observability, and performance constraints.
  • Demonstrated technical leadership: defining architecture and success metrics, and translating abstract product visions into actionable technical roadmaps with fast-paced, multidisciplinary teams.
  • Strong interpersonal, verbal, and written communication; able to influence across organizations and build trust with external collaborators.

Nice To Haves

  • Track record building continuous/always‑on or multi‑client profiling systems with predictable overhead at scale.
  • Hands-on experience tuning ML training/inference loops based on deep profiling analysis.
  • Familiarity with ML ecosystems (e.g., PyTorch, JAX) and correlating application‑level events with GPU traces/metrics.
  • Strong background in translating profiling data into actionable performance insights (compute vs memory bound, bottleneck triage).
  • Experience with user‑mode driver development and integration with platform permissions/security models.
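The "compute vs memory bound" triage mentioned above usually starts from a roofline-style comparison: a kernel's arithmetic intensity (FLOPs per byte moved) measured against the GPU's machine balance point (peak FLOP/s divided by peak bandwidth). A minimal sketch of that classification, with illustrative placeholder peak numbers rather than specs of any particular part:

```cpp
#include <string>

// Hypothetical peak throughput figures for a GPU; real values would come
// from the hardware spec or microbenchmarks.
struct GpuPeaks {
    double flops_per_sec;  // peak compute throughput
    double bytes_per_sec;  // peak DRAM bandwidth
};

// Classify a kernel as memory- or compute-bound by comparing its measured
// arithmetic intensity against the machine balance point.
std::string classify_kernel(double flops, double bytes_moved,
                            const GpuPeaks& gpu) {
    const double intensity = flops / bytes_moved;              // FLOPs per byte
    const double balance = gpu.flops_per_sec / gpu.bytes_per_sec;
    return intensity < balance ? "memory-bound" : "compute-bound";
}
```

In practice the FLOP and byte counts would come from hardware performance counters, but the triage logic itself is this simple comparison.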

Responsibilities

  • Drive low‑overhead, high‑reliability implementations in C/C++, including IPC/shared memory, lock‑free buffers, and bounded CPU/memory budgets with clear benchmarks.
  • Lead end‑to‑end feature delivery spanning user‑mode components, driver/platform layers, and performance counter/trace providers.
  • Establish profiling models that integrate with existing ML/AI workflows (e.g., PyTorch/XLA) to turn low‑level signals into actionable insights.
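The "lock-free buffers ... with bounded CPU/memory budgets" responsibility above can be illustrated with a single-producer/single-consumer ring buffer of the kind an always-on profiler might use to hand trace records from an instrumented thread to a collector without blocking either side. This is a hypothetical sketch, not NVIDIA's implementation; capacity is fixed up front, so the memory budget stays bounded and a full buffer drops records rather than stalling the workload:

```cpp
#include <atomic>
#include <cstddef>
#include <optional>

// Hypothetical SPSC lock-free ring buffer with a fixed, power-of-two
// capacity. head_ is written only by the producer, tail_ only by the
// consumer, so a single acquire/release pair per operation suffices.
template <typename T, size_t Capacity>
class SpscRing {
    static_assert((Capacity & (Capacity - 1)) == 0,
                  "Capacity must be a power of two");
public:
    // Producer side: returns false (record dropped) when the buffer is
    // full, so the instrumented thread never blocks.
    bool try_push(const T& item) {
        const size_t head = head_.load(std::memory_order_relaxed);
        const size_t tail = tail_.load(std::memory_order_acquire);
        if (head - tail == Capacity) return false;  // full: drop, don't block
        buf_[head & (Capacity - 1)] = item;
        head_.store(head + 1, std::memory_order_release);
        return true;
    }

    // Consumer side: returns std::nullopt when no record is available.
    std::optional<T> try_pop() {
        const size_t tail = tail_.load(std::memory_order_relaxed);
        const size_t head = head_.load(std::memory_order_acquire);
        if (tail == head) return std::nullopt;  // empty
        T item = buf_[tail & (Capacity - 1)];
        tail_.store(tail + 1, std::memory_order_release);
        return item;
    }

private:
    T buf_[Capacity];
    std::atomic<size_t> head_{0};  // advanced by producer only
    std::atomic<size_t> tail_{0};  // advanced by consumer only
};
```

Dropping on overflow (and counting the drops) is a common design choice for production profilers: a bounded, never-blocking fast path keeps the overhead budget predictable even when the collector falls behind.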

Benefits

  • You will also be eligible for equity and benefits.