About The Position

An applied research team within NVIDIA’s Networking Systems & Software Architecture group is solving some of AI’s hardest infrastructure problems. The team builds systems-level software that moves data between GPUs, nodes, and storage at the speed modern AI demands—spanning low-level transport optimization, hardware-software co-design, and communication frameworks that plug directly into production AI stacks. The team's charter also extends into emerging domains, including quantum computing interconnects. This Principal Architect role leads the research agenda and architectural direction for how NVIDIA’s AI systems communicate at scale—across GPUs, DPUs, NICs, and heterogeneous storage. It requires someone who defines project scope from scratch, publishes original work, and translates research breakthroughs into production-grade software that ships industry-wide.

Requirements

  • 15+ years in systems software and/or networking, with deep expertise in high-performance networking (InfiniBand, RoCE, RDMA, NVLink), communication libraries (e.g., NIXL, NCCL, UCX, MPI, NVSHMEM), and GPU-accelerated systems, and a track record of defining and delivering complex, cross-team technical initiatives from research concept to production.
  • MS, PhD or equivalent experience in Computer Science, Computer Engineering, Electrical Engineering, or a related field.
  • Deep understanding of computer architecture, memory hierarchies, DMA engines, and OS-level networking.
  • Understanding of ML systems concepts—transformer architectures, KV cache mechanics, model parallelism, or distributed training and inference patterns.
  • Proficiency in programming languages such as C, C++, Rust, and Python.

Nice To Haves

  • Knowledge of ML inference frameworks (vLLM, SGLang, TensorRT-LLM) and their communication requirements.
  • CUDA programming and NVIDIA GPU architecture expertise.
  • Proven experience influencing product strategy and technical roadmaps at a senior level.
  • Major open-source contributions.

Responsibilities

  • Setting the long-term technical vision for distributed AI communication systems—GPU-to-GPU, GPU-to-storage, and cross-node data movement.
  • Conducting original research and prototyping next-generation networking solutions over RDMA, NVLink, and GPUDirect.
  • Driving hardware-software co-optimization across GPUs, DPUs, NICs, and network switches.
  • Investigating fundamental bottlenecks in communication runtimes for large-scale AI workloads (KV cache transfer, disaggregated prefill/decode, model parallelism).
  • Integrating networking capabilities into AI serving stacks such as vLLM, SGLang, and TensorRT-LLM.
  • Publishing findings, representing NVIDIA in industry forums and standards bodies, and mentoring senior engineers across the organization.

Benefits

  • Competitive salaries
  • Comprehensive benefits package
  • Equity