About The Position

NVIDIA pioneered accelerated computing to tackle challenges no one else can solve. Our work in AI, digital twins, and next-generation networking is transforming industries and profoundly impacting society — from AI factories and hyperscale data centers to autonomous systems, cloud infrastructure, and high-performance computing. Our internships offer an excellent opportunity to gain hands-on experience with NVIDIA’s industry-leading networking software and hardware stack, including BlueField DPUs, ConnectX SmartNICs, and the DOCA SDK. We’re seeking ambitious, technically strong, and curious students who want to help us push the boundaries of AI networking performance and scalability. Throughout the 12-week full-time internship, you’ll collaborate with experienced engineers on real customer-facing and product-enabling projects that have measurable impact.

Requirements

  • Pursuing a Bachelor’s, Master’s, or PhD program in Computer Science, Computer Engineering, Electrical Engineering, or a related field.
  • Strong programming skills in C or C++; familiarity with Python or Bash scripting is a plus.
  • Understanding of networking fundamentals (Ethernet, TCP/IP, RDMA, or RoCE) and experience with Linux development environments.
  • Solid problem-solving, debugging, and analytical skills.
  • A genuine passion for AI systems, distributed computing, and high-performance networking.

Nice To Haves

  • Prior experience or coursework involving DPDK, DOCA, NCCL, or CUDA.
  • Hands-on work with embedded systems, GPU networking, or data-center-scale computing.
  • Demonstrated self-learning and innovation (hackathons, open-source contributions, side projects).
  • Strong communication skills and the ability to thrive in collaborative, fast-paced environments.

Responsibilities

  • Work on AI infrastructure networking and systems software for large-scale data-center networking stacks, focusing on performance, reliability, and scalability.
  • Develop and optimize software for ConnectX SmartNICs and BlueField DPUs, including drivers, firmware, and DOCA/DPDK-based data-plane applications.
  • Contribute to RDMA (RoCE/Ethernet) networking to enable low-latency, high-throughput communication between GPUs, NICs, and DPUs in distributed AI/HPC systems.
  • Debug and tune end-to-end networking performance in large-scale distributed training environments.

Benefits

  • Intern benefits


What This Job Offers

  • Job Type: Full-time
  • Career Level: Intern
  • Number of Employees: 5,001-10,000 employees
