About The Position

Our work at NVIDIA is dedicated to a computing model focused on visual and AI computing. For two decades, NVIDIA has pioneered visual computing, the art and science of computer graphics, with our invention of the GPU. The GPU has also proven to be spectacularly effective at solving some of the most complex problems in computer science. Today, NVIDIA's GPUs simulate human intelligence, running deep learning algorithms and acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. We are looking to grow our company and teams with the smartest people in the world, and there has never been a more exciting time to join us!

NVIDIA's accelerated computing platform is the foundation of modern HPC and AI. At the core of this platform are the CUDA Core Libraries: C++ and Python libraries that enable developers to write fast, reliable, scalable GPU-accelerated software. We are looking for outstanding interns to contribute to the build, testing, packaging, and developer experience that accelerate development and deliver the CUDA Core Libraries powering GPU computing for both C++ and Python developers. This includes projects like CCCL (Thrust, CUB, libcudacxx), cuda-python, and numba-cuda. Join the team that builds, tests, and packages the foundational libraries, algorithms, language, and compiler infrastructure that make CUDA a speed-of-light delight for developers across a wide range of workloads, including deep learning, scientific computing, and data analytics.

Requirements

  • Currently pursuing a BS, MS, or PhD in Computer Science, Computer Engineering, or a related field.
  • Experience with build systems such as CMake or scikit-build-core, along with packaging for Conda and/or PyPI.
  • Familiarity with CI/CD systems such as GitHub Actions, GitLab CI, or related platforms, along with the use of Docker images to facilitate workflows.
  • Familiarity with debugging the output of build systems and compilers to resolve issues in complex build environments involving modern C++, CUDA, and/or Python libraries.
  • Experience with software libraries or open-source projects, including testing, performance profiling, and code reviews.
  • Ability to work independently and drive a project from exploration to completion.
  • Clear written communication for design discussions and documentation.

Nice To Haves

  • Knowledge of CPU/GPU architecture and how hardware details impact algorithmic performance.
  • Familiarity with binary library compilation, linking, packaging, distribution, ABI compatibility and deployment strategies on Linux and/or Windows.
  • Familiarity with compiler infrastructure and tooling such as LLVM, Clang/LLVM tooling, or MLIR.
  • Comfort navigating and debugging large, multi-language codebases (C++, Python, CMake, GitHub Actions CI systems).
  • Demonstrated interest in developer tools, library design, developer experience, and making other developers faster and more productive.

Responsibilities

  • Decomposing and modularizing build processes for reusability across multiple projects.
  • Partnering with engineering teams to ensure their code can be built and deployed across the Conda and PyPI ecosystems on Linux and Windows.
  • Developing robust, modern approaches to packaging compiled code, Python wrappers, and their dependencies to get CUDA-enabled packages into the hands of users.
  • Designing CI pipelines that enable rapid building and testing of new code to improve development velocity, while intelligently sampling architecture, OS, and GPU coverage.
  • Collaborating with expert CUDA engineers, participating in design reviews, code reviews, and open-source-style workflows.

Benefits

  • You will also be eligible for Intern benefits.