About The Position

Our work at NVIDIA is dedicated to a computing model focused on visual and AI computing. For two decades, NVIDIA has pioneered visual computing, the art and science of computer graphics, with our invention of the GPU. The GPU has also proven spectacularly effective at solving some of the most complex problems in computer science. Inference is the fastest growing and most competitive area in Generative AI today. It is where AI models impact our daily lives, and where every bit of accuracy and performance matters for quality, safety, and cost. Inference is also constantly evolving, with new acceleration algorithms, use cases, and deployment techniques.

As a Product Manager MBA Intern for AI Platform Inference, you will be responsible for building the tools, SDKs, and libraries that enable developers' inference deployments to thrive on NVIDIA GPUs. As NVIDIA Product Managers, our goal is to enable developers to be successful on the NVIDIA platform and to push the boundaries of what is possible with their AI deployments! For inference, we are the champions inside NVIDIA for AI developers looking to accelerate their deployments on GPUs. We work directly with developers inside and outside the company to identify key improvements, create roadmaps, and stay alert to the inference landscape. We also work with NVIDIA leaders to define a clear product strategy, and with marketing teams to build go-to-market plans.

The Product Management organization at NVIDIA is a small, strong, and impactful group. We focus on enabling deep learning across all GPU use cases and providing great solutions for developers. We are seeking a rare blend of product skills, technical depth, and passion to make NVIDIA great for developers. Does that sound familiar? If so, we would love to hear from you!

Requirements

  • Currently pursuing an MBA with relevant experience and graduating in December 2026 or May/June 2027.
  • BS degree in Computer Science, Computer Engineering, or equivalent experience
  • Experience taking a product from early ideation to market
  • Demonstrable knowledge of GenAI or machine learning concepts (particularly performance optimization) and of software development and delivery
  • Strong communication and interpersonal skills

Nice To Haves

  • Technical expertise in AI Inference software and technology
  • Experience working on open-source and GitHub-first developer products with deep customer interaction
  • Knowledge of GPU architecture, HW/SW co-design, and performance profiling

Responsibilities

  • Analyze the product landscape for developer inference products
  • Develop product strategy and go-to-market plans for AI software
  • Collaborate with internal and external stakeholders to build product-based roadmaps for model optimization software
  • Work with leadership to align with and drive company strategy