Senior Product Architect, Storage

NVIDIA - Santa Clara, CA

About The Position

NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation fueled by phenomenal technology and amazing people. Today, we’re tapping into the unlimited potential of AI to define the next era of computing: an era in which our GPU acts as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.

As an AI Storage Platform Architect at NVIDIA, you will be the linchpin between cutting-edge hardware platforms and real-world AI deployments, translating the capabilities of Rubin GPUs, Vera CPUs, BlueField DPUs, NVLink fabric, and Spectrum-X networking into validated, production-ready blueprints. You will work hand-in-hand with storage ecosystem partners to co-develop reference architectures for the NVIDIA AI Data Platform and beyond, ensuring that every layer of the stack (compute, fabric, memory, and storage) is optimized for modern AI workloads!

Requirements

  • 12+ years of experience architecting datacenter-scale AI, HPC, or storage infrastructure as a Principal Architect, Solutions Architect, Principal Engineer, or equivalent.
  • Bachelor’s degree in Computer Science or a related field (or equivalent experience).
  • Deep expertise in building AI infrastructure, including disaggregated inference architectures, LLM training pipelines, and autonomous AI system patterns.
  • Hands-on experience with RDMA (RoCEv2/InfiniBand), high-performance storage protocols (NVMe-oF, GPFS, Lustre, or S3-compatible object storage), and low-latency fabric design.
  • Strong understanding of KV Cache management strategies, including tiered memory/storage hierarchies for inference optimization.
  • Familiarity with Retrieval-Augmented Generation (RAG) architectures and the storage, indexing, and retrieval patterns they demand at scale.
  • Experience with NVIDIA DOCA or equivalent DPU/SmartNIC programming frameworks for offloading data plane and storage services.
  • Proven foundation in networking: Spectrum-X Ethernet, InfiniBand, NVLink Switch fabrics, congestion control, and datacenter topologies.

Nice To Haves

  • Proven experience designing reference architectures jointly with storage or infrastructure OEM partners (e.g., NetApp, DDN, VAST, Pure Storage, Dell, or similar).
  • Hands-on deployment experience with disaggregated inference systems, including prefill/decode separation, KV Cache offload, and request routing.
  • Deep familiarity with NVIDIA Grace-Hopper, Grace-Blackwell, or upcoming Vera-Rubin platforms and their system-level implications for AI workloads.

Responsibilities

  • Architect end-to-end reference architectures for disaggregated inference (aligned with NVIDIA Dynamo), large-scale foundation model training, and agentic AI pipelines — co-developed with storage and ecosystem partners.
  • Design and validate storage-optimized AI infrastructure, including KV Cache tiering strategies, checkpoint acceleration, and high-throughput dataset pipelines that leverage RDMA and NVMe-oF fabrics.
  • Define system-level architectures spanning Rubin GPUs, Vera CPUs, BlueField DPUs, NVLink interconnects, and Spectrum-X Ethernet to improve efficiency across the full AI lifecycle.
  • Develop and publish reference architectures, whitepapers, and deployment guides for the NVIDIA AI Data Platform and partner-integrated solutions.
  • Drive prototyping, benchmarking, and performance validation of AI infrastructure at scale - diagnosing bottlenecks across compute, networking, and storage layers.
  • Leverage DOCA to architect DPU-offloaded data services, including storage acceleration, telemetry, security enforcement, and network virtualization.
  • Collaborate with RAG and autonomous AI teams to build retrieval-optimized storage architectures, including vector database integration, low-latency object access patterns, and inference-aware caching.
  • Partner with customers and ecosystem collaborators to co-innovate and deliver proofs of concept (POCs) and MVPs that demonstrate end-to-end AI platform performance leadership.

Benefits

  • You will be eligible for equity and benefits.