Senior Staff Engineer

DataDirect Networks
Remote

About The Position

This is an incredible opportunity to be part of a company that has been at the forefront of AI and high-performance data storage innovation for over two decades. DataDirect Networks (DDN) is a global market leader renowned for powering many of the world's most demanding AI data centers, in industries ranging from life sciences and healthcare to financial services, autonomous vehicles, government, academia, research, and manufacturing.

"DDN's A3I solutions are transforming the landscape of AI infrastructure." – IDC

"The real differentiator is DDN. I never hesitate to recommend DDN. DDN is the de facto name for AI storage in high-performance environments." – Marc Hamilton, VP, Solutions Architecture & Engineering, NVIDIA

DDN is the global leader in AI and multi-cloud data management at scale. Our cutting-edge data intelligence platform is designed to accelerate AI workloads, enabling organizations to extract maximum value from their data. With a proven track record of performance, reliability, and scalability, DDN empowers businesses to tackle the most challenging AI and data-intensive workloads with confidence. Our success is driven by our unwavering commitment to innovation, customer-centricity, and a team of passionate professionals who bring their expertise and dedication to every project. This is a chance to make a lasting impact at a company that is shaping the future of AI and data management.

DDN is seeking a highly experienced Senior Staff Engineer specializing in AI Data Path & Storage to lead hands-on development and integration of advanced storage systems with next-generation AI inference pipelines.
This role involves coding, prototyping, and rapidly iterating on solutions in close collaboration with architects to design and deliver high-performance data movement architectures. You will leverage NVIDIA’s NIXL (Inference Transfer Library) alongside the Infinia Data Intelligence Platform to enable ultra-low-latency, high-throughput data movement across GPU, memory, and distributed storage layers, including workloads involving KV cache management and vector database retrieval. The ideal candidate brings deep expertise in distributed storage, GPU data paths, and large-scale system optimization, with a proven track record of building and shipping production-grade AI infrastructure.

Requirements

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
  • 12+ years of experience in storage systems, distributed systems, or performance engineering.
  • Proven track record of architecting and delivering large-scale, high-performance infrastructure systems.
  • Deep expertise in distributed storage architectures (object storage, scalable file systems, or cloud-native storage platforms).
  • Strong understanding of Linux I/O stack, filesystem internals, and storage protocols.
  • Extensive hands-on experience with NVMe, SSD optimization, and high-performance storage environments.
  • Strong experience with RDMA, InfiniBand, or other high-speed data transfer technologies.
  • Solid understanding of GPU computing concepts and CPU–GPU data movement patterns.
  • Proficiency in Python and/or C/C++, with advanced debugging, profiling, and performance tuning skills.
  • Demonstrated ability to optimize latency-sensitive, high-throughput production systems.

Nice To Haves

  • Hands-on experience with NVIDIA NIXL or similar data movement frameworks.
  • Experience with GPU-aware storage pipelines and GPUDirect Storage.
  • Strong understanding of AI inference systems, LLM serving architectures, and KV cache optimization.
  • Experience with Retrieval-Augmented Generation (RAG) pipelines and open vector search ecosystems.
  • Background in high-performance computing (HPC) or hyperscale distributed environments.
  • Expertise in caching strategies, memory tiering, and data locality optimization.
  • Experience designing disaggregated compute and storage architectures.

Responsibilities

  • Lead the design and implementation of high-performance data movement pipelines using NVIDIA NIXL across GPU, CPU, and storage tiers.
  • Architect and drive integration of DDN Infinia with GPU-accelerated inference platforms for large-scale, real-time AI workloads.
  • Own end-to-end optimization of I/O paths between GPU memory and storage using technologies such as NVIDIA GPUDirect Storage, RDMA, and NVMe-over-Fabrics.
  • Define and implement multi-tier storage architectures (NVMe, SSD, object storage) optimized for inference latency, throughput, and scalability.
  • Lead development of advanced KV cache management strategies, including offloading, prefetching, and persistence across distributed storage layers.
  • Partner with AI/ML engineering teams to optimize inference performance in frameworks such as PyTorch and TensorFlow.
  • Establish benchmarking frameworks and lead performance tuning efforts for storage and data movement in production inference environments.
  • Diagnose and resolve complex system bottlenecks across storage, networking, and GPU subsystems.
  • Influence architecture decisions for distributed inference systems, ensuring scalability, resilience, and efficient data locality.
  • Drive engineering excellence through best practices in observability, performance monitoring, automation, and reliability engineering.
  • Mentor junior engineers and provide technical leadership across cross-functional teams.