CPU Storage Tech Lead

OpenAI · San Francisco, CA

About The Position

We are seeking a CPU & Storage Technical Lead to define and drive the server compute and storage architecture strategy for Stargate infrastructure. In this role, you will own technical direction across CPU platforms, memory configurations, local and disaggregated storage systems, and their integration into large-scale AI clusters. You will evaluate vendor roadmaps, lead platform tradeoff decisions, and ensure compute and storage systems are optimized for training, inference, and supporting services. You will work cross-functionally with hardware engineering, performance modeling, networking, supply chain, and deployment teams, as well as external partners such as AMD, Intel, OEMs, ODMs, and storage vendors. This is a highly strategic role for someone who can operate deeply at the component level while also driving long-range infrastructure decisions.

Requirements

  • Bachelor’s degree in Computer Engineering, Electrical Engineering, Computer Science, or related technical field; advanced degree preferred.
  • 10+ years of experience in server hardware, systems architecture, data center infrastructure, or hyperscale compute platforms.
  • Deep expertise in modern CPU architectures (x86, ARM, accelerator host systems) and server platform design.
  • Strong understanding of memory systems, PCIe/CXL fabrics, NUMA behavior, and platform-level performance constraints.
  • Experience with storage systems including NVMe, SSD qualification, RAID, distributed storage, object/file systems, or high-performance data pipelines.
  • Experience evaluating hardware tradeoffs across performance, cost, power, thermals, and supply availability.
  • Familiarity with GPU clusters and AI training/inference infrastructure strongly preferred.
  • Experience working directly with OEMs, ODMs, silicon vendors, or storage vendors.
  • Strong systems thinking with ability to connect component decisions to fleet-level outcomes.
  • Excellent communication skills with the ability to influence engineering and executive stakeholders.
  • Proven ability to operate in fast-moving, ambiguous environments with high ownership.

Nice To Haves

  • Experience designing infrastructure for large-scale AI or HPC environments.
  • Familiarity with CPU vendor roadmaps across AMD, Intel, and ARM ecosystems.
  • Experience with distributed storage architectures supporting GPU clusters.
  • Knowledge of fleet operations, hardware lifecycle management, and production deployments at scale.
  • Prior experience in hyperscale cloud, AI infrastructure, or advanced compute environments.

Responsibilities

  • Own CPU and storage technical strategy for Stargate compute infrastructure across current and future generations.
  • Evaluate CPU platforms across performance, efficiency, memory bandwidth, PCIe topology, cost, and roadmap alignment.
  • Define storage architectures for AI environments, including boot media, local NVMe, shared storage, caching tiers, metadata services, and high-performance data pipelines.
  • Drive server platform decisions involving CPU, memory, NIC, GPU, and storage subsystem integration.
  • Partner with performance modeling teams to quantify tradeoffs across compute, memory, I/O, and storage bottlenecks.
  • Work with silicon and hardware vendors on roadmap influence, feature requests, qualification plans, and technical escalations.
  • Lead bring-up and validation efforts for new CPU and storage platforms in lab and production environments.
  • Partner with networking and cluster architecture teams to optimize end-to-end node design and data movement.
  • Support supply chain and sourcing teams with technical vendor assessments and second-source strategies.
  • Drive reliability, serviceability, and fleet lifecycle planning for compute and storage platforms.
  • Translate future AI workload requirements into infrastructure platform specifications.
  • Provide technical leadership across cross-functional stakeholders and executive reviews.