AI/ML ASIC Architect

Sandisk, Milpitas, CA
Onsite

About The Position

In this AI/ML ASIC Architecture position, you will develop advanced system architectures and AI/ML accelerator ASIC architecture specifications, based on AI Storage Solutions, for Sandisk's next-generation products. You will drive, initiate, and analyze the frontend architecture of the AI/ML accelerator product. As an AI/ML ASIC Architect, you will help drive new architecture initiatives that leverage state-of-the-art frontend interfaces such as UCIe, PCIe, and CXL to integrate AI Storage Solutions with xPUs in a 3D package system. You will exercise your technical expertise and excellent communication skills to collaborate with design and product planning teams, with an eye toward delivering innovative and highly competitive adaptive accelerator solutions. Typical activities include writing architecture specifications, working with other architects on the team, and working with RTL/DV/Simulation/Emulation/FW teams to evaluate architectural changes and assess the performance, power, area, and endurance of the product. You will work closely with excellent colleague engineers, tackle complex challenges, innovate, and develop products that will change the data-centric architecture paradigm.

Requirements

  • Bachelor's, Master's, or PhD in Computer/Electrical Engineering with 10+ years of hands-on architecture experience authoring specifications
  • Strong technical background architecting ASIC, SoC, or I/O subsystems involving PCIe/UCIe/CXL and DMA engines
  • Knowledge of I/O Subsystem and DMA interactions with internal embedded processor-subsystems (x86, RISC-V or ARM) and external host CPU
  • Good understanding of computer/graphics architecture, ML, and LLMs
  • Experience architecting GPU/TPU/xPU accelerator systems with optimized high-bandwidth memory hierarchies and frontend architecture for multi-trillion-parameter LLM training/inference, including dense and Mixture of Experts (MoE) models with multiple modalities (text, vision, speech)
  • Familiarity with KV cache optimization, FlashAttention, and Mixture of Experts
  • Deep experience optimizing large-scale ML systems and GPU architectures
  • Proficiency in principles and methods of microarchitecture, software, and hardware relevant to performance engineering
  • Knowledge of ARM Processors and AXI Interconnects

Nice To Haves

  • Familiarity and background in UCIe, CXL, NVLink, or UAL microarchitecture and protocols is a plus
  • Familiarity with High-speed networking: InfiniBand, RDMA, NVLink is a plus
  • Expert knowledge of transformer architectures, attention mechanisms, and model parallelism techniques
  • Multi-disciplinary experience, including familiarity with Firmware and ASIC design
  • Expertise in CUDA programming, GPU memory hierarchies, and hardware-specific optimizations
  • Proven track record architecting large-scale distributed training systems
  • Previous experience with NVMe storage systems, protocols, and NAND flash is a plus

Responsibilities

  • Responsible for driving the AI/ML ASIC architecture that integrates the AI Storage with GPU/TPU/xPU accelerators, with a particular focus on I/O subsystems connected over UCIe/PCIe/CXL
  • Author architecture specifications in clear and concise language for xPU-based AI/ML accelerators using AI Storage Solutions.
  • Define I/O subsystem and PCIe DMA architectures, including their interactions with internal embedded processor-subsystems, Network on Chip, Memory controllers, and FPGA fabric.
  • Create flexible and modular I/O subsystem architectures that can be deployed in chiplet, monolithic, or 3D form factors.
  • Work with customers and cross-functional teams to scope SoC requirements, analyze PPA tradeoffs, and then define architectural requirements that meet the PPA and schedule targets.
  • Define SoC subsystem and DMA hardware, software, and firmware interactions with embedded processing subsystems and SoC CPUs on the device side, and with host CPUs.
  • Guide and assist pre-silicon design/verification and post-silicon validation during the execution phase.
  • Responsible for improving the AI/ML ASIC Architecture performance through hardware & software co-optimization, post-silicon performance analysis, and influencing the strategic product roadmap.
  • Analyze and characterize LLM workloads on our ASIC and on competitive datacenter AI solutions to identify opportunities for performance improvement in our products.
  • Architect one or more components of AI/ML accelerator ASICs, such as HBM, PCIe/UCIe/CXL, NoC, DMA, firmware interactions, NAND, xPU, and fabrics
  • Drive the AI Storage Solutions frontend system architecture with GPU/TPU/NPU/xPU to match or exceed next-generation HBM bandwidth
  • Architect memory-efficient inference/training systems using techniques such as pruning, quantization with MX formats, continuous batching/chunked prefill, and speculative decoding
  • Collaborate with internal and external stakeholders and ML researchers to disseminate results and iterate at a rapid pace

Benefits

  • We offer a comprehensive package of benefits including paid vacation time; paid sick leave; medical/dental/vision insurance; life, accident, and disability insurance; tax-advantaged flexible spending and health savings accounts; an employee assistance program; other voluntary benefit programs such as supplemental life and AD&D, legal plan, pet insurance, critical illness, accident and hospital indemnity; tuition reimbursement; transit; the Applause Program; an employee stock purchase plan; and the Sandisk Savings 401(k) Plan.