About The Position

NVIDIA seeks a Senior Deep Learning Performance Architect to join a group of pioneers who enjoy pushing the boundaries of AI inference performance. Our team focuses on ambitious hardware-software co-design to accelerate AI inference workloads. This role is an outstanding opportunity to develop world-class performance strategies, guide future GPU architecture decisions, and lead AI innovation. If you are passionate about AI efficiency Pareto curves, have a proven record of modeling LLM performance and architecting AI systems, and enjoy optimizing every cycle, this role may be perfect for you.

Requirements

  • An MS or PhD in a relevant field (CS, EE, Math) or equivalent experience, with 5+ years of relevant experience
  • Strong mathematical foundation in machine learning and deep learning
  • Expert programming skills in C, C++, and/or Python
  • Familiarity with GPU computing (CUDA or similar) and the HPC stack (MPI, OpenMP)
  • Strong knowledge and coursework in computer architecture

Nice To Haves

  • Background in systems-level performance modeling, profiling, and analysis
  • Experience characterizing and modeling system-level performance, conducting comparison studies, and documenting and publishing results
  • Background in improving AI Inference workloads by developing CUDA kernels or compilers for custom ASIC hardware

Responsibilities

  • Design novel GPU and system architectures to advance the forefront of AI Inference performance and efficiency
  • Construct, investigate, and test popular deep learning algorithms and applications
  • Understand and analyze the relationship between hardware and software architectures as it influences future algorithms and applications
  • Build efficient power and performance models of the AI inference stack, capturing minimal but significant information to guide next-generation hardware architecture
  • Collaborate across the company to guide the direction of AI, working with software, research, and product teams

Benefits

  • You will be eligible for equity and benefits.