Data Center AI SoC Architect

Qualcomm · San Diego, CA

About The Position

Qualcomm’s growing Data Center AI Architecture team is defining next-generation cloud AI data center products to serve Large Language Model and Generative AI inference workloads, which demand exceptional memory bandwidth and capacity to keep compute effectively utilized. We are seeking computer architects with experience in data center SoC architecture, especially in memory systems; reliability, availability, and serviceability (RAS); and emerging memory technologies such as processing-in-memory (PIM), 3DIC, and chiplets, to drive the analysis and development of future generations of transformational inference accelerators and their memory architectures.

In this dynamic role, you will have the opportunity to innovate, analyze, and help define multiple generations of AI accelerators and their memory system solutions at the cutting edge of the AI accelerator and memory industries. You will engage with Data Center Business Unit architects and product managers to understand product requirements; analyze accelerator and memory technologies; quantify tradeoffs; and influence the technical direction with data-driven justifications. In turn, you will work with and drive requirements to the various technology and IP core teams within Qualcomm to ensure delivery of components with the appropriate feature sets. Throughout the process, you will communicate effectively and engage collaboratively with other SoC and IP architects, designers, systems engineers, product managers, and software teams to enable market-leading data center products.

If you’d like to be a member of a collaborative, multidisciplinary architecture team within a dynamic and growing Data Center AI Architecture organization, and if you’re passionate about shaping the future of the world’s most advanced technologies, we want to hear from you!

Requirements

  • Strong fundamentals in computer architecture, spanning processors, memories, interconnects, etc.
  • Strong quantitative analysis skills and methods, with a track record of using tools such as high-level calculators and spreadsheets, profilers, and functional and performance simulators
  • End-to-end competitive analysis across architecture features, performance, power, area, cost, etc.
  • Knowledge of data center requirements in Reliability, Availability, Serviceability (RAS), ECC, security, and/or encryption, and the methodology to architect and evaluate such features
  • Background in memory systems, understanding the tradeoffs among bandwidth, latency, power, etc.
  • Understanding of DRAM architecture, memory controllers across one or more protocols such as LPDDR, HBM, DDR, GDDR, etc.
  • Ability to abstract appropriately to define problems and solutions, and make data-driven decisions
  • Excellent communication, documentation, and interpersonal skills with ability to convey proposals and interact effectively across a distributed multi-discipline organization
  • Self-driven execution, full problem ownership, and a focus on accuracy and rigorous methodology
  • Bachelor's degree in Computer Science, Engineering, Information Systems, or related field and 8+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience; OR
  • Master's degree in Computer Science, Engineering, Information Systems, or related field and 7+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience; OR
  • PhD in Computer Science, Engineering, Information Systems, or related field and 6+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience.

Nice To Haves

  • Exposure to novel memory technologies such as processing-in-memory (PIM), processing-near-memory, 3DIC, etc.
  • Exposure to interconnects, chip-to-chip and die-to-die protocols, and chiplet architectures
  • MS or PhD degree in EE/ECE/CE/CS or related field
  • 10+ years of experience in computer architecture, AI accelerators, memory architecture, memory technologies
  • Generative AI & Machine Learning workloads, especially Large Language Model inference
  • Processor architecture, including ISA design & microarchitecture
  • Exposure to one or more of: chiplets, power, thermals, PHYs, packaging