Qualcomm · Posted about 1 month ago
Full-time • Mid Level
San Diego, CA
5,001-10,000 employees
Computer and Electronic Product Manufacturing

Qualcomm Overview: Qualcomm is a company of inventors that unlocked edge AI and connected computing, ushering in an age of rapid acceleration in connectivity and new possibilities that will transform industries, create jobs, and enrich lives. But this is just the beginning. It takes inventive minds with diverse skills, backgrounds, and cultures to transform high-performance AI and connected computing potential into world-changing technologies and products. This is the Invention Age, and this is where you come in.

Job Overview: The Qualcomm Memory System/Technology Team in the Process & Package Solutions Group has an opening in the area of mapping AI workloads to memory-centric compute systems for data center, mobile, compute, and XR. The candidate will develop new algorithms and architectures that move computation closer to, or directly into, memory to overcome the performance bottleneck of data transfer between processors and memory. The candidate should have good knowledge of bus and compute fabrics as well as state-of-the-art AI models such as LLMs, transformers, and CNNs. This role involves creating and optimizing AI models for memory-centric systems by designing algorithms and hardware modules that improve the speed, power efficiency, and scalability of those AI workloads. The position offers the opportunity to work across multiple organizations, including the process and packaging team, AI and compute architects, the memory controller team, the global SoC team, and the emulation team. Providing timely feedback and communicating architecture and design trade-offs to the team is essential.

  • Architect, design, and implement scalable AI/ML computing infrastructure for data center, compute, and mobile
  • Develop and optimize AI algorithms for memory-centric compute systems to improve performance and power
  • Co-optimize compute, memory, and interconnect fabric allocation to improve scale-up and scale-out metrics
  • Develop and validate algorithms in Python, C, etc., for improved throughput, latency, power, and energy
  • Use state-of-the-art modeling tools and numerical analysis techniques for performance modeling
  • Bachelor's degree in Science, Engineering, or related field and 8+ years of ASIC design, verification, validation, integration, or related work experience.
  • OR
  • Master's degree in Science, Engineering, or related field and 7+ years of ASIC design, verification, validation, integration, or related work experience.
  • OR
  • PhD in Science, Engineering, or related field and 6+ years of ASIC design, verification, validation, integration, or related work experience.
  • Experience in computer and memory architectures
  • Good knowledge of AI models such as CNNs, RNNs, transformers, LLMs, multi-modal AI, and AI agents
  • Good knowledge of dataflow, memory, and bus protocols
  • Knowledge of tensor cores, near-memory computing, and high-bandwidth memories such as HBM
  • Knowledge of memory controller design
  • Proficiency with performance modeling tools
  • Good knowledge of memory architecture, scalar processors, and 2.5D/3D integration
  • Master's or Ph.D. in Electrical Engineering, Computer Science, or a related field
  • Good communication skills
  • Strong teamwork
  • Strong problem-solving and analytical skills
  • Ability to work independently and as part of a team
  • Familiarity with in-memory and near-memory computing
  • Experience with a programming language (C/C++/Python) or scripting language (Perl/Python)
  • Familiarity with DRAM datasheets and I/O interfaces
  • We also offer a competitive annual discretionary bonus program and opportunity for annual RSU grants (employees on sales-incentive plans are not eligible for our annual bonus). In addition, our highly competitive benefits package is designed to support your success at work, at home, and at play.