As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. Our team of engineers and computer scientists works on projects related to high-performance embedded and edge computing, advanced numerical and scientific computing and algorithms, state-of-the-art optimizations, LLM and vision-based model acceleration, on-device performance optimization, and quantization techniques, supporting internal and external clients of Qualcomm. Our research is at the cutting edge of AI R&D, feeds directly into products and customer applications, and delivers high impact in both the research community and real-world customer deployments.

We are seeking talented Research Engineers to join our Artificial Intelligence R&D team. As a Machine Learning Engineer, you will work on a range of projects within a small technical team, conducting fundamental research that creates innovative machine learning methodology and achieves beyond state-of-the-art performance. You will be closely guided and mentored by one of our experienced researchers, providing you with an unparalleled learning experience. You will have the opportunity to publish your own research papers and to attend client meetings and academic conferences.

We support Qualcomm's mission of making the world a better place one wireless connection at a time: making AI smarter, helping cars drive better, moving data across networks faster, and extending reality. By joining our team, you can help Qualcomm Win Together, Achieve Excellence, Make the Impossible Inevitable, and Do the Right Thing.

Some examples of past and ongoing projects our team of Research Engineers have contributed to include:
- Cutting-edge compiler optimizations for new industry-leading Qualcomm AI chips.
- New approaches to, and applications of, nonconvex optimization.
- New algorithmic implementations for Qualcomm hardware, grounded in mathematical and linear algebra theory and practice, improving on state-of-the-art standard approaches.
- New approaches to accelerating inference and learning via algorithm, compiler, and hardware innovations.
- Drop-in replacements for traditional attention-based transformer architectures.
- State-of-the-art R&D in LLM inference efficiency algorithms, efficient model architecture design, and LLM training.
- Creative solutions that account for practical on-device challenges.
- Implementation and evaluation of candidate solutions in both simulation and on-device environments for GPUs, Qualcomm NPUs, and heterogeneous computing platforms, targeting image recognition, autonomous driving, and other vision-based applications.
Job Type: Full-time
Career Level: Mid Level
Number of Employees: 5,001-10,000 employees