We are seeking highly motivated individuals with a strong background in deep learning (DL) to design state-of-the-art low-level perception (LLP) and end-to-end autonomous driving (AD) models, with a focus on achieving accuracy-latency Pareto optimality. The role involves keeping up with state-of-the-art research in this field and deploying networks on the Qualcomm Ride platform for L2/L3 Advanced Driver Assistance Systems (ADAS) and autonomous driving. The ideal candidate is well-versed in recent advances in Vision Transformers (cross-attention, self-attention), lifting 2D image features into Bird's Eye View (BEV) space, and their application to multi-modal fusion.

This position offers extensive opportunities to collaborate with the advanced R&D teams of leading automotive Original Equipment Manufacturers (OEMs) as well as Qualcomm's internal stack teams. The team is responsible for optimizing the speed, accuracy, power consumption, and latency of deep networks running on Snapdragon Ride AI accelerators.

A thorough understanding of machine learning algorithms, particularly those relevant to automotive use cases (autonomous driving, vision, and LiDAR processing), is essential. Research experience in efficient network design, Neural Architecture Search (NAS) techniques, network quantization, and pruning is highly desirable. Strong communication and interpersonal skills are required, and the candidate must be able to work effectively with various horizontal AI teams.