Zendar is looking for a Senior Machine Learning Research Engineer (Multi-Sensor Fusion Perception & Foundation Models) to join our Berkeley office. Zendar develops one of the leading 360-degree radar-based perception systems for the automotive industry. We’re now expanding our capabilities to deliver full-scene perception outputs using early fusion of camera and radar, and scaling these technologies across both the automotive and robotics industries. We are not bogged down by legacy systems, and by joining us you’ll have the opportunity to define and own a next-generation perception stack that enables reliable autonomy at scale.

About Zendar:
Autonomous vehicles need to understand the world around them not only in bright daylight, but also at night, in fog or rain, or when the sun is shining right in your face. At Zendar, we make this possible by developing the highest-resolution, most information-rich radar in the world. What makes radar powerful - its long wavelength, which makes it robust to all sorts of weather and lighting conditions - also makes it challenging to work with. We have used our deep understanding of radar physics to build radar perception models that deliver a rich and complete understanding of the environment around the AV, from free space to object detections to road structure. Check out what our technology can do here - all produced with only radar information, no camera and no lidar!

Zendar has a diverse and dynamic team of hardware, machine learning, signal processing, and software engineers with deep backgrounds in sensing technology. We have a global team of 60, distributed across our sites in Berkeley, Lindau (Germany), and Paris. Zendar is backed by Tier-1 VCs, has raised more than $50M in funding, and has established strong partnerships with industry leaders.

Your Role:
Zendar’s Semantic Spectrum perception technology extracts a rich scene understanding from radar sensing.
Our next goal is to build a foundation-model-driven perception stack that fuses streaming camera and radar to produce full perception outputs that are robust enough for real-world autonomy: occupancy/free-space (e.g., occupancy grids), object detection and tracking, lane line and road structure estimation, and the interfaces required to make these outputs actionable for downstream systems.

We are seeking an experienced Senior ML Engineer to design, implement, and drive the architecture of these models end-to-end, including training from scratch on large-scale datasets (not just fine-tuning), defining evaluation and long-tail validation, and partnering with platform and product teams to ensure successful deployment in real-time systems.

This is an ideal position for an engineer who enjoys owning hard technical problems, making rigorous tradeoffs, and building systems that work reliably in the messy long tail of the real world. In this role you will collaborate closely with platform, embedded, and robotics teams. You will work with our real-world dataset spanning tens of thousands of kilometers collected across multiple continents and geographies, and you will have opportunities to validate results on real vehicles.
Job Type: Full-time
Career Level: Mid Level
Education Level: Ph.D. or professional degree
Number of Employees: 51-100 employees