Senior or Staff Perception CV/ML SWE - Onboard Autonomy

Zipline
South San Francisco, CA
$180,000 - $265,000

About The Position

Zipline operates the world's largest autonomous logistics network, delivering critical medical and commercial goods globally with high reliability, precision, and scale. As we expand into increasingly complex, safety-critical environments, the systems behind our autonomy stack must be robust, adaptable, and deeply integrated, especially at the intersection of perception and deployment.

We're hiring senior and staff perception engineers to join our Droid team, the group responsible for the autonomy that powers Zipline's backyard delivery experience. This team owns the full stack of onboard, offboard, and cloud-side perception systems enabling precise and reliable delivery in complex customer backyards. You'll build real-time 3D perception models that capture geometry, scene semantics, and preferences for both delivery and package pickup. You'll develop across the entire perception stack, from optimizing the onboard TensorRT engines to building a data flywheel that surfaces interesting samples from our long tail of customer deliveries.

You will work closely with the planner team to make sure we build the right system, rather than just the best perception model. This is not a research role: you'll be expected to move fast, ship production-grade systems, and find clever ways to apply state-of-the-art techniques to tangible, high-impact problems.

Requirements

  • 5+ years of experience building and deploying deep-learning-based perception systems, particularly in 3D geometry, semantic understanding, or mapping from cameras
  • Strong understanding of classical computer vision (e.g. camera calibration, epipolar geometry, structure-from-motion, SGBM stereo) and the ability to blend it with modern ML approaches
  • Expertise and depth in robotics fundamentals: you should be able to reason about reference frames, matrix math, the SE(3) manifold, and probabilistic sensor fusion
  • Hands-on experience training, iterating on, and optimizing CNN and transformer architectures on target hardware: think NVIDIA Jetson-sized compute
  • An engineering mindset focused on outcomes over experimentation: you know how to prioritize what's good enough to ship now and what needs to be architected for scale later
  • Familiarity with building training, data annotation, and evaluation pipelines, not just models
  • Comfort working across systems: jumping into data pipelines, training infrastructure, or debugging distributed training issues as needed
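To make the robotics-fundamentals bullet concrete: the sketch below applies a rigid-body transform in SE(3), expressed as a rotation matrix and translation vector, to move a point between reference frames. The camera-to-body extrinsics are hypothetical values chosen purely for illustration, not anything specific to Zipline's platform.

```python
import math

def rot_z(theta):
    """Rotation matrix for angle theta (radians) about the z axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def se3_apply(R, t, p):
    """Apply the rigid transform (R, t) to point p: p' = R @ p + t."""
    return [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]

# A landmark observed 2 m ahead of the camera, expressed in the camera frame.
p_cam = [2.0, 0.0, 0.0]

# Hypothetical camera-to-body extrinsics: camera yawed 90 degrees relative to
# the body frame, mounted 0.1 m above the body origin.
R_body_cam = rot_z(math.pi / 2)
t_body_cam = [0.0, 0.0, 0.1]

# The same landmark expressed in the body frame: approximately [0.0, 2.0, 0.1].
p_body = se3_apply(R_body_cam, t_body_cam, p_cam)
```

Reasoning about chains of such transforms (camera to body, body to world) is the day-to-day version of the "reference frames and matrix math" this role asks for.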

Nice To Haves

  • Experience deploying models in real-world, high-stakes robotics or autonomy applications is a strong plus: a robot will move based on the outputs of your perception system

Responsibilities

  • Implement, train and evaluate real-time 3D perception models that work with two or more cameras across one or more timesteps
  • Run these models onboard a resource-constrained computer, finding ways to optimize and reduce compute and memory footprints
  • Build visualization, introspection and eval tooling to deeply understand model performance both on test datasets as well as “in the wild”
  • Help design and implement data selection pipelines that identify the most valuable data from the field, then help our annotation teams label it faster by prelabeling or pseudo-ground-truthing these samples
  • Work closely with the Droid planner team, building a strong interface between the two subsystems and tracking the right metrics to ensure we’re always hill-climbing towards a better overall system
  • Stay up to date with research in the field, drive experimentation, and help keep Zipline’s modeling stack in lockstep with powerful new paradigms in real-time compute-constrained 3D perception
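One common approach to the data-selection responsibility above is uncertainty-based mining: rank field samples by how unsure the model was and send the most uncertain to annotation. A minimal sketch, assuming predictive entropy as the signal; the function names and sample data are hypothetical, for illustration only.

```python
import math

def entropy(probs):
    """Shannon entropy (nats) of a categorical distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def select_hard_samples(predictions, k):
    """Return the k sample ids whose predicted class distributions are
    most uncertain (highest entropy); these are prioritized for labeling.

    predictions: dict mapping sample id -> list of class probabilities.
    """
    ranked = sorted(predictions, key=lambda sid: entropy(predictions[sid]),
                    reverse=True)
    return ranked[:k]

preds = {
    "easy": [0.98, 0.01, 0.01],  # confident prediction -> low entropy
    "hard": [0.34, 0.33, 0.33],  # near-uniform -> high entropy
    "mid":  [0.70, 0.20, 0.10],
}
# Picks the two most uncertain samples: ["hard", "mid"].
chosen = select_hard_samples(preds, 2)
```

In practice the ranking signal might instead be ensemble disagreement, planner-reported anomalies, or geometric inconsistency across timesteps, but the select-then-prelabel loop is the same.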