3D Perception Engineer - Autonomy (Droid)

Zipline
South San Francisco, CA
$180,000 - $265,000

About The Position

Zipline is operating the world’s largest autonomous logistics network—delivering critical medical and commercial goods globally with high reliability, precision, and scale. As we expand into increasingly complex, safety-critical environments, the systems behind our autonomy stack must be robust, adaptable, and deeply integrated—especially at the intersection of perception and deployment.

We're hiring senior and staff perception engineers to join our Droid team, the group responsible for the autonomy that powers Zipline’s backyard delivery experience. This team owns the full stack of offboard and cloud-side perception systems that inform, validate, and augment our onboard autonomy. From generating rich 3D and semantic priors from aerial survey data to learning customer preferences and terrain features at scale, your work will define how we prepare Zipline aircraft to perform mission-critical deliveries in complex, real-world environments.

This is not a research role—you’ll be expected to move fast, ship production-grade systems, and find clever ways to apply state-of-the-art techniques to tangible, high-impact problems.

Requirements

  • 5+ years of experience building and deploying deep learning-based perception systems, particularly in 3D geometry, semantic understanding, or mapping from remote sensing data.
  • Strong understanding of classical computer vision (e.g., camera calibration, epipolar geometry, structure-from-motion) and the ability to blend it with modern ML approaches.
  • Hands-on experience training, iterating on, and optimizing CNN and transformer architectures in production environments.
  • An engineering mindset focused on outcomes over experimentation—you know how to prioritize what's good enough to ship now and what needs to be architected for scale later.
  • Familiarity with building training, data annotation, and evaluation pipelines—not just models.
  • Comfort working across systems: jumping into data pipelines, training infrastructure, or debugging distributed training issues as needed.

Nice To Haves

  • Experience deploying models in real-world, high-stakes robotics or autonomy applications.

Responsibilities

  • Own the design and implementation of cloud-side autonomy pipelines that directly support and scale our onboard perception stack.
  • Leverage satellite imagery, aerial surveys, and structured data to build semantic and geometric world models of customer delivery zones.
  • Design and ship tools that predict deliverability, generate high-fidelity priors, and reduce the operational friction of onboarding new customers in new environments—stepping in where our on-vehicle capabilities can’t solve the problems we need to solve in order to scale the product.
  • Train and deploy mid- to large-scale models for semantic segmentation, 3D geometry, and learned preference modeling.
  • Design evaluation and validation infrastructure to ensure models behave reliably in the field.
  • Work across engineering to integrate your work into fleet-facing autonomy systems.
  • Lead architectural decisions, drive experimentation, and help the team push the limits of what’s possible with production-grade perception at scale.