3D Perception Engineer - Autonomy (Droid)

Zipline · South San Francisco, CA

About The Position

Zipline is the world’s largest and most experienced drone delivery service, on a mission to serve all humans equally by ensuring access to food, medicine, and essential goods anytime, anywhere. The company designs, builds, and operates the world’s largest autonomous logistics system, making millions of deliveries across four continents and completing a delivery every 30 seconds. As Zipline expands into increasingly complex, safety-critical environments, it needs robust, adaptable, and deeply integrated autonomy systems.

This role is for senior and staff perception engineers joining the Droid team, which is responsible for the autonomy powering Zipline’s backyard delivery experience. The team owns the full stack of offboard and cloud-side perception systems that inform, validate, and augment onboard autonomy. The work involves generating rich 3D and semantic priors from aerial survey data and learning customer preferences and terrain features at scale to prepare Zipline aircraft for mission-critical deliveries in complex, real-world environments. This is a production-focused role, not research: it requires shipping production-grade systems rapidly and applying state-of-the-art techniques to tangible, high-impact problems.

Requirements

  • 5+ years of experience building and deploying deep learning-based perception systems, particularly in 3D geometry, semantic understanding, or mapping from remote sensing data.
  • Strong understanding of classical computer vision (e.g. camera calibration, epipolar geometry, structure-from-motion) and the ability to blend it with modern ML approaches.
  • Hands-on experience training, iterating on, and optimizing CNN and transformer architectures in production environments.
  • An engineering mindset focused on outcomes over experimentation—you know how to prioritize what's good enough to ship now and what needs to be architected for scale later.
  • Familiarity with building training, data annotation, and evaluation pipelines—not just models.
  • Comfort working across systems: jumping into data pipelines, training infrastructure, or debugging distributed training issues as needed.

Nice To Haves

  • Experience deploying models in real-world, high-stakes robotics or autonomy applications.

Responsibilities

  • Own the design and implementation of cloud-side autonomy pipelines that directly support and scale our onboard perception stack.
  • Leverage satellite imagery, aerial surveys, and structured data to build semantic and geometric world models of customer delivery zones.
  • Design and ship tools that predict deliverability, generate high-fidelity priors, and reduce the operational friction of onboarding new customers in new environments. You’ll step in where on-vehicle capabilities alone can’t solve the problems required to scale the product.
  • Train and deploy mid- to large-scale models for semantic segmentation, 3D geometry, and learned preference modeling.
  • Design evaluation and validation infrastructure to ensure models behave reliably in the field.
  • Work across engineering to integrate your work into fleet-facing autonomy systems.
  • Lead architectural decisions, drive experimentation, and help the team push the limits of what’s possible with production-grade perception at scale.

Benefits

  • Equity compensation
  • Overtime pay
  • Discretionary annual or performance bonuses
  • Sales incentives
  • Medical insurance
  • Dental insurance
  • Vision insurance
  • Paid time off


What This Job Offers

  • Job Type: Full-time
  • Career Level: Senior
  • Education Level: None listed
  • Number of Employees: 501-1,000

© 2024 Teal Labs, Inc