Senior or Staff MLE - Droid Perception (Offboard)

Zipline
South San Francisco, CA

About The Position

Zipline operates the world’s largest autonomous logistics network, delivering critical medical and commercial goods globally with high reliability, precision, and scale. As the company expands into increasingly complex, safety-critical environments, its autonomy systems must be robust, adaptable, and deeply integrated, especially at the intersection of perception and deployment.

This role is for a senior or staff perception engineer on the Droid team, which is responsible for the autonomy powering Zipline’s backyard delivery experience. The team owns the full stack of offboard and cloud-side perception systems that inform, validate, and augment onboard autonomy. The work involves generating rich 3D and semantic priors from aerial survey data and learning customer preferences and terrain features at scale, defining how Zipline aircraft prepare for mission-critical deliveries in complex, real-world environments.

This is a production-focused role: you will rapidly develop and deploy systems, applying state-of-the-art techniques to tangible, high-impact problems.

Requirements

  • 5+ years of experience building and deploying deep learning-based perception systems, particularly in 3D geometry, semantic understanding, or mapping from remote sensing data.
  • Strong understanding of classical computer vision (e.g. camera calibration, epipolar geometry, structure-from-motion) and the ability to blend it with modern ML approaches.
  • Hands-on experience training, iterating on, and optimizing CNN and transformer architectures in production environments.
  • An engineering mindset focused on outcomes over experimentation: you know how to prioritize what's good enough to ship now and what needs to be architected for scale later.
  • Familiarity with building training, data annotation, and evaluation pipelines—not just models.
  • Comfort working across systems: jumping into data pipelines, training infrastructure, or debugging distributed training issues as needed.

Nice To Haves

  • Experience deploying models in real-world, high-stakes robotics or autonomy applications is a strong plus.

Responsibilities

  • Own the design and implementation of cloud-side autonomy pipelines that directly support and scale our onboard perception stack.
  • Leverage satellite imagery, aerial surveys, and structured data to build semantic and geometric world models of customer delivery zones.
  • Design and ship tools that predict deliverability, generate high-fidelity priors, and reduce the operational friction of onboarding new customers in new environments. You’ll step in where on-vehicle capabilities alone can’t solve the problems required to scale the product.
  • Train and deploy mid- to large-scale models for semantic segmentation, 3D geometry, and learned preference modeling.
  • Design evaluation and validation infrastructure to ensure models behave reliably in the field.
  • Work across engineering to integrate your work into fleet-facing autonomy systems.
  • Lead architectural decisions, drive experimentation, and help the team push the limits of what’s possible with production-grade perception at scale.

Benefits

  • Equity compensation
  • Overtime pay
  • Discretionary annual or performance bonuses
  • Sales incentives
  • Medical, dental, and vision insurance
  • Paid time off


What This Job Offers

  • Job Type: Full-time
  • Career Level: Senior
  • Education Level: No Education Listed
  • Number of Employees: 501-1,000
