About The Position

Zipline is building the world's largest autonomous logistics system, delivering vital medical and commercial supplies across the globe. As we scale our operations and extend into more complex, safety-critical environments, the ability to validate and prove our autonomy performance at scale becomes absolutely essential.

We're looking for a deeply technical software engineer to join the Autonomy Validation team — a core team responsible for building the infrastructure, tools, and frameworks that support software validation across the entire Autonomy organization. This includes planning, perception, control, and all of the decision-making logic that powers our self-flying aircraft. Our autonomy stack is highly custom-built, and while that gives us unmatched control and performance, it also means standard out-of-the-box testing tools don't work.

In this role, you'll take ownership of developing robust internal platforms for validation — enabling both simulation at scale and rigorous scenario testing — so that autonomy engineers can ship with confidence. This is not a QA role. It's a foundational software engineering position building critical systems that will directly shape how we test, verify, and deploy autonomy safely around the world.

Requirements

  • 5+ years of experience building software systems for simulation, testing, or safety validation — ideally in robotics, autonomy, aerospace, or other real-time, safety-critical domains
  • Strong software engineering skills with proficiency in C++, Rust, or C (Python is a plus for tooling and scripting)
  • Hands-on experience building or working with simulation systems for robotics or autonomous systems, particularly for testing or validation
  • Experience building tools, platforms, or infrastructure used by other engineers
  • Understanding of validation methodologies, including defining metrics, evaluating system behavior, and testing complex electromechanical systems

Nice To Haves

  • Experience with high-fidelity simulation or scenario generation frameworks
  • Experience with large-scale or distributed systems (e.g., cloud infrastructure, Kubernetes, AWS)
  • Familiarity working with systems engineering concepts (requirements, safety constraints, metrics)
  • Exposure to autonomy stacks (planning, perception, control)
  • Some exposure to machine learning systems (not required)

Responsibilities

  • Build and own the infrastructure for validating autonomous features and system performance across planning, perception, and control.
  • Design and develop simulation tools and scenario generation for large-scale and high-fidelity testing of autonomy under real-world and edge-case conditions.
  • Develop custom tools for validation and testing of autonomy components that plug into our internal autonomy development stack.
  • Create scalable, reliable, and automated solutions for running, tracking, and analyzing thousands of validation tasks across the Autonomy organization.
  • Collaborate deeply with Autonomy engineers to understand how system behavior should be evaluated.
  • Collaborate deeply with Systems and Data engineers to ensure metrics, safety thresholds, and requirements are codified in test infrastructure.
  • Collaborate deeply with Flight Test and QA teams to connect real-world results back into test tools and CI loops.
  • Establish best practices for software release validation, helping ensure our autonomy stack is safe, measurable, and production-ready.
  • Contribute to internal docs, standards, and validation workflows that scale across the autonomy organization.
  • Define validation methodologies and metrics, determining how new autonomy features should be evaluated and measured in simulation.

Benefits

  • Equity compensation
  • Overtime pay
  • Discretionary annual or performance bonuses
  • Sales incentives
  • Medical, dental, and vision insurance
  • Paid time off