Zipline builds and operates fleets of delivery drones to get medicine to the people who need it, fast, regardless of where they live. To power this, the software team is building long-term, scalable solutions that let us expand rapidly while empowering our world-class distribution centers to serve their customers as quickly as possible.

Zipline’s security problems aren’t “website got pwned” problems (though those exist too). They’re “real-world autonomy + robotics + global operations + cloud software + regulated/health-adjacent workflows” problems. You’ll partner deeply with software, infrastructure, and (where relevant) embedded/autonomy teams to reduce real risk in real systems. We have a large attack surface.

Our ideal candidate works well in startup environments, wears many hats, and collaborates across engineering disciplines. You’ll join a small, high-ownership security team with significant influence over how we scale.

A note on our modern reality and agentic tooling: engineering teams are increasingly adopting LLM copilots and agentic tools to move faster. That’s useful, until an “assistant” becomes an unmonitored automation path to secrets, sensitive data, or privileged actions. (Think: “obedient intern with production credentials.”) Industry guidance is converging on practical frameworks like the NIST AI Risk Management Framework (including a profile for generative AI) and the OWASP Top 10 for LLM Applications, which explicitly calls out risks like prompt injection, insecure plugin design, and excessive agency. In this role, you’ll help Zipline safely leverage these tools while containing them so they don’t quietly rewrite the threat model.

This is a hybrid, onsite role: you will frequently have conversations in person at our HQ in South San Francisco.
Job Type: Full-time
Career Level: Mid Level
Education Level: No Education Listed
Number of Employees: 1,001-5,000 employees