About The Position

We are seeking a visionary Director of Product Management to own technical product strategy, bridging the gap between cutting-edge AI research and enterprise-scale business impact. Leveraging our deep expertise in AI/ML and security, you will lead the definition and delivery of large-scale, production-grade AI/ML solutions for our Secure Access Service Edge (SASE) platform. This is a high-impact leadership opportunity to shape the AI transformation of a market-leading cloud security company. The successful candidate will serve as a primary technical and strategic pillar for the organization, with deep expertise in applying AI/ML technologies to complex security problems. You will have the platform to build a scalable 'AI Engine', working alongside top-tier engineers, researchers, and machine learning scientists to solve today's most challenging problems in AI latency, scalability, and cloud security.

Requirements

  • 12-15+ years of industry experience (or an equivalent combination of an advanced management degree and experience) defining and bringing to market AI/ML solutions for large-scale security products or services.
  • Fluency in the modern AI stack, including proven experience deploying cutting-edge LLMs in production environments and deep knowledge of vLLM, SGLang, and KV cache optimization.
  • Ability to translate complex technical concepts between CXOs, non-technical stakeholders, and data scientists.
  • Energetic self-starter with a true startup spirit, demonstrated ability to influence without authority, and the willingness to wear multiple hats to deliver end-to-end solutions in a dynamic, fast-paced environment.

Responsibilities

  • Define the AI Transformation Roadmap: Drive the overarching AI/ML technical strategy, ruthlessly prioritizing architectural choices to ensure highly scalable, reliable, and production-grade systems.
  • Define and Bring to Market Production-Grade Inference Systems: Architect highly scalable AI/ML inference systems, leveraging the latest LLM serving technologies such as vLLM, SGLang, and advanced KV cache optimization to maximize throughput and minimize latency.
  • Lead the End-to-End AI Lifecycle: Collaborate with ML scientists, engineers, and executive stakeholders to translate complex business requirements into industrialized enterprise solutions.
  • Establish Rigorous AI Evaluation: Create strict 'Report Cards' for AI models in production, ensuring models are robust, scalable, and well-documented by measuring accuracy, latency, and security relevance before deployment.

Benefits

  • Comprehensive health plan