Software Engineering Manager - Search

Verkada
San Mateo, CA (Hybrid)

About The Position

Verkada is transforming how organizations protect their people and places with an integrated, AI-powered platform. A leader in cloud physical security, Verkada helps organizations strengthen safety and efficiency through one connected software platform that includes solutions for video security, access control, air quality sensors, alarms, intercoms, and visitor management. More than 30,000 organizations worldwide, including over 100 companies in the Fortune 500, trust Verkada as their physical security layer for easier management, intelligent control, and scalable deployments. Founded in 2016, Verkada has expanded rapidly to 15 offices and 2,200+ full-time employees.

We are hiring an Engineering Manager to lead Verkada's Search team, the group responsible for the AI-powered search and computer vision capabilities that make our camera fleet best in class for investigation and alerting. From face and person search to license plate recognition, reverse image search, and our next generation of LLM- and VLM-powered agentic experiences, the Search team owns the backend systems that let our customers find what they need across billions of frames in seconds.

As the leader of this team, you will not just be managing engineers; you will be setting the technical direction for how Verkada defines search, from embedding-based retrieval all the way to an agentic, multi-modal experience powered by modern LLMs and VLMs. You will own a portfolio of production services (APIs, inference pipelines, vector databases, etc.), drive active migrations, and hire, mentor, and scale a team of backend, frontend, and ML engineers to execute against an ambitious product roadmap while raising the bar on reliability, latency, and cost.

Requirements

  • 7+ years of software engineering experience, including 2+ years managing backend or ML engineering teams.
  • Track record leading 5+ engineers running production services at meaningful scale.
  • 4+ years of hands-on experience in at least two of: information retrieval, vector / embeddings-based search, computer vision, or large-scale recommendation systems.
  • 2+ years productionizing modern LLMs, VLMs, or agentic systems (evals, guardrails, latency/cost tuning).
  • Deep proficiency in Python plus Go/Java/C++; fluent in distributed systems (gRPC, Kafka) and a major cloud provider (AWS preferred).
  • Hands-on with at least one production vector database or search engine (OpenSearch, Turbopuffer, FAISS, Milvus, pgvector, etc.).
  • Strong operational instincts: on-call ownership, post-mortem rigor, and a habit of defining metrics and building evals to drive quality on high-availability services.

Nice To Haves

  • Experience integrating foundation models via open-source GPU inference stacks.
  • Familiarity with face recognition, person re-identification, LPR, or similar fine-grained CV pipelines.
  • Background in physical security, video surveillance, or IoT/connected-device domains.
  • CI/CD experience for ML/search services.

Responsibilities

  • Build the Team: Recruit, hire, and mentor a high-performing group of backend and ML engineers covering search infrastructure, computer vision, and applied AI.
  • Strategic Oversight: Own the end-to-end roadmap for the Search and ML engineering backend, from product-facing features like POI, LPR, and AI Search to the platform services that power them.
  • Cross-Functional Partnership: Partner closely with Product, Design, CV/ML research, Camera Firmware, and Infrastructure teams to align on priorities, dependencies, and deployment plans.
  • Agentic & Generative AI: Drive the rollout of AI-Powered Search, LLM migrations, VLM experimentation, and agentic AI into production-grade features.
  • Embeddings & Retrieval: Oversee the evolution of our vector search stack to improve recall, latency, and cost at fleet scale.
  • Model Evaluation: Lead detection evaluation, model consistency, and ongoing quality improvements for various CV and ML pipelines.
  • Service Ownership: Be accountable for a portfolio of production services, including submission endpoints, inference pipelines, APIs, and the database layer.
  • Migrations & Deprecations: Execute in-flight migrations to new, more advanced pipelines and deprecate old pipelines without disrupting customer-facing features.
  • Scalability & Performance: Drive large-org optimizations, inference engine stability, gRPC load balancing, and connection-hardening work to keep the pipeline healthy as the fleet grows.
  • On-Call & Incident Response: Own the team on-call rotation, post-mortem quality, and the new programs required to scale the team.
  • Test Coverage: Expand integration test coverage and search pipeline change testing to catch regressions before they reach production.
  • Telemetry & Dashboards: Define and track the metrics that matter (e.g., API latency, search quality, inference stability, field reliability) via dashboards and service SLOs.

Benefits

  • Healthcare programs that can be tailored to meet your personal health and financial well-being needs; premiums are 100% covered for the employee under at least one plan, and family premiums are 80% covered under all plans
  • Nationwide medical, vision and dental coverage
  • Health Savings Account (HSA) with annual employer contributions and Flexible Spending Account (FSA) with tax-saving options
  • Expanded mental health support
  • Paid parental leave policy & fertility benefits
  • Time off to relax and recharge through our paid holidays, firmwide extended holidays, flexible PTO and personal sick time
  • Professional development stipend
  • Fertility stipend
  • Wellness/fitness benefits
  • Healthy lunches provided daily
  • Commuter benefits