About The Position

Life360 is seeking a Sr. Staff AI Security Engineer to join its AI Native Platform team. The role is central to securing Life360's AI infrastructure as it evolves: the engineer reports directly to the CISO and works closely with the team building the platform's layers. The position is execution-focused with significant architectural influence, driving delivery across key security domains while shaping design decisions.

The work involves building and validating security patterns in a rapidly developing field: participating in architecture reviews, owning security implementations, developing controls that let teams build AI quickly and safely, and writing security playbooks for AI systems. The data involved is highly sensitive, including real-time location and family relationship data, so security is a core product obligation rather than a compliance exercise. This role is part of a growing security function within the AI Native Platform team.

Requirements

  • 12+ years in security engineering with depth in application security, cloud security, IAM, or detection, and a track record of building controls that earn adoption.
  • Hands-on builder shipping security controls that hold up in production; a practitioner who can define lasting patterns.
  • Hands-on fluency with LLM and agentic systems, including building with, breaking, and shipping fixes for prompt pipelines, RAG architectures, and multi-agent orchestration.
  • Solid grounding in IAM for non-human systems: service identities, OAuth, secrets management, RBAC/ABAC, and least-privilege architecture at scale.
  • Experience with production telemetry and detection, including defining detections and building response paths for threat surfaces without established playbooks.
  • Comfort with ambiguity and in-flight builds; energized by figuring things out, writing first-draft standards, testing approaches, and scaling what works.
  • Strong cross-functional communication skills and the ability to push back constructively, communicating risk, tradeoffs, and technical decisions across engineering, product, and security leadership.
  • Familiarity with NIST AI RMF, OWASP LLM Top 10, and adjacent compliance environments for consumer data at scale.
  • Bachelor's degree or equivalent experience in Computer Science, Information Security, or a related field.

Nice To Haves

  • Experience with frontier model API security, tool-use authorization patterns, or access governance for AI systems at scale.
  • Hands-on experience with multi-agent orchestration frameworks (LangGraph, AutoGen, CrewAI, or similar) and their trust, identity, and authorization challenges.
  • Familiarity with knowledge graph architectures, vector stores, or RAG systems and the access control and data boundary problems they introduce.
  • Red teaming or adversarial testing against AI systems: prompt injection, jailbreaks, data extraction, model inversion, or supply chain attacks.
  • Background in consumer technology or another domain where personal data sensitivity is a core product obligation.
  • Experience designing or reviewing security for internal enterprise AI platforms serving non-technical users.

Responsibilities

  • Secure how Life360 accesses frontier models by designing, building, and iterating access controls, policy enforcement, and authorization patterns.
  • Build secure patterns for MCP access and tool-use authorization, including vetting, risk-tiering, and governing integrations with external tools and services.
  • Design and build the identity and authorization model for autonomous agents, including service identities, scoped credentials, and least-privilege access patterns, defining and enforcing trust boundaries.
  • Design and build agentic observability and adversarial defenses, including telemetry pipelines, behavioral monitoring, and architecture-level defenses against prompt injection and related attacks.
  • Shape security for the common AI end-user platform by leading design reviews, building access controls, data boundary enforcement, and abuse detection.
  • Secure the shared knowledge layer by defining access control and data governance for retrieval augmented and reasoning systems.
  • Build AI supply chain integrity into the platform through developing model provenance practices, service vetting, and dependency controls.
  • Partner with Privacy, Legal, and Data Platform to ensure appropriate controls are built into pipelines handling sensitive data, including data involving minors.

Benefits

  • Competitive pay and benefits
  • Medical, dental, vision, life and disability insurance plans (100% paid for employees)
  • 401(k) plan with company matching program
  • Mental Wellness Program & Employee Assistance Program (EAP) for mental well-being
  • Flexible PTO
  • 13 company-wide days off throughout the year
  • Winter and Summer Weeklong Synchronized Company Shutdowns
  • Learning & Development programs
  • Equipment, tools, and reimbursement support for a productive remote environment
  • Free Life360 Platinum Membership for your preferred circle
  • Free Tile Products