Product Trust Manager, Learning Commons

Learning Commons · Redwood City, CA
$169,000 - $211,000 · Hybrid

About The Position

Learning Commons aims to scale proven teaching and learning practices to benefit every learner by building AI infrastructure that better connects the way students learn to the tools they learn with. Operating at the intersection of technology, research, and philanthropy, we pair product development with grantmaking to bring learning science into the tools educators and students use every day. Our work is grounded in a deep belief: when technology reflects the realities of classrooms and the science of how students learn, it can meaningfully strengthen teaching and unlock new possibilities for students.

The rise of generative AI offers a once-in-a-generation opportunity to accelerate the translation of research insights into practical, classroom-ready tools: tools that honor teachers' expertise, adapt to students' needs, and make effective learning practices easier to access, implement, and sustain. In today's fragmented edtech landscape, school districts are often left piecing together products that don't always align with curricula or instructional needs. While AI holds enormous potential to support teachers and students, it can only deliver on that promise when grounded in research, high-quality educational data, and expert evaluation. That's why we're building open, public-purpose infrastructure (datasets, rubrics, and resources) that raises the standard for educational tools and creates more consistent, impactful learning experiences for all students and teachers.

We are seeking a Product Trust Manager to join our Education Trust team. In this role, you will lead technically grounded trust initiatives across our education platform, with deep ownership of platform integrity, data privacy, responsible AI development, DPIAs, API access patterns, data licensing, and restricted data access controls.
You will translate trust, legal, and policy requirements into scalable product and platform mechanisms.

Requirements

  • 8+ years of experience in product risk, platform governance, trust & safety, data privacy, policy, or related domains, including practical experience working closely with technical and AI/ML teams. A strong hands-on understanding of API-based platforms (authentication, authorization, access scopes), restricted data access frameworks (RBAC, approval workflows, purpose-based access), and common system failure modes that create privacy or data misuse risk.
  • Demonstrated expertise in privacy, data, and platform risk governance, including leading DPIAs/PIAs and privacy risk assessments; translating outcomes into product and engineering requirements; governing data licensing and contractual data use limitations; and applying data protection regulations (e.g., GDPR, CCPA/CPRA) to APIs, developer access, and platform ecosystems.
  • Proven ability to lead cross-functional initiatives involving product, engineering, legal, security, and operations in fast-moving environments.
  • Analytical, systems-oriented mindset with the ability to spot structural risk early and design scalable mitigations.
  • Clear communicator who can translate between legal requirements, technical implementation, and product strategy.

Responsibilities

  • Execute the Trust strategy across AI systems, APIs, data platforms, and partner integrations, ensuring product integrity and compliant use of data at scale.
  • Own and define trust requirements for consent alignment, API access controls (authentication vs. authorization, scoped permissions, rate limiting, logging and monitoring), retention controls, deletion workflows, user rights, data licensing, and data use restrictions. Partner with engineering and product teams to translate these requirements into enforceable platform controls.
  • Establish, document, and maintain clear governance standards and policy frameworks across the AI and data lifecycle—including training data ingestion, model evaluation, inference APIs, downstream consumption, and third-party integrations—and ensure they are consistently understood and applied across teams.
  • Identify and mitigate structural risks across the AI and data lifecycle.
  • Collaborate with Legal, Product Counsel, Privacy, and Security to translate GDPR, CCPA/CPRA, and emerging AI regulations into documented policy guidance and enforceable developer-facing requirements.

Benefits

  • Generous employer match on employee 401(k) contributions to support planning for the future.
  • Paid time off to volunteer at an organization of your choice.
  • Funding for select family-forming benefits.
  • Relocation support for employees who need assistance moving.