Executive Director - Youth AI Safety Institute

Common Sense Media | San Francisco, CA
$250,000 - $300,000 | Onsite

About The Position

The Executive Director is the founding leader of the Youth AI Safety Institute: a mission-driven institution-builder who will launch a technically credible global institute and scale it into a trusted, field-level standard setter. This is a role for a leader already accomplished in their own right, deeply steeped in AI, possessed of genuine gravitas, and fully committed to protecting the next generation in the AI era.

Managing a $20–25M annual operating budget, the Executive Director reports to the CEO of Common Sense Media and serves as a primary public face of the Institute alongside Common Sense Media’s senior leadership.

The ideal candidate brings two equally strong capabilities: (1) deep technical fluency to engage as a peer with frontier AI companies, tooling and evaluation providers, and leading researchers; and (2) exceptional convening ability to drive consensus on youth AI safety standards and hold industry accountable. This leader is a confident relationship-builder equally comfortable making tough calls, navigating conflicts of interest, and engaging the Board with candor.

Requirements

  • An accomplished, recognized leader in AI, technology, or public policy who brings existing credibility, gravitas, and a strong professional network.
  • Deep, current fluency in AI—including large language models, evaluation methodologies, and AI safety frameworks—sufficient to engage as a peer with frontier AI labs.
  • Demonstrated success building or scaling a research, standards, or advocacy organization; proven ability to recruit senior talent and manage a growing team and vendor ecosystem.
  • Strong fundraising track record, including direct responsibility for closing major philanthropic commitments and managing multi-year funder relationships.
  • Exceptional public communication skills; experience as a credible spokesperson with media, policymakers, the AI industry, and broad audiences.
  • Comfortable making difficult calls, navigating conflicts of interest, and operating with transparency under scrutiny, including through credibility-testing events.
  • Experience managing significant operating budgets, vendor ecosystems, and organizational operations.

Nice To Haves

  • Background in or engagement with children’s health, youth development, education, or digital well-being.
  • Experience with AI safety evaluation, red-teaming, adversarial testing, or benchmark development.
  • Understanding of child safety standards and AI governance across international markets; experience with global regulatory bodies.
  • Advanced degree (Ph.D., J.D., M.D., or equivalent) in computer science, public policy, public health, psychology, or a related discipline.

Responsibilities

  • Own the Institute’s multi-year strategy; make disciplined decisions on scope, cadence, and publication under sustained public and industry scrutiny.
  • Direct product testing, benchmarking, and research; oversee rigorous, reproducible AI safety standards and evaluation frameworks; publish findings—including uncomfortable ones—with full transparency.
  • Serve as a primary public face of the Institute; drive industry accountability and translate findings for broad audiences through media, campaigns, and global convenings.
  • Lead closing of major philanthropic commitments; manage multi-year fund structures, funder governance, and supporter relationships.
  • Recruit and retain exceptional staff; build a culture blending startup speed with standards-body rigor; manage vendors and make disciplined build-vs.-partner decisions.
  • Engage credibly at the highest levels with frontier AI labs, policymakers, and global regulatory bodies; drive consensus on and commitment to youth AI safety standards.
  • Manage the $20–25M annual budget; oversee external research partners and technical evaluators with rigor and accountability.
  • Engage Board of Directors and Advisors effectively; uphold conflict-of-interest protocols and editorial independence; maintain durable public trust.