Senior Product Manager, Generative AI Chat Safety, YouTube

Google · San Bruno, CA
$183,000 - $271,000

About The Position

At Google, we put our users first. The world is always changing, so we need Product Managers who are continuously adapting and excited to work on products that affect millions of people every day. In this role, you will work cross-functionally to guide products from conception to launch by connecting the technical and business worlds. You can break down complex problems into steps that drive product development. One of the many reasons Google consistently brings innovative, world-changing products to market is because of the collaborative work we do in Product Management. Our team works closely with creative engineers, designers, marketers, etc. to help design and develop technologies that improve access to the world's information. We're responsible for guiding products throughout the execution cycle, focusing specifically on analyzing, positioning, packaging, promoting, and tailoring our solutions to our users.

The Trust and Safety team works to keep the platform secure. The team leverages technology, including AI and machine learning, alongside policies and partnerships to identify and mitigate risks. As a Product Manager, you will lead the safety strategy for YouTube's next generation of interactive features. You will be responsible for defining and building the protections that allow the community to connect and engage. This involves cross-functional collaboration with product, engineering, policy, and operations teams to embed safety-by-design principles from the ground up. You will develop AI-powered systems to detect and prevent issues.

At YouTube, we believe that everyone deserves to have a voice, and that the world is a better place when we listen, share, and build community through our stories. We work together to give everyone the power to share their story, explore what they love, and connect with one another in the process. Working at the intersection of cutting-edge technology and boundless creativity, we move at the speed of culture with a shared goal to show people the world. We explore new ideas, solve real problems, and have fun — and we do it all together.

Requirements

  • Bachelor's degree or equivalent practical experience.
  • 8 years of experience in product management or related technical roles.
  • 2 years of experience in Large Language Model (LLM) steering, including prompt engineering, model alignment, or crafting system instructions such as contextual safety instructions to control model persona, tone, and boundary adherence.
  • 2 years of experience in building responsibility/safety stacks.

Nice To Haves

  • Experience in developing strategy.
  • Experience with approaches to model evaluations, including designing and interpreting safety evaluations, managing adversarial datasets (red teaming), and defining quantitative metrics to measure improvement over time.
  • Experience in partnering with engineering to make architectural and product decisions, such as weighing the latency and cost of external guardrails against the effectiveness of upstream model training.
  • Familiarity with age-gating, parental controls, or Children's Online Privacy Protection Act (COPPA)/ General Data Protection Regulation - Kids (GDPR-K) compliance.
  • Ability to bring perspective on product quality that extends beyond safety to include user wellness.

Responsibilities

  • Define and own the strategic roadmap for multi-turn safety, building safety principles and translating them into a product strategy.
  • Determine how to effectively implement safety by design into chatbot architecture, partnering with legal, policy, research, engineering, and customer teams.
  • Define product principles that prioritize long-term user mental health and wellness, and support them with testing, metrics, and responsibility systems.
  • Own the strategy behind system instructions, contextual safety instructions, and classifiers that steer feature behavior, translating high-level policy into product safety/responsibility requirements.
  • Architect safety evaluation frameworks and lead the development of evaluation metrics, adversarial testing protocols, and benchmarks so customers can validate safety prior to deployment, covering security, safety, and behavioral risks.
  • Synthesize the internal and external AI landscape to keep our safety systems, capabilities, and user experience current.