Google · Posted 3 days ago
$132,000 - $194,000/Yr
Full-time • Mid Level
San Bruno, CA
5,001-10,000 employees

Fast-paced, dynamic, and proactive, YouTube’s Trust and Safety team is dedicated to making YouTube a safe place for users, viewers, and content creators around the world to create and express themselves. Whether understanding and solving their online content concerns, navigating within global legal frameworks, or writing and enforcing worldwide policy, the Trust and Safety team is on the frontlines of enhancing the YouTube experience, building internet safety, and protecting free speech in our ever-evolving digital world.

The YouTube Intelligence Desk is a proactive effort within YouTube to understand emerging threats and work across the organization to mitigate them. In this role, you will look across policies to understand bad actors’ behaviors, motivations, and tactics, identify vulnerabilities across YouTube product surfaces, and leverage data to better articulate risks to the YouTube ecosystem.

At YouTube, we believe that everyone deserves to have a voice, and that the world is a better place when we listen, share, and build community through our stories. We work together to give everyone the power to share their story, explore what they love, and connect with one another in the process. Working at the intersection of cutting-edge technology and boundless creativity, we move at the speed of culture with a shared goal to show people the world. We explore new ideas, solve real problems, and have fun, and we do it all together.

Responsibilities

  • Experiment with and develop techniques to overcome safety features in emergent AI capabilities.
  • Establish standardized, reusable frameworks that can be applied across products.
  • Develop sophisticated prompt sets and jailbreaking strategies to sufficiently test product safety, working with partner teams to leverage and evolve best practices.
  • Expand expertise and serve as a thought partner on novel testing, providing guidance to product launch owners and driving progress and alignment across Trust and Safety teams.
  • Collaborate with stakeholders across Trust and Safety to create and share new insights and approaches for testing, threat assessment, and AI safety.

Minimum qualifications

  • Bachelor's degree or equivalent practical experience.
  • 7 years of experience in Trust and Safety, product policy, privacy and security, legal, compliance, risk management, intel, content moderation, red teaming, AI testing, adversarial testing, or similar.
  • 1 year of experience in data analytics, research, or business process analysis.

Preferred qualifications

  • Master's degree or PhD in a relevant field.
  • Experience working with Google's products and services, particularly Generative AI products, AI systems, and machine learning, and their potential risks.
  • Experience with SQL, data collection/transformation, visualization and dashboard building, or a scripting/programming language (e.g., Python).
  • Experience using data to provide solutions and recommendations and to identify emerging threats and vulnerabilities.
  • Excellent problem-solving and critical thinking skills with attention to detail in an ever-changing environment.
  • Excellent communication and presentation skills (written and verbal), with the ability to influence cross-functionally at various levels.