User Protection is an organization dedicated to protecting Google's users from abuse, account compromise, and other harms online. Our team works with Content Safety (CS) and User Protection Platform and Services (UPS), which develop tools to protect users from abusive content at scale, often leveraging AI technology to do so. Our team provides data science capabilities to these two organizations and works directly with product and engineering to evaluate, understand, and improve the quality of our protections. Organizationally, we are part of a large data science team in Core, which provides ample opportunities for knowledge sharing, development, and learning from other data scientists working in adjacent domains.

CS and UPS equip Google products with tools to protect users from abuse and harm. As a Data Scientist working with CS and UPS, you'll help evaluate, understand, and improve our abuse protections, which are generally built with and for AI tools. We work closely with cross-functional product teams on specific content safety classifiers, as well as on generic strategies and tooling for understanding such classifiers. Our team designs safety data evaluations and safety mitigation evaluations, including LLM-as-judge, prompt injection, and Responsible AI testing. We also work with flagship GenAI product teams to understand Google-wide GenAI safety postures in production traffic.
Job Type: Full-time
Career Level: Senior