About The Position

User Protection is an organization dedicated to protecting Google's users from abuse, account compromise, and other harms online. Our team works with Content Safety (CS) and User Protection Platform and Services (UPS), which develop tools to protect users from abusive content at scale, often leveraging AI technology to do so. Our team provides data science capabilities to these two organizations and works directly with product and engineering to evaluate, understand, and improve the quality of our protections. Organizationally, we are part of a large data science team in Core, which provides ample opportunities for knowledge sharing, development, and learning from other data scientists working in adjacent domains.

CS and UPS equip Google products with tools to protect users from abuse and harm. As a Data Scientist working with CS and UPS, you'll help evaluate, understand, and improve our abuse protections, which are generally built with and for AI tools. We work closely with cross-functional product teams both on specific content safety classifiers and on generic strategies and tooling for understanding them. Our team designs safety data evaluations and safety mitigation evaluations, including LLM-as-judge, prompt injection, and Responsible AI testing. We also work with flagship GenAI product teams to understand Google-wide GenAI safety postures in production traffic.

Requirements

  • Master's degree in Statistics, Data Science, Mathematics, Physics, Economics, Operations Research, Engineering, or a related quantitative field, or equivalent practical experience.
  • 5 years of experience using analytics to solve product or business problems, coding (e.g., Python, R, SQL), querying databases, or statistical analysis, or 3 years of experience with a PhD degree.
  • 4 years of experience in data analysis or related fields as a statistician or data scientist.
  • Experience with statistical software (e.g., R, Python, MATLAB, pandas) and database languages (e.g., SQL).
  • Experience with statistical methodologies.

Nice To Haves

  • PhD degree in Statistics, Data Science, Mathematics, Physics, Economics, Operations Research, Engineering, or a related quantitative field.
  • 8 years of work experience using analytics to solve product or business problems, coding (e.g., Python, R, SQL), querying databases, or statistical analysis, or 6 years of work experience with a PhD degree.
  • Experience in training, validating or optimizing language models or LLM-based classifiers.
  • Experience analyzing multi-modal data (image, audio, or video).
  • Experience with GenAI safety and red-teaming.

Responsibilities

  • Solve ambiguous problems in the Generative Artificial Intelligence safety space, including agent-based safety.
  • Develop quantitative methodologies to curate training data and evaluation data from synthetic data and real-world production data for improving content safety mitigations. Design and evaluate models to mathematically express and solve defined problems with limited precedent.
  • Drive cross-functional alignment on measuring violation rates and unjustified refusals across multiple flagship Generative Artificial Intelligence product surfaces. Identify and clarify business or product questions.
  • Provide feedback and refine business questions into tractable analysis, evaluation metrics, or mathematical models. Drive clarity and coherence in understanding safety at scale across Google.
  • Own the process of gathering, extracting, and compiling data across sources (e.g., SQL, R, Python). Format, re-structure, or validate data to ensure quality, and review the dataset to ensure it is ready for analysis.

Benefits

  • Bonus
  • Equity
  • Benefits