The Productivity and Machine Learning Evaluation team ensures the quality of AI-powered features across a suite of productivity and creative applications, including Creator Studio, used by hundreds of millions of people. The team serves as the primary evaluation function, and its analysis directly informs decisions about model development, feature launches, and product direction.

This role is the analytical core of the team, responsible for making sense of evaluation signals and real-world user behavior. The work involves designing feature-level quality metrics, collaborating with partner teams on data collection strategies, and translating evaluation data into concise, actionable insights that drive decisions. This is an opportunity to define how AI feature quality is measured and to directly shape what gets shipped.

DESCRIPTION

Day-to-day work involves analyzing evaluation results and identifying trends, regressions, and segment-level patterns across multiple AI features. It also includes collaborating with partner teams on data collection strategies, ensuring evaluation data is representative of real-world usage, and designing the metrics framework that leadership uses to make decisions on AI features.

Typical deliverables include:
- Feature-level quality metrics and dashboards
- Evaluation analysis reports
- Data collection requirements
- Dataset representativeness audits
- Concise metric summaries for decision-makers
Job Type
Full-time
Career Level
Mid Level
Number of Employees
5,001-10,000 employees