Model Launch Specialist, Responsible AI

Google, Mountain View, CA
$160,000 - $237,000

About The Position

Trust & Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team player with a passion for doing what's right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed - with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensure the highest levels of user safety.

The Trust and Safety Launch Governance and Operations (LGO) team is a critical function that accelerates Google's launch velocity by delivering predictable, principle-driven responsibility governance workflows. We operate at the intersection of innovation and safety, evaluating risks and strategic opportunities for new products and features. We ensure development teams are equipped with knowledge of Google's AI Principles and responsibility standards and have the tools to implement them early in the development life cycle. LGO provides transparent, streamlined workflows for model launches, documenting evaluations and mitigations that directly inform product-level responsibility decisions. Finally, we build reporting infrastructure to provide stakeholders with the snapshots and briefings necessary to ensure a coordinated, responsible market entry for launches.

At Google we work hard to earn our users' trust every day. Trust & Safety is Google's team of abuse-fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam, and account hijacking. As a team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google's products, protecting our users, advertisers, and publishers across the globe in over 40 languages.

Requirements

  • Strong technical know-how and problem-solving skills.
  • Ability to work globally and cross-functionally.
  • Experience in promoting user safety and trust.
  • Knowledge of AI Principles and responsibility standards.

Responsibilities

  • Identify and tackle problems related to the safety and integrity of Google products.
  • Use technical skills and user insights to protect users and partners from abuse.
  • Collaborate with engineers and product managers to address abuse and fraud cases.
  • Promote trust in Google and ensure user safety.
  • Deliver predictable, principle-driven responsibility governance workflows.
  • Evaluate risks and strategic opportunities for new products and features.
  • Equip development teams with knowledge of Google's AI Principles and responsibility standards.
  • Document evaluations and mitigations for model launches.
  • Build reporting infrastructure for stakeholders regarding market entry for launches.

Benefits

  • Base salary range of $160,000-$237,000.
  • Bonus and equity options.
  • Comprehensive benefits package.

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Industry: Web Search Portals, Libraries, Archives, and Other Information Services
