About The Position

Trust and Safety team members are tasked with identifying and taking on the biggest challenges to the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and team player with a passion for doing what's right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed - with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensure the highest levels of user safety.

As a part of the Global Policy and Standards team, you will be a subject matter expert who promotes user trust and upholds Google's policies. You will focus on analyzing, developing, and implementing policies to help ensure Google has the best policies to protect users across its Workspace and UGC products such as Gmail, Drive, Classroom, Chat, and Meet, which are used by billions of users each day. You will act as an advisor to product, engineering, and enforcement teams on how and why our policies matter, and help resolve high-profile policy issues coming from the public, regulators, users, customers, and internal stakeholders. This role works with sensitive content or situations and may be exposed to graphic, controversial, or upsetting content.

At Google we work hard to earn our users' trust every day. Trust and Safety is Google's team of abuse-fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam, and account hijacking. A team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google's products, protecting our users, advertisers, and publishers across the globe in over 40 languages.

Requirements

  • Bachelor's degree or equivalent practical experience.
  • 5 years of experience in a policy, legal, trust and safety, or technology environment.

Nice To Haves

  • JD, MBA, or Master’s degree.
  • Experience with user-generated products and content moderation processes.
  • Experience with Google’s user-facing AI products (e.g. Gemini) and building out policies for generative AI content.
  • Experience with policy and knowledge of the technology sector and key policy issues affecting the internet (e.g. harmful content, privacy, AI).
  • Experience with successfully driving evidence/data-based policies.
  • Excellent problem-solving and critical thinking skills with attention to detail in an ever-changing environment.

Responsibilities

  • Drive policy strategy creation and execution across Google Workspace and User-Generated Content (UGC) products to continuously identify needs and improve processes.
  • Resolve high-profile policy issues coming from the public, regulators, users, customers and internal stakeholders.
  • Identify abuse trends, gaps, and opportunities across Google Workspace and UGC products and partner with cross-functional teams to develop, evaluate and improve enforcement processes and guidelines.
  • Advise cross-functional teams on policy considerations for Workspace and UGC AI product launches, design best-fit policy solutions, and collaborate to influence product decisions and prioritization and improve user experience.
  • Advance education around the first principles that define our approach to trust risk areas.