About The Position

Trust & Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team player with a passion for doing what's right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed, with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensure the highest levels of user safety.

The Workspace Account Security and Abuse team works across Google products to identify good and bad customers and prevent malicious users from misusing our platforms. Our primary mission is to ensure that Workspace accounts are not used to abuse our AI and non-AI products across Google. The team is dedicated to stopping bad actors from misusing our AI services and reducing overall abuse across the ecosystem. Through our enterprise-focused efforts, our goal is to enable the sustainable growth of Google Workspace and Cloud. We collaborate with Community Abuse Technology (CAT), Workspace, Sales, Support, Growth teams, and more. We use state-of-the-art ML, robust rules, and intelligent reputation systems to keep our enterprise customers and broader product ecosystem safe.

At Google we work hard to earn our users' trust every day. Trust & Safety is Google's team of abuse-fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam, and account hijacking. A team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google's products, protecting our users, advertisers, and publishers across the globe in over 40 languages.

Requirements

  • Bachelor's degree or equivalent practical experience.
  • 5 years of experience in signal development, data science, cyber security, anti-abuse, or enterprise account security.
  • Experience with SQL or Python.

Nice To Haves

  • Master's degree in a quantitative discipline.
  • Experience with applied AI, specifically in building and automating threat detection and evaluation pipelines.
  • Knowledge of, or demonstrated interest in, security, malware, phishing, AI safety, or related topics.
  • Excellent problem-solving and critical thinking skills with attention to detail in a fluid environment.
  • Excellent communication and presentation skills and the ability to influence cross-functionally at various levels.

Responsibilities

  • Analyze metrics and complex datasets using data science methodologies to identify underlying vulnerabilities and solve non-routine abuse problems.
  • Partner with cross-functional teams to develop counter-abuse strategies and technical requirements, emphasizing Enterprise account security and ecosystem protection.
  • Leverage applied AI and Machine Learning to automate detection processes and secure emerging AI services against evolving abuse trends.
  • Initiate the design of abuse protection integrations while conducting end-to-end analysis and delivering key results to stakeholders.
  • Participate in the on-call rotation schedule to manage high-priority escalations that may occur outside of standard work hours, including weekends and holidays.