Trust & Safety Specialist - ModSquad / Adobe Brand Rep

ModSquad
$80,000 - $100,000 · Hybrid

About The Position

ModSquad is seeking a Trust & Safety Specialist to join our team as an embedded Brand Representative supporting Adobe's child safety program at the company's San Francisco office. As a ModSquad employee, you will work directly within Adobe's Trust & Safety operations team, serving as a front-line operator for CSAM queue review, NCMEC CyberTip reporting, and nuanced content assessment that directly supports law enforcement outcomes. This is not a generalist content moderation role: the core of this work is high-exposure child safety operations embedded inside one of the world's leading creative technology companies. If protecting the most vulnerable users on the internet is work you're called to do, read on.

Requirements

  • Demonstrated, hands-on experience reviewing and classifying CSAM in a professional Trust & Safety capacity. Candidates without direct CSAM review experience will not be considered.
  • Working knowledge of the A1–B2 NCMEC classification system and the federal reporting requirements that govern each category.
  • Experience submitting or supporting NCMEC CyberTip reports, including proper documentation standards.
  • Ability to review disturbing visual content — including explicit child sexual abuse material — with accuracy, objectivity, and emotional regulation on a daily basis.
  • Strong analytical judgment for nuanced, gray-area content decisions where classification is not immediately obvious.
  • High attention to detail and ability to maintain documentation standards under volume and time pressure.
  • Full-time availability (40 hrs/week), based in the San Francisco Bay Area, with 2–3 days per week required on-site at Adobe’s SF office.

Nice To Haves

  • Prior experience with NCMEC reporting workflows, CyberTipline submissions, or law enforcement coordination in a Trust & Safety context.
  • Familiarity with ticketing systems such as Zendesk or ServiceNow.
  • Experience reviewing AI-generated content for policy violations.
  • Familiarity with Adobe products and the creative professional use cases that distinguish legitimate from harmful content.

Responsibilities

  • Review and classify escalated visual content (images and video) using the A1–B2 NCMEC classification framework, making accurate reportability determinations in accordance with federal statute and Adobe’s child safety policies.
  • Submit CyberTip reports to NCMEC via established reporting protocols, ensuring documentation is accurate, complete, and legally defensible.
  • Conduct network investigations of repeat offenders, reviewing account signals (IP, email structure, behavioral patterns) to assess recidivism risk and support account deactivation decisions.
  • Review AI-generated content to assess user intent and identify potential child safety policy violations.
  • Identify trends in queue data and surface insights to inform process improvements and policy calibration.
  • Maintain detailed case documentation and escalate gray-area content for team calibration when classification is uncertain.

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: Not specified
  • Number of Employees: 501-1,000 employees
