Trust & Safety Content Annotation Analyst

Discord
San Francisco, CA
$52 - $58

About The Position

Discord seeks a Trust & Safety Content Annotation Analyst to strengthen our content moderation systems so they enforce our Community Guidelines more accurately and effectively at scale. Reporting to our Core Initiatives team, you'll annotate user-reported content to train machine learning models that detect harmful behavior. The role spans diverse policy domains, so the ideal candidate balances analytical precision with sound judgment and a calm demeanor, even when handling sensitive or potentially distressing content. You'll apply operational rigor to audit ML outputs, investigate the root causes of policy interpretation errors, and communicate your findings to stakeholders across Trust & Safety. Through this work, you'll deepen your knowledge of policy areas and safety processes while helping protect users responsibly at scale. This position is temporary.

Requirements

  • 2+ years of experience in trust & safety or policy work on a social or online platform.
  • A track record of strong judgment and adaptability across the broad range of trust & safety harm types that our Community Guidelines address.
  • Ability to work within structured policy taxonomies and annotation frameworks while maintaining consistency: you'll review a high volume of content, and the sustained, deep focus needed to move quickly through a backlog will be crucial.
  • A calm, resilient demeanor when handling sensitive or potentially distressing content.
  • A strong curiosity about online culture and communication, and the nuances of online speech.
  • Strong communication skills to document annotation rationales and convey findings and recommendations to a wide range of stakeholders, including policy, quality assurance, and machine learning engineering teams.
  • The ability to move between specific annotation decisions and broad operational trends, evidenced by prior exposure to quality assurance, root cause analysis, process improvement, or operational excellence initiatives.

Nice To Haves

  • Hands-on experience with data annotation or dataset creation for machine learning applications.
  • Familiarity with prompt engineering and ongoing iteration on LLMs.
  • Familiarity with Discord or similar community-based platforms as a user or moderator.
  • Experience working on globally distributed, hybrid work teams.

Responsibilities

  • Annotate user-reported content with precision and consistency to build high-quality training datasets that capture the nuance ML models need to make accurate decisions.
  • Quickly internalize internal policy interpretations and apply consistent judgments across multiple harm categories, including social engineering, threats, graphic content, teen safety, and harassment.
  • Work with policy teams to navigate edge cases and ambiguous content, ensuring your annotations reflect current Community Guidelines and platform context.
  • Audit ML model decisions to spot misclassification patterns, then investigate why automated and human judgments diverge (an illustrative sketch of this kind of audit follows this list).
  • Partner with ML engineers and policy stakeholders to share insights on model performance, propose new annotation frameworks, and flag content categories requiring stronger detection.
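For a concrete flavor of the auditing responsibility above, here is a minimal sketch in Python of comparing human annotations against model decisions to surface the most common disagreements. The file name and column names are illustrative assumptions, not Discord's actual schema or tooling:

```python
from collections import Counter
import csv

def load_labels(path="annotation_audit.csv"):
    # Hypothetical export: one row per reviewed item, with the human
    # annotator's label and the model's predicted label.
    with open(path, newline="") as f:
        return [(row["human_label"], row["model_label"])
                for row in csv.DictReader(f)]

def divergence_report(pairs):
    """Count (human, model) label pairs and print the most frequent disagreements."""
    confusion = Counter(pairs)
    total = sum(confusion.values())
    disagreements = {k: v for k, v in confusion.items() if k[0] != k[1]}
    for (human, model), n in sorted(disagreements.items(), key=lambda kv: -kv[1]):
        print(f"human={human!r} vs model={model!r}: {n} items ({n / total:.1%})")

if __name__ == "__main__":
    divergence_report(load_labels())
```

The high-count (human, model) pairs this prints are the misclassification patterns worth root-causing, e.g., distinguishing ambiguous policy language from genuine model error.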


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Industry: Web Search Portals, Libraries, Archives, and Other Information Services
  • Education Level: No Education Listed
  • Number of Employees: 501-1,000 employees
