Discord seeks a Trust & Safety Content Annotation Analyst to strengthen our content moderation systems so they enforce our Community Guidelines more accurately and effectively at scale. Reporting to our Core Initiatives team, you'll annotate user-reported content to train machine learning models that detect harmful behavior. The role spans diverse policy domains, so the ideal candidate balances analytical precision with sound judgment and a calm demeanor, even when handling sensitive or potentially distressing content. You'll apply operational rigor to audit ML outputs, investigate the root causes of policy interpretation errors, and communicate your findings to stakeholders across Trust & Safety. Through this work, you'll deepen your knowledge of policy areas and safety processes while helping protect users responsibly at scale. This position is temporary.
Job Type: Full-time
Career Level: Mid Level
Industry: Web Search Portals, Libraries, Archives, and Other Information Services
Education Level: None listed
Number of Employees: 501-1,000 employees