About The Position

The Safety Product (or Platform Responsibility) team is at the forefront of building and optimizing content safety systems. We leverage advanced large language models to enhance review efficiency, risk control, and user trust. Working closely with business and technical stakeholders, we deliver scalable solutions that keep pace with rapid global growth.

Requirements

  • 1+ years of product management experience in consumer-facing mobile/web companies, preferably in AI/ML or Trust & Safety domains.
  • Hands-on experience with AI/ML systems, particularly in natural language processing, and the ability to translate model capabilities into effective product solutions.
  • Proven track record of delivering impactful products or process improvements that enhance safety, governance, or user experience.
  • Strong data analysis skills, with the ability to evaluate model quality, interpret trade-offs, and guide iterations based on data insights.
  • Excellent communication and project management skills, capable of influencing and aligning cross-functional stakeholders.

Nice To Haves

  • Prior experience in Trust & Safety, safety-by-design, or risk/governance products.
  • Familiarity with online safety regulations, AI policy trends, and emerging governance practices in the tech industry.
  • Experience managing products with safety-related metrics (e.g., precision/recall, error trade-offs, governance impact).
  • Experience with human-in-the-loop systems and machine moderation for reducing harmful content.
  • Experience working with international teams across diverse markets, time zones, and cultural contexts.

Responsibilities

  • Lead the optimization of large language model (LLM) prompts in accordance with community guidelines, ensuring they align with safety and governance standards.
  • Develop and maintain processes for continuously cleaning and refining positive and negative examples to enhance model performance and accuracy.
  • Produce and curate high-quality ground truth datasets that support the training and validation of AI-driven safety frameworks.
  • Collaborate with cross-functional teams (ML engineers, data scientists, policy experts, operations) to implement structured and interpretable systems that improve platform safety and governance.
  • Analyze data trends and model outputs to identify areas for improvement and drive the iteration of prompt strategies.
  • Establish workflows for the regular evaluation and prioritization of prompt modifications based on community feedback and safety metrics.

What This Job Offers

  • Job Type: Full-time
  • Career Level: Manager
  • Industry: Broadcasting and Content Providers
  • Education Level: No Education Listed
  • Number of Employees: 5,001-10,000 employees
