About The Position

About the Team

The Safety Product (or Platform Responsibility) team is at the forefront of building and optimizing content safety systems. We leverage advanced large language models to improve review efficiency, risk control, and user trust. Working closely with business and technical stakeholders, we deliver scalable solutions that keep pace with rapid global growth.

As a project intern, you will engage in impactful short-term projects that give you a taste of real-world professional experience. You will gain practical skills through on-the-job learning in a fast-paced environment and develop a deeper understanding of your career interests.

Applications are reviewed on a rolling basis, so we encourage you to apply early. Successful candidates must be able to commit to an internship of at least 3 months.

Requirements

  • Currently pursuing an undergraduate or master's degree
  • Excellent Content Sense & Curiosity: Keen sensitivity to and curiosity about content and social trends. You actively explore and understand the risks and context behind various types of content.
  • Solid Operational Fundamentals: Basic knowledge of safety system design and content lifecycle (from trigger to intervention). Extremely detail-oriented, pragmatic, and a hands-on problem-solver.
  • Strong Learning Agility: Ability to quickly learn new knowledge and tools (e.g., LLM applications) and apply them to address evolving business needs and novel risks.
  • Resilience & Ownership: Excellent composure and problem-solving skills under high pressure. You take full ownership of your responsibilities and outcomes.

Nice To Haves

  • Prior experience in search, recommendation, news, content safety, or risk management operations at a major internet company.
  • Experience with sensitive word systems, content moderation systems, or policy platforms.
  • Experience participating in data training, evaluation, or application projects for large language models (LLMs).

Responsibilities

  • Risk Management
      • Mechanism Design: Participate in designing and optimizing proactive discovery, monitoring, and early-warning mechanisms for search-related risks.
      • Intervention Systems: Build and maintain customized intervention platforms and strategies for search, including but not limited to: sensitive-keyword platform management, data training and evaluation for risk identification models, and designing intervention workflows.
      • Emergency Response: Execute rapid and precise interventions for sudden search safety incidents. Conduct post-incident analysis and reviews to drive systematic improvements.
      • Stress Testing & Fortification: Design and execute stress tests that simulate extreme scenarios, validating and strengthening the robustness of our safety systems.
  • Trending Project Support
      • Quality Control: Oversee, audit, and evaluate the quality of content review for trending topics, identifying potential risks and process gaps to drive improvements in standards.
      • Customized Safety Strategies: Develop tailored safety strategies (e.g., special monitoring for specific topics, unique display logic) and rapid intervention capabilities for the trending-topics business, ensuring content surfaces quickly while remaining safe.