About The Position

As a Safety Operations Tutor Manager, you will contribute to xAI's mission by leading the team of Safety Operations Tutors responsible for training and refining Grok to enforce our terms of service and support its functions. Your leadership will directly impact the safety of our products, X and Grok, by minimizing existential risks, enforcing xAI's rules, promoting responsible development, and helping to prevent illegal and harmful content.

Requirements

  • Proven leadership and people management experience in AI-driven operations, with a track record of developing high-performing teams.
  • Expertise in improving Large Language Models (LLMs) to maximize enforcement and support efficiency, along with the ability to propose and implement solutions that increase the security and safety of our platform.
  • Proven experience in online safety and reducing harm to protect our users and preserve Free Speech in the global public square.
  • Ability to interpret, apply, and train teams on xAI safety policies effectively.
  • Proficiency in analyzing complex scenarios and operational metrics, with strong skills in ethical reasoning, risk assessment, and team performance optimization.
  • Strong ability to utilize resources, guidelines, and frameworks for accurate safety-focused actions, escalations, and talent development.
  • Strong leadership, communication, interpersonal, analytical, and ethical decision-making skills.
  • Quality assurance: ability to hold the team to our high standard of quality work, managing performance as needed.
  • Commitment to continuous improvement of processes, people, and operations to prioritize safety and risk mitigation.
  • Expertise in data analysis to identify emerging abuse vectors, uncover opportunities for operational efficiencies, and design automations that strengthen enforcement effectiveness and platform safety.

Nice To Haves

  • Experience managing teams in Trust and Safety for a social media company, leveraging AI or other automation tools.
  • Expertise in leading red-teaming and adversarial testing of Large Language Models to proactively identify novel abuse vectors, jailbreaks, and safety failure modes, with a proven ability to translate findings into concrete improvements for enforcement systems, team processes, and platform robustness.

Responsibilities

  • Lead, mentor, and manage the team that monitors and takes action on content and behavior that goes against our terms of service, escalating as needed.
  • Oversee the processing of appeals and ensure proper labeling of use cases in the system.
  • Guide the team’s use of proprietary software to provide labels, annotations, and inputs on projects involving safety protocols, risk scenarios, and policy compliance.
  • Ensure the delivery of high-quality curated data that reinforces xAI’s rules and ethical alignment.
  • Mentor team members, conduct performance management and calibration, and drive feedback on tasks that strengthen the AI's defenses for detecting illegal and unethical behavior; identify emerging abuse vectors and implement process improvements and automations.
  • Align Grok with our rules enforcement while collaborating cross-functionally to strengthen overall safety operations.