Technical Lead, Safety Research

OpenAI
San Francisco, CA

About The Position

The Safety Systems team is responsible for ensuring that our best models can be safely deployed in the real world to benefit society. The team is at the forefront of OpenAI's mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency.

The Safety Research team aims to fundamentally advance our ability to precisely implement robust, safe behavior in AI models and systems. As capabilities continue to advance, our approaches to safety must improve and scale to address evolving risks: making our systems robust against harmful misuse, and ensuring that potential misalignment cannot cause harm. We work on these problems in a way that is grounded in our current models and methods but that generalizes to future systems. We are growing our team to expand our research on methods that will improve safety for AGI and beyond. This will include exploratory research, for example: new methods to improve safety common sense and generalizable reasoning, new evaluations to elicit or detect misalignment or inner goals of the AI, and new methods to support human oversight of long-running tasks.

Requirements

  • 4+ years of experience in AI safety, especially in areas such as RLHF, adversarial training, robustness, and fairness and bias.
  • A Ph.D. or other advanced degree in computer science, machine learning, or a related field.
  • Experience in safety work for AI model deployment.
  • An in-depth understanding of deep learning research and/or strong engineering skills.
  • A passion for AI safety and for making cutting-edge AI models safer for real-world use.

Nice To Haves

  • A strong track record of practical research on safety and alignment, ideally in AI and LLMs.
  • Experience leading large research efforts.
  • A team player who enjoys collaborative work environments.

Responsibilities

  • Set research directions and strategies to make our AI systems safer, more aligned, and more robust.
  • Coordinate and collaborate with cross-functional teams, including the rest of the research organization, Trust & Safety, policy, and related alignment teams, to ensure that our AI meets the highest safety standards.
  • Actively evaluate and understand the safety of our models and systems, identifying areas of risk and proposing mitigation strategies.
  • Conduct state-of-the-art research on AI safety topics such as RLHF, adversarial training, robustness, and more.
  • Implement new methods in OpenAI’s core model training and launch safety improvements in OpenAI’s products.

Benefits

  • Relocation assistance for new employees.
  • Hybrid work model: 3 days per week in the office.

What This Job Offers

  • Job Type: Full-time
  • Career Level: Senior
  • Education Level: Ph.D. or professional degree
  • Number of Employees: 1,001-5,000 employees
