Research Engineer, AI Safety & Alignment

Character.AIMenlo Park, CA
85d

About The Position

Joining us as a Research Engineer, you'll be at the forefront of tackling one of the most critical challenges in AI today: safety and alignment. Your work will be pivotal in understanding and mitigating the risks of advanced AI, conducting foundational research to make our models safer, and solving the core technical problems of AI alignment—ensuring our models behave in accordance with human values and intentions. The Safety team is dedicated to pioneering and implementing techniques that make our models more robust, honest, and harmless. As a Research Engineer, you will bridge the gap between theoretical research and practical application, writing high-quality code to test hypotheses and integrating successful safety solutions directly into our products. Your research will not only protect millions of users but also contribute to the broader scientific community's understanding of how to build safe, beneficial AI.

Requirements

  • Hold a PhD (or equivalent experience) in a relevant field such as Computer Science, Machine Learning, or a related discipline.
  • Write clear and clean production-facing and training code.
  • Experience working with GPUs (training, serving, debugging).
  • Experience with data pipelines and data infrastructure.
  • Strong understanding of modern machine learning techniques, particularly transformers and reinforcement learning, with a focus on their safety implications.
  • Are passionate about the responsible development of AI and dedicated to solving complex safety challenges.

Nice To Haves

  • Experience with product experimentation and A/B testing.
  • Experience training large models in a distributed setting.
  • Familiarity with ML deployment and orchestration (Kubernetes, Docker, cloud).
  • Experience with explainable AI (XAI) and interpretability techniques.
  • Have research in AI safety, alignment, ethics, or a related area.
  • Knowledge of the broader societal and ethical implications of AI, including policy and governance.
  • Publications in relevant academic journals or conferences in the field of machine learning.

Responsibilities

  • Develop and implement novel evaluation methodologies and metrics to assess the safety and alignment of large language models.
  • Research and develop cutting-edge techniques for model alignment, value learning, and interpretability.
  • Conduct adversarial testing to proactively uncover potential vulnerabilities and failure modes in our models.
  • Analyze and mitigate biases, toxicity, and other harmful behaviors in large language models through techniques like reinforcement learning from human feedback (RLHF) and fine-tuning.
  • Collaborate with engineering and product teams to translate safety research into practical, scalable solutions and best practices.
  • Stay abreast of the latest advancements in AI safety research and contribute to the academic community through publications and presentations.
© 2024 Teal Labs, Inc
Privacy PolicyTerms of Service