The Research Scientist position on the GenAI Evaluations Foundations team focuses on safety evaluations for large language models (LLMs), with an emphasis on computer vision. The role is central to ensuring that models respond safely to adversarial prompts, which is essential both for open-sourcing models and for protecting Meta's reputation. The scientist will design evaluations and datasets that improve model safety, contributing to the broader progress of AI by identifying and resolving issues early in the model training process.