We're building a future where AI systems are not only powerful but also safe, aligned, and robust against misuse. Our team focuses on advancing practical safety techniques for large language models (LLMs) and multimodal systems, ensuring these models remain aligned with human intent and resist attempts to produce harmful, toxic, or policy-violating content.

We operate at the intersection of model development and real-world deployment, with a mission to build systems that proactively detect and prevent jailbreaks, toxic behaviors, and other forms of misuse. Our work blends applied research, systems engineering, and evaluation design to ensure safety is built into our models at every layer.

We're looking for a Senior Staff Engineer to help lead our efforts in designing, building, and evaluating next-generation safety mechanisms for foundation models. You'll guide a team of research engineers focused on scaling safety interventions, building tooling for red teaming and model inspection, and designing robust evaluations that stress-test models under realistic threat scenarios.
Job Type: Full-time
Career Level: Senior
Industry: Professional, Scientific, and Technical Services
Number of Employees: 251-500