As a Research Engineer in Alignment Science at Anthropic, you will design and conduct machine learning experiments aimed at understanding and steering the behavior of advanced AI systems. The role combines scientific inquiry with engineering practice to advance AI safety, with a particular focus on risks posed by powerful future systems. You will collaborate with teams across the company, contributing to research that keeps AI helpful, honest, and harmless.