Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

As a Prompt Engineer on the Claude Code team, you'll own Claude's behaviors specifically within Claude Code — ensuring users get a consistent, safe, and high-quality experience as we ship new models and evolve the product.

This is a highly specialized role sitting at the intersection of model behavior and product quality. You'll be the expert on how Claude behaves inside Claude Code, owning and maintaining the system prompts that ship with each new model snapshot. When a new model drops, you're the person making sure Claude Code feels right within days — not weeks. You'll work closely with Model Quality and Research to understand emergent behaviors and behavioral regressions, and with product and safeguards teams to respond quickly when something goes wrong.

This role requires someone who can move fast on behavioral tuning while maintaining rigor, and who cares deeply about the end-to-end developer experience Claude Code delivers. You'll need strong prompting skills, excellent judgment about model behaviors, and the collaborative skills to work across product, safeguards, and research teams.

Salary: $320,000–405,000 (SWE-G 5-6)
Job Type
Full-time
Career Level
Mid Level