We’re looking for someone who can design realistic and structured evaluation scenarios for LLM-based agents. You’ll create test cases that simulate human-performed tasks and define gold-standard behavior to compare agent actions against. You’ll work to ensure each scenario is clearly defined, well-scored, and easy to execute and reuse. You’ll need a sharp analytical mindset, attention to detail, and an interest in how AI agents make decisions.

Although every project is unique, you might typically:

- Create structured test cases that simulate complex human workflows.
- Define gold-standard behavior and scoring logic to evaluate agent actions (a rough sketch follows this list).
- Analyze agent logs, failure modes, and decision paths.
- Work with code repositories and test frameworks to validate your scenarios.
- Iterate on prompts, instructions, and test cases to improve clarity and difficulty.
- Ensure that scenarios are production-ready, easy to run, and reusable.
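To make "gold-standard behavior and scoring logic" concrete, here is a minimal, hypothetical sketch of what one such test case could look like. The `Scenario` dataclass, the `score` function, and the ordered-recall scoring rule are illustrative assumptions, not a description of any particular framework you would be required to use.

```python
# Hypothetical sketch: one way to structure an evaluation scenario with
# gold-standard actions and a simple scoring rule. All names are illustrative.
from dataclasses import dataclass


@dataclass
class Scenario:
    """A structured test case that simulates a human-performed task."""
    name: str
    instructions: str            # the task prompt given to the agent
    gold_actions: list[str]      # ordered gold-standard behavior
    pass_threshold: float = 0.8  # fraction of gold actions required to pass


def score(scenario: Scenario, agent_actions: list[str]) -> dict:
    """Compare the agent's action trace against the gold standard, in order."""
    matched = 0
    cursor = 0
    for gold in scenario.gold_actions:
        # Credit a gold action only if it appears after the previously matched
        # one, so the agent must also follow the expected ordering.
        try:
            cursor = agent_actions.index(gold, cursor) + 1
            matched += 1
        except ValueError:
            continue
    recall = matched / len(scenario.gold_actions) if scenario.gold_actions else 1.0
    return {
        "scenario": scenario.name,
        "recall": recall,
        "passed": recall >= scenario.pass_threshold,
    }


if __name__ == "__main__":
    demo = Scenario(
        name="refund_request",
        instructions="Process a customer refund for order #1234.",
        gold_actions=["lookup_order", "verify_eligibility", "issue_refund", "notify_customer"],
    )
    agent_trace = ["lookup_order", "issue_refund", "notify_customer"]
    # The agent skipped the eligibility check: recall 0.75, below the 0.8 threshold.
    print(score(demo, agent_trace))
```

In practice the scoring logic is usually richer than this (partial credit, forbidden actions, tool-call arguments), but the shape is the same: a clearly defined scenario, an explicit gold standard, and a scorer that anyone on the team can run and reuse.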