About The Position

Please submit your CV in English and indicate your level of English proficiency.

Mindrift connects specialists with project-based AI opportunities for leading tech companies, focused on testing, evaluating, and improving AI systems. Participation is project-based, not permanent employment.

What this opportunity involves

While each project involves unique tasks, typical contributions are listed under Responsibilities below.

Requirements

  • 3+ years of software development experience with a strong focus on Python
  • Experience with Git and code repositories
  • Comfortable with structured formats like JSON/YAML for describing scenarios (see the sketch after this list)
  • Understanding of core LLM limitations (hallucinations, bias, context limits) and how these affect evaluation design
  • Familiarity with Docker
  • English proficiency - B2
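
For illustration only: a JSON/YAML-style scenario description might resemble the Python sketch below. The schema (id, goal, steps, expected_outcome) is a hypothetical assumption invented for this example; the posting does not specify an actual format.

    # Hypothetical scenario description; every field name below is an
    # illustrative assumption, not a schema prescribed by Mindrift.
    import json

    scenario = {
        "id": "booking-001",
        "goal": "Book a meeting room for four people",
        "steps": [
            {"action": "search_rooms", "params": {"capacity": 4}},
            {"action": "book_room", "params": {"room_id": "<from search result>"}},
            {"action": "confirm_booking", "params": {}},
        ],
        "expected_outcome": {"booking_confirmed": True},
    }

    # Serialize to JSON so the scenario can be versioned in Git
    # alongside the test framework that runs it.
    print(json.dumps(scenario, indent=2))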

Responsibilities

  • Create structured test cases that simulate complex human workflows
  • Define gold-standard behavior and scoring logic to evaluate agent actions (illustrated in the sketch after this list)
  • Analyze agent logs, failure modes, and decision paths
  • Work with code repositories and test frameworks to validate your scenarios
  • Iterate on prompts, instructions, and test cases to improve clarity and difficulty
  • Ensure that scenarios are production-ready, easy to run, and reusable
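
As a rough illustration of gold-standard scoring logic, the Python sketch below scores an agent run as the fraction of expected steps completed in order. The step names and the scoring rule are assumptions made up for this example, not the actual rubric used on any project.

    # A minimal scoring sketch: compare an agent's recorded actions
    # against a gold-standard sequence. Step names are hypothetical.
    def score_agent_run(agent_actions: list[str], gold_actions: list[str]) -> float:
        """Return the fraction of gold steps the agent completed, in order."""
        matched = 0
        remaining = iter(agent_actions)
        for gold_step in gold_actions:
            # Scan forward through the agent's log for the expected step;
            # sharing one iterator across gold steps enforces ordering.
            if any(action == gold_step for action in remaining):
                matched += 1
        return matched / len(gold_actions) if gold_actions else 0.0

    # Example: the agent skipped the confirmation step, so it scores 2/3.
    print(score_agent_run(
        ["search_rooms", "book_room"],
        ["search_rooms", "book_room", "confirm_booking"],
    ))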