About The Position

At Grafana, we build observability tools that help users understand, respond to, and improve their systems – regardless of scale, complexity, or tech stack. The Grafana AI teams play a key role in this mission by helping users make sense of complex observability data through AI-driven features. These capabilities reduce toil, lower the barrier to domain expertise, and surface meaningful signals from noisy environments.

We are looking for an experienced engineer with expertise in evaluating Generative AI systems, particularly Large Language Models (LLMs), to help us build and evolve our internal evaluation frameworks and/or integrate existing best-of-breed tools. This role involves designing and scaling automated evaluation pipelines, integrating them into CI/CD workflows, and defining metrics that reflect both product goals and model behavior. As the team matures, there is a broad opportunity to expand or redefine this role based on impact and initiative.

Requirements

  • Experience designing and implementing evaluation frameworks for AI/ML systems.
  • Familiarity with prompt engineering, structured output evaluation, and context-window management in LLM systems.
  • The autonomy to collaborate across teams and translate their goals into clear, testable criteria backed by effective tooling.

Nice To Haves

  • Experience working in environments with rapid iteration and experimental development.
  • A pragmatic mindset that values reproducibility, developer experience, and thoughtful trade-offs when scaling GenAI systems.
  • A passion for minimizing human toil and building AI systems that actively support engineers.

Responsibilities

  • Design and implement robust evaluation frameworks for GenAI and LLM-based systems, including golden test sets, regression tracking, LLM-as-judge methods, and structured output verification (see the illustrative sketch after this list).
  • Develop tooling to enable automated, low-friction evaluation of model outputs, prompts, and agent behaviors.
  • Define and refine metrics that capture both structural correctness and semantic quality, ensuring alignment with realistic use cases and operational constraints.
  • Lead the development of dataset management processes and guide teams across Grafana in best practices for GenAI evaluation.
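
For illustration only (not part of the role description): a minimal sketch of what an LLM-as-judge check combined with structured output verification can look like. Python, the `call_llm` callable, the judge prompt, and the example data are all hypothetical stand-ins, not a description of Grafana's actual tooling.

```python
# Illustrative sketch only: LLM-as-judge scoring plus structured-output verification.
# `call_llm` is a hypothetical stand-in for whatever model client is in use.
import json
from typing import Callable

JUDGE_PROMPT = (
    "You are grading an assistant's answer.\n"
    "Question: {question}\n"
    "Reference answer: {reference}\n"
    "Candidate answer: {candidate}\n"
    'Reply with JSON only: {{"score": <0-5 integer>, "reason": "<one sentence>"}}'
)


def verify_structure(raw: str, required_keys: set[str]) -> dict | None:
    """Structured-output verification: parse JSON and check required keys are present."""
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if isinstance(parsed, dict) and required_keys <= parsed.keys():
        return parsed
    return None


def judge_answer(call_llm: Callable[[str], str], question: str,
                 reference: str, candidate: str) -> dict:
    """LLM-as-judge: ask a model to score a candidate answer against a golden reference."""
    raw = call_llm(JUDGE_PROMPT.format(
        question=question, reference=reference, candidate=candidate))
    verdict = verify_structure(raw, {"score", "reason"})
    # Treat malformed judge output as a scored failure rather than crashing the pipeline.
    return verdict or {"score": 0, "reason": "judge output failed structure check"}


if __name__ == "__main__":
    # Stubbed judge so the sketch runs without any model dependency.
    fake_llm = lambda prompt: '{"score": 4, "reason": "Matches the reference closely."}'
    print(judge_answer(
        fake_llm,
        question="What does Grafana Loki index?",
        reference="Only labels/metadata, not the log content itself.",
        candidate="Loki indexes labels rather than full log lines.",
    ))
```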

Benefits

  • Equity in the form of Restricted Stock Units (RSUs)
  • Bonus (if applicable)
  • Global annual leave policy of 30 days per year, 3 of which are reserved for Grafana Shutdown Days


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Number of Employees: 1,001-5,000
