Machine Learning Engineer, LLM Evals & Observability

Glean
Mountain View, CA
Hybrid

About The Position

Our team owns the measurement and quality layer that makes Glean's Assistant and Agents reliably better over time. This includes evaluation pipelines, quality eval-sets, LLM-powered judges, agent observability, and the tooling engineers use to understand what changed and why. It's a rare combination of infrastructure engineering, applied ML, and direct product impact. If you care deeply about quality and want to build the systems that make it measurable, this role is for you.

Requirements

  • 2+ years of software engineering experience with strong coding skills.
  • Strong backend fundamentals in Go and Python; comfortable with distributed data pipelines.
  • Experience working with LLM evaluation, reinforcement learning from human feedback, natural language processing, or other large systems involving machine learning.
  • Analytically rigorous – you think carefully about what offline metrics actually predict about real user experience.
  • Thrive in a customer-focused, tight-knit, and cross-functional environment – a team player willing to take on whatever is most impactful for the company.
  • You care about quality – not just in the systems you build, but in the product you're helping measure and improve.

Responsibilities

  • Design and curate evaluation datasets – sampling strategies, query diversity, and golden sets that give reliable, representative coverage of real assistant behavior.
  • Build and maintain large-scale evaluation pipelines that measure assistant quality across thousands of real user queries.
  • Build LLM-powered judges that score metrics like correctness, completeness, and response quality, and align them against human judgment.
  • Evaluate new models and product changes before they ship – providing the quality signal that gates launches and prevents regressions.
  • Build observability infrastructure for AI agents: trace enrichment, data pipelines, and dashboards that make assistant behavior inspectable.
  • Close the loop between quality measurement and improvement using eval results, customer feedback, and techniques like automated prompt iteration to help drive concrete gains in assistant behavior.
  • Collaborate with engineers across the company to make evals a first-class part of how we ship.
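The judge-alignment work described above can be illustrated with a small sketch. This is a hypothetical example, not Glean's actual tooling: it measures how well an LLM judge's pass/fail verdicts agree with human labels on a shared eval set, using Cohen's kappa to correct raw agreement for chance.

```python
# Minimal sketch (hypothetical data and function names) of checking an
# LLM judge's verdicts against human judgment on the same eval queries.

def cohens_kappa(judge, human):
    """Agreement between two binary raters, corrected for chance."""
    assert len(judge) == len(human) and judge
    n = len(judge)
    # Raw fraction of queries where judge and human agree.
    observed = sum(j == h for j, h in zip(judge, human)) / n
    # Chance agreement implied by each rater's marginal pass rate.
    p_judge = sum(judge) / n
    p_human = sum(human) / n
    expected = p_judge * p_human + (1 - p_judge) * (1 - p_human)
    if expected == 1.0:
        return 1.0
    return (observed - expected) / (1 - expected)

# Hypothetical verdicts on ten eval queries (1 = correct, 0 = incorrect).
judge_verdicts = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
human_labels   = [1, 1, 0, 1, 1, 1, 1, 0, 0, 1]

print(round(cohens_kappa(judge_verdicts, human_labels), 3))  # → 0.524
```

A kappa well above zero indicates the judge tracks human judgment beyond chance; in practice the judge prompt would be iterated until agreement on a held-out human-labeled set is acceptable.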

Benefits

  • Medical, Vision, and Dental coverage
  • Generous time-off policy
  • Opportunity to contribute to your 401(k) plan
  • Home office improvement stipend
  • Annual education and wellness stipends
  • Healthy lunches daily