Machine Learning Engineer - LLM Evals + Observability

Glean · Palo Alto, CA
Posted 6h ago · $200,000 – $300,000 · Hybrid

About The Position

Building a great AI assistant is only half the battle – knowing whether it's actually great is the other half. Our team owns the measurement and quality layer that makes Glean's Assistant and Agents reliably better over time: evaluation pipelines, quality evalsets, LLM-powered judges, agent observability, and the tooling engineers use to understand what changed and why. It's a rare combination of infrastructure engineering, applied ML, and direct product impact. If you care deeply about quality and want to build the systems that make it measurable, this role is for you.

Requirements

  • 2+ years of software engineering experience with strong coding skills.
  • Strong backend fundamentals in Go and Python; comfortable with distributed data pipelines.
  • Experience working with LLM evaluation, reinforcement learning from human feedback, natural language processing, or other large systems involving machine learning.
  • Analytically rigorous – you think carefully about what offline metrics actually predict about real user experience.
  • Thrive in a customer-focused, tight-knit, cross-functional environment – a team player willing to take on whatever is most impactful for the company.
  • You care about quality – not just in the systems you build, but in the product you're helping measure and improve.

Responsibilities

  • Design and curate evaluation datasets – sampling strategies, query diversity, and golden sets that give reliable, representative coverage of real assistant behavior (a sampling sketch follows this list).
  • Build and maintain large-scale evaluation pipelines that measure assistant quality across thousands of real user queries (see the pipeline sketch below).
  • Build LLM-powered judges that score metrics like correctness, completeness, and response quality, and align them against human judgment (see the judge and agreement sketches below).
  • Evaluate new models and product changes before they ship – providing the quality signal that gates launches and prevents regressions (a gating sketch follows).
  • Build observability infrastructure for AI agents: trace enrichment, data pipelines, and dashboards that make assistant behavior inspectable (see the trace-enrichment sketch below).
  • Close the loop between quality measurement and improvement using eval results, customer feedback, and techniques like automated prompt iteration to help drive concrete gains in assistant behavior (a prompt-iteration sketch follows).
  • Collaborate with engineers across the company to make evals a first-class part of how we ship.
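
For concreteness, here are a few minimal sketches of what this work can look like. Everything in them is an illustrative assumption – hypothetical schemas, names, and thresholds, not Glean's actual stack. First, stratified sampling for a golden set, assuming each query record carries a category field:

    import random
    from collections import defaultdict

    def sample_golden_set(queries, per_category=50, seed=42):
        """Stratified sample: equal coverage for every query category.

        `queries` is assumed to be dicts like {"text": ..., "category": ...};
        the schema is hypothetical, for illustration only.
        """
        rng = random.Random(seed)  # fixed seed keeps the evalset reproducible
        by_category = defaultdict(list)
        for q in queries:
            by_category[q["category"]].append(q)
        golden = []
        for _, items in sorted(by_category.items()):
            golden.extend(rng.sample(items, min(per_category, len(items))))
        return golden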
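
Next, the pipeline-and-judge pair. A sketch assuming an OpenAI-style chat-completions API and a made-up 1–5 rubric; `assistant` is whatever callable produces the answer being graded:

    import json
    from openai import OpenAI  # assumed judge backend; any chat API works

    client = OpenAI()

    JUDGE_PROMPT = """You are grading an AI assistant's answer.
    Question: {question}
    Answer: {answer}
    Grade correctness and completeness from 1-5.
    Return JSON: {{"correctness": ..., "completeness": ..., "rationale": "..."}}"""

    def judge(question, answer, model="gpt-4o"):
        """Score one response with an LLM judge; temperature 0 for stability."""
        resp = client.chat.completions.create(
            model=model,
            temperature=0,
            response_format={"type": "json_object"},
            messages=[{"role": "user",
                       "content": JUDGE_PROMPT.format(question=question,
                                                      answer=answer)}],
        )
        return json.loads(resp.choices[0].message.content)

    def run_eval(evalset, assistant):
        """Run every query through the assistant, judge it, and aggregate."""
        scores = [judge(q["text"], assistant(q["text"]))["correctness"]
                  for q in evalset]
        return sum(scores) / len(scores)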
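
"Align them against human judgment" usually means measuring agreement on a slice labeled by both the judge and humans. One common choice (an assumption here, not necessarily Glean's metric) is quadratic-weighted Cohen's kappa, which credits near-misses more than wild disagreements:

    from sklearn.metrics import cohen_kappa_score

    def judge_human_agreement(judge_scores, human_scores):
        """Chance-corrected agreement between judge and human 1-5 labels."""
        return cohen_kappa_score(judge_scores, human_scores,
                                 weights="quadratic")

    # usage: judge_human_agreement([5, 4, 2, 5], [5, 5, 2, 4])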
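
Gating a launch on eval results can start as a paired comparison against the baseline with a regression budget; the 2% budget below is invented for illustration, and a real gate would also test statistical significance:

    def launch_gate(baseline_scores, candidate_scores, max_regression=0.02):
        """Block a launch if mean quality drops more than the budget.

        Both lists are per-query scores over the same evalset,
        so the comparison is paired.
        """
        assert len(baseline_scores) == len(candidate_scores)
        base = sum(baseline_scores) / len(baseline_scores)
        cand = sum(candidate_scores) / len(candidate_scores)
        delta = cand - base
        return {"pass": delta >= -max_regression, "delta": delta}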
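
Trace enrichment in the observability bullet often means deriving dashboard-ready fields from raw agent spans. A sketch with a hypothetical span schema:

    from dataclasses import dataclass, field

    @dataclass
    class Span:
        name: str       # e.g. "retrieve", "tool_call", "generate"
        start_ms: int
        end_ms: int
        attrs: dict = field(default_factory=dict)

    def enrich(trace: list[Span]) -> dict:
        """Derive inspectable summary fields from one agent trace."""
        if not trace:
            return {"latency_ms": 0, "num_tool_calls": 0, "slowest_step": None}
        return {
            "latency_ms": trace[-1].end_ms - trace[0].start_ms,
            "num_tool_calls": sum(1 for s in trace if s.name == "tool_call"),
            "slowest_step": max(trace,
                                key=lambda s: s.end_ms - s.start_ms).name,
        }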
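
Finally, automated prompt iteration can be as simple as hill-climbing on eval score; `propose` and `evaluate` here are injected stand-ins for an LLM rewriter and the pipeline sketched above:

    def iterate_prompt(base_prompt, propose, evaluate, rounds=5):
        """Keep a prompt variant only if it improves the eval score.

        `propose(prompt)` returns a candidate rewrite;
        `evaluate(prompt)` runs the evalset and returns a mean score.
        """
        best, best_score = base_prompt, evaluate(base_prompt)
        for _ in range(rounds):
            candidate = propose(best)
            score = evaluate(candidate)
            if score > best_score:
                best, best_score = candidate, score
        return best, best_score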

Benefits

  • We offer a comprehensive benefits package including competitive compensation, Medical, Vision, and Dental coverage, generous time-off policy, and the opportunity to contribute to your 401k plan to support your long-term goals.
  • When you join, you'll receive a home office improvement stipend, as well as annual education and wellness stipends to support your growth and wellbeing.
  • We foster a vibrant company culture through regular events, and provide healthy lunches daily to keep you fueled and focused.