AI Engineer - Evaluations

Health GPT Inc
Palo Alto, CA
57d · Onsite

About The Position

As an AI Engineer - Evaluations at Hippocratic AI, you'll define and build the systems that measure, validate, and improve the intelligence, safety, and empathy of our voice-based generative healthcare agents. Evaluation sits at the heart of our model improvement loop - it informs architecture choices, training priorities, and launch decisions for every patient-facing agent.

You'll design LLM-based auto-evaluators, agent harnesses, and feedback pipelines that ensure each model interaction is clinically safe, contextually aware, and grounded in healthcare best practices. You'll collaborate closely with research, product, and clinical teams, working across the stack - from backend data pipelines and evaluation frameworks to tooling that surfaces insights for model iteration. Your work will directly shape how our agents behave, accelerating both their reliability and their real-world impact.

Requirements

  • 3+ years of software or ML engineering experience with a track record of shipping production systems end-to-end.
  • Proficiency in Python and experience building data pipelines, evaluation frameworks, or ML infrastructure.
  • Familiarity with LLM evaluation techniques - including prompt testing, multi-agent workflows, and tool-using systems.
  • Understanding of deep learning fundamentals and how offline datasets, evaluation data, and experiments drive model reliability.
  • Excellent communication skills with the ability to partner effectively across engineering, research, and clinical domains.
  • Passion for safety, quality, and real-world impact in AI-driven healthcare products.

Nice To Haves

  • Experience developing agent harnesses or simulation environments for model testing.
  • Background in AI safety, healthcare QA, or human-feedback evaluation (e.g., RLHF).
  • Familiarity with reinforcement learning, retrieval-augmented evaluation, or long-context model testing.

Responsibilities

  • Design and build evaluation frameworks and harnesses that measure the performance, safety, and trustworthiness of Hippocratic AI's generative voice agents.
  • Prototype and deploy LLM-based evaluators to assess reasoning quality, empathy, factual correctness, and adherence to clinical safety standards.
  • Build feedback pipelines that connect evaluation signals directly to model improvement and retraining loops.
  • Partner with AI researchers and product teams to turn qualitative gaps into clear, defensible, and reproducible metrics.
  • Develop reusable systems and tooling that enable contributions from across the company, steadily raising the quality bar for model behavior and user experience.

What This Job Offers

Job Type

Full-time

Career Level

Mid Level

Education Level

No Education Listed

Number of Employees

101-250 employees