Applied MLE Evaluations

Health GPT Inc · Palo Alto, CA · Onsite

About The Position

As an Applied Machine Learning Engineer - Evaluations at Hippocratic AI, you'll be at the core of how we measure, understand, and improve our voice-based generative AI healthcare agents. Your work will translate complex, qualitative notions of empathy, safety, and accuracy into quantitative evaluation signals that guide model iteration and deployment. You'll design and implement evaluation harnesses, analysis tools, and visualization systems for multimodal agents that use language, reasoning, and speech. Partnering closely with research, product, and clinical teams, you'll ensure every model update is grounded in data, validated against real-world scenarios, and continuously improving in both intelligence and bedside manner. This is a hands-on, experimental role for ML engineers who care deeply about quality, safety, and user experience, and who thrive at the intersection of research and product.

Requirements

  • 4+ years of experience in applied ML, ML engineering, or AI evaluation, with a focus on building and analyzing model pipelines.
  • Strong skills in Python, with experience in data processing, experiment tracking, and model analysis frameworks (e.g., Weights & Biases, MLflow, Pandas).
  • Familiarity with LLM evaluation methods, speech-to-text/text-to-speech models, or multimodal systems.
  • Understanding of prompt engineering, model fine-tuning, and retrieval-augmented generation (RAG) techniques.
  • Comfortable collaborating with cross-functional partners across research, product, and design teams.
  • Deep interest in AI safety, healthcare reliability, and creating measurable systems for model quality.

Nice To Haves

  • Experience building human-in-the-loop evaluation systems or UX research tooling.
  • Knowledge of visualization frameworks (e.g., Streamlit, Dash, React) for experiment inspection.
  • Familiarity with speech or multimodal model evaluation, including latency, comprehension, and conversational flow metrics.

Responsibilities

  • Design and implement evaluation harnesses for multimodal agent tasks, spanning speech, text, reasoning, and interaction flows.
  • Build interactive visualization and analysis tools that help engineers, researchers, and clinicians inspect model and UX performance.
  • Define, automate, and maintain continuous evaluation pipelines, ensuring regressions are caught early and model releases improve real-world quality.
  • Collaborate with product and clinical teams to translate qualitative healthcare goals (e.g., empathy, clarity, compliance) into measurable metrics.
  • Analyze evaluation data to uncover trends, propose improvements, and support iterative model tuning and fine-tuning.
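To make the responsibilities above concrete, here is a minimal sketch of what an evaluation harness with regression checking might look like. All names (`EvalCase`, `empathy_proxy`, `run_harness`, `check_regressions`) and the keyword-matching metrics are illustrative assumptions, not the company's actual tooling; real metrics for empathy or clarity would be far more sophisticated (e.g., rubric-based human or LLM judging).

```python
from dataclasses import dataclass


@dataclass
class EvalCase:
    """One recorded agent interaction to be scored."""
    prompt: str
    response: str


def empathy_proxy(case: EvalCase) -> float:
    """Toy empathy signal: does the response contain an empathic marker?"""
    markers = ("understand", "sorry", "here to help")
    return float(any(m in case.response.lower() for m in markers))


def brevity_proxy(case: EvalCase) -> float:
    """Toy clarity signal: 1.0 if the response stays under 40 words."""
    return 1.0 if len(case.response.split()) <= 40 else 0.0


def run_harness(cases, metrics):
    """Average each named metric over all cases."""
    return {
        name: sum(fn(c) for c in cases) / len(cases)
        for name, fn in metrics.items()
    }


def check_regressions(current, baseline, tolerance=0.05):
    """Return metric names that dropped below baseline by more than tolerance."""
    return [
        name for name, value in current.items()
        if value + tolerance < baseline.get(name, 0.0)
    ]


cases = [
    EvalCase("How are my results?", "I understand this is stressful; I'm here to help."),
    EvalCase("What should I take?", "Take two tablets with water."),
]
scores = run_harness(cases, {"empathy": empathy_proxy, "brevity": brevity_proxy})
regressions = check_regressions(scores, baseline={"empathy": 0.9, "brevity": 1.0})
```

In a continuous pipeline, `check_regressions` would gate a model release: a non-empty list blocks promotion and routes the failing metric to a human reviewer.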


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: No Education Listed
  • Number of Employees: 101-250 employees

© 2024 Teal Labs, Inc