Machine Learning Engineer, Core Evaluations

Cantina
San Francisco, CA

About The Position

Cantina Labs is a social AI company developing a suite of advanced real-time models that push the boundaries of expression, personality, and realism. We bring characters to life, transforming how people tell stories, connect, and create. We build and power ecosystems. Cantina, our flagship social AI platform, is just the beginning. If you're excited about the potential AI has to shape human creativity and social interactions, join us in building the future!

We are seeking an experienced Machine Learning Engineer (MLE) to focus on audio model evaluation, specifically for speech generation and recognition models. This role involves designing and developing comprehensive model evaluation pipelines for both development and production environments, as well as creating automated dashboards for reporting evaluation results. As the founding member of our evaluation team, the ideal candidate is expected to leverage their experience to lead our evaluation efforts and play a key role in the future growth of the evaluation team.

Requirements

  • Strong experience and intuition for designing metrics that capture model performance.
  • Strong experience with designing user studies on Mechanical Turk or similar platforms.
  • Strong experience training and fine-tuning models for evaluation purposes.
  • Strong statistical knowledge and experience comparing evaluation results rigorously and making decisions based on them.
  • Very strong engineering and programming skills.
  • Experience training ASR and TTS models.
  • Experience on ML teams working on large-scale machine learning problems (>3B-parameter models with >1M hours of data).

Responsibilities

  • Designing model evaluation pipelines for models in development and production.
  • Designing user studies for subjective model evaluations.
  • Converting requirements into measurable metrics.
  • Designing and developing an automated evaluation dashboard to visualize model performance and compare results.
  • Training new models to capture new and different evaluation metrics.
  • Communicating with the model team to help design better models based on the evaluation results.
  • Communicating with the data team to help decide the type of data necessary to improve model performance.
  • Communicating with the product manager to make sure product requirements are correctly measured.
  • Helping grow the evaluation team as its founding member.
  • Leading the evaluation team in the future.

What This Job Offers

Job Type

Full-time

Career Level

Senior

Education Level

No Education Listed

Number of Employees

11-50 employees
