Research Engineer, Evaluations

AssemblyAI
Posted 1 day ago · $210,000 - $260,000 · Remote

About The Position

We are looking for a Senior Research Engineer to join our streaming speech-to-text research team—a new role that sits at the intersection of research, product, and engineering. You'll be the person who makes sure we're measuring the right things: benchmarking against the right competitors, building and extending evaluation tooling, and translating customer pain points into quantifiable research targets. You'll own the evaluation infrastructure that tells us whether our models are actually better—and by how much.

This role is ideal for someone with a Machine Learning / Research Engineering background who is obsessed with understanding what customers actually need, and who gets satisfaction from turning vague feedback ("the model feels slow") into concrete metrics the whole team can align around. You're comfortable talking to customer-facing teams one hour, designing a new evaluation framework the next, and then convincing researchers why it matters.

You'll also operate at the frontier of the voice agent ecosystem. Our streaming product integrates with orchestration frameworks like LiveKit, Pipecat, and Vapi, and you'll need to understand how ASR fits into the broader voice agent stack—alongside VAD, turn detection, TTS, and LLM components. As this stack evolves rapidly, you'll help ensure our evaluations reflect real-world integration scenarios.

You'll work directly with our research and engineering teams and become the connective tissue between what customers need and what researchers build. If you're entrepreneurial, rigorous about measurement, and want to have an outsized impact on the success of a rapidly growing product, this is your role.

Requirements

  • ML fundamentals: You understand how ML models are trained and evaluated well enough to interpret results and debug issues. You don't need to train them from scratch.
  • Strong Python skills: You can write clean evaluation scripts, work with data pipelines, and are comfortable with SQL and cloud infrastructure.
  • Metric intuition: You understand what makes a good evaluation metric, when to use relative vs. absolute improvements, and how to ensure statistical rigor.
  • Voice agent stack familiarity: You understand how the components of a voice agent system interact—VAD, ASR, turn detection, LLM, TTS—and can reason about how changes in one affect the others.
  • Tinkerer mentality: You'd rather ship something rough and iterate than spend weeks perfecting it. You're energized by variety.
  • Communication skills: You can explain technical results to researchers, summarize findings for leadership, and translate customer feedback into requirements.
  • Ownership mindset: You don't wait to be told what to evaluate. You see gaps and fill them.
  • Time zone overlap: You can work at least 3-4 hours overlapping with the Eastern US time zone.

Nice To Haves

  • Experience with speech/audio ML or real-time systems
  • Hands-on experience with voice agent orchestrators (LiveKit, Pipecat, Vapi, or similar)
  • Familiarity with standard ML evaluation practices and benchmarks
  • Experience working with customer-facing or product teams
  • Background in QA, data science, or applied ML roles

Responsibilities

  • Own end-to-end and integration-level model evaluation across accuracy, latency, and feature-specific metrics (e.g., turn detection latency, endpointing accuracy)
  • Build and maintain competitive benchmarking pipelines against other providers in the market
  • Design and run systematic experiments to measure the impact of model changes
  • Onboard, curate, and maintain evaluation datasets—both public benchmarks and internal test sets
  • Create evaluation subsets that stress-test specific capabilities and edge cases
  • Define evaluation metrics that capture real-world performance
  • Translate qualitative customer feedback into quantifiable evaluation criteria
  • Work with customer-facing teams to understand pain points and convert them into research priorities
  • Reduce friction for researchers by maintaining clean evaluation pipelines and clear documentation
  • Identify evaluation gaps proactively and propose solutions
  • Move fast—iterate on benchmarking approaches weekly, not monthly

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: No Education Listed
  • Number of Employees: 11-50 employees
