Machine Learning Engineer, Assessments

Speak
San Francisco, CA

About The Position

We’re hiring an ML Engineer, Assessments to help build best-in-class assessment systems across multiple products (Speak for Business, B2C, and new surfaces). You will work in a tight loop with our Assessment Design Lead (Content/Learning Design), Machine Learning, Product, and Engineering to turn assessment constructs and rubrics into reliable, scalable scoring + feedback systems. This role owns the implementation, deployment, and ongoing quality of our assessment algorithms and ML systems. While there is an immediate need to improve and expand production assessments, this work also builds a platform capability that can be reused across the app.

Requirements

  • Domain expertise in spoken language proficiency assessment (linguistics, applied linguistics, pedagogy, or equivalent experience)
  • Strong experience designing and running evaluation + validation for assessment/scoring systems, and tailoring approaches to a specific product use case
  • 4+ years building automatic proficiency assessment systems (or equivalent depth in closely related scoring/evaluation domains)
  • Proven ability to ship ML models to production (not only research), including reliability, monitoring, and iteration
  • Strong generalist ML/analysis skills (statistics, Python, PyTorch/model training)
  • Ability to operate cross-functionally and communicate clearly with non-technical partners (Content/LD, PM, leadership)

Nice To Haves

  • Experience with speech/audio ML
  • Experience with psychometrics concepts (reliability/validity, calibration)
  • PhD is helpful but not required

Responsibilities

  • Ship and own assessment ML systems end-to-end
      ◦ Build, deploy, and maintain scoring models/pipelines (feature extraction → model training → inference → feedback generation)
      ◦ Own monitoring, regression tests, and ongoing iteration to maintain accuracy targets
  • Define and operationalize evaluation
      ◦ Implement validation/evaluation frameworks for assessments, including metrics, test sets, and offline/online analysis
      ◦ Translate assessment requirements into measurable acceptance criteria and guardrails
  • Partner deeply with the Assessment Design Lead
      ◦ Co-develop the strategy, together with the Content team, to grow assessments into a core platform at Speak
      ◦ Work in a tight weekly loop to deliver incremental improvement
  • Drive near-term delivery across products
      ◦ Stand up or improve summative assessments (spoken language ability) and bring them reliably to production
      ◦ Prototype and validate formative assessment approaches to measure improvement over weeks/months
  • Support data and labeling strategy
      ◦ Help define data needs for training/evaluation (including psychometric measurement needs)
      ◦ Build or improve pipelines that support label collection and analysis (especially for efficacy studies)