Machine Learning Engineer - Model Evaluations, Public Sector

Scale AI, Inc., San Francisco, CA
$187,000 - $300,000

About The Position

The Public Sector ML team at Scale deploys advanced AI systems, including LLMs, agentic models, and multimodal pipelines, into mission-critical government environments. We build evaluation frameworks that ensure these models operate reliably, safely, and effectively under real-world constraints. As an ML Engineer, you will design, implement, and scale automated evaluation pipelines that help customers trust and operationalize advanced AI systems across defense, intelligence, and federal missions.

Requirements

  • Experience in computer vision, deep learning, reinforcement learning, or NLP in production settings.
  • Strong programming skills in Python; experience with TensorFlow or PyTorch.
  • Background in algorithms, data structures, and object-oriented programming.
  • Experience with LLM pipelines, simulation environments, or automated evaluation systems.
  • Ability to convert research insights into measurable evaluation criteria.
  • This role requires an active security clearance or the ability to obtain one.

Nice To Haves

  • Graduate degree in CS, ML, or AI.
  • Cloud experience (AWS, GCP) and model deployment experience.
  • Experience with LLM evaluation, CV robustness, or RL validation.
  • Knowledge of interpretability, adversarial robustness, or AI safety frameworks.
  • Familiarity with ML evaluation frameworks and agentic model design.
  • Experience in regulated, classified, or mission-critical ML domains.

Responsibilities

  • Develop and maintain automated evaluation pipelines for ML models across functional, performance, robustness, and safety metrics, including LLM-judge-based evaluations.
  • Design test datasets and benchmarks to measure generalization, bias, explainability, and failure modes.
  • Build evaluation frameworks for LLM agents, including infrastructure for scenario-based and environment-based testing.
  • Conduct comparative analyses of model architectures, training procedures, and evaluation outcomes.
  • Implement tools for continuous monitoring, regression testing, and quality assurance for ML systems.
  • Design and execute stress tests and red-teaming workflows to uncover vulnerabilities and edge cases.
  • Collaborate with operations teams and subject matter experts to produce high-quality evaluation datasets.
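To give a flavor of the first responsibility above, here is a minimal sketch of an LLM-judge-based evaluation harness. The `EvalCase`, `run_llm_judge_eval`, `model`, and `judge` names are hypothetical illustrations, not part of Scale's actual stack; in practice the judge would itself be an LLM call that scores a response against a reference.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class EvalCase:
    """A single evaluation item: an input prompt and a reference answer."""
    prompt: str
    reference: str


def run_llm_judge_eval(
    cases: List[EvalCase],
    model: Callable[[str], str],
    judge: Callable[[str, str, str], float],
    threshold: float = 0.7,
) -> Dict[str, object]:
    """Run each case through the model, score the response with a judge
    (e.g. an LLM grader returning a score in [0, 1]), and report the
    fraction of cases whose score meets the pass threshold."""
    scores = []
    for case in cases:
        response = model(case.prompt)
        scores.append(judge(case.prompt, case.reference, response))
    passed = sum(1 for s in scores if s >= threshold)
    return {"pass_rate": passed / len(scores), "scores": scores}
```

A real pipeline would add batching, retries, and score calibration, but the shape is the same: a dataset of cases, a system under test, and a judge that converts free-form responses into metrics.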

Benefits

  • Comprehensive health, dental, and vision coverage
  • Retirement benefits
  • A learning and development stipend
  • Generous PTO
  • This role may be eligible for additional benefits such as a commuter stipend.


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Industry: Computing Infrastructure Providers, Data Processing, Web Hosting, and Related Services
  • Number of Employees: 501-1,000
