Member of Technical Staff, Model Evaluation

Inception · San Francisco, CA

About The Position

Inception creates the world’s fastest, most efficient AI models. Our Mercury model is the world’s fastest reasoning LLM and the first commercially available diffusion LLM, delivering 5x the speed and efficiency of today’s LLMs with best-in-class quality. We are the AI researchers and engineers behind breakthrough technologies such as diffusion models, FlashAttention, and DPO. We are seeking experienced engineers and scientists to develop the evaluation metrics and systems that are key to advancing frontier LLM performance. You'll be instrumental in building our core product offerings while ensuring our models perform reliably at scale in production environments.

Requirements

  • BS/MS/PhD in Computer Science, Machine Learning, Statistics, or a related field (or equivalent experience).
  • At least 2 years of experience in ML evaluation, applied ML research, or a related engineering role.
  • Strong understanding of LLM fundamentals (e.g., autoregressive generation, instruction tuning, RLHF, in-context learning, and decoding strategies).
  • Proficiency in Python and ML frameworks such as PyTorch.
  • Experience designing and implementing evaluation metrics and benchmarks for generative models.
  • Solid foundation in statistics, experimental design, and hypothesis testing.
  • Experience with version control (Git) and containerization (Docker).
  • Excellent communication skills with the ability to distill complex evaluation results into actionable insights for technical and non-technical audiences.

Nice To Haves

  • Experience with human-in-the-loop evaluation systems (e.g., Likert-scale annotation, pairwise preference ranking, red-teaming).
  • Familiarity with LLM safety and alignment evaluation (toxicity, hallucination detection, factual grounding).
  • Knowledge of existing benchmark suites (e.g., MMLU, HumanEval, HELM, BIG-Bench) and their limitations.
  • Experience building evaluation infrastructure at scale using cloud platforms (AWS, GCP, Azure).
  • Familiarity with MLOps practices and CI/CD pipelines for model validation.
  • Experience with data engineering, large-scale data labeling, or synthetic data generation for evaluation purposes.
  • Exposure to LLM serving frameworks (vLLM, SGLang, TensorRT) and production monitoring.

Responsibilities

  • Design, develop, and maintain robust evaluation frameworks and benchmarks for measuring LLM performance across diverse tasks and domains.
  • Define and implement quantitative metrics that capture model quality, safety, and reliability, and that support regression detection.
  • Build scalable, automated evaluation pipelines that integrate into model training and deployment workflows.
  • Conduct rigorous statistical analysis of model outputs to identify failure modes, biases, and performance gaps.
  • Collaborate with research and training teams to establish evaluation-driven feedback loops that directly inform model improvements.
  • Develop human evaluation protocols and tooling to complement automated metrics where ground truth is ambiguous.
  • Partner with product and customer-facing teams to translate real-world use cases into meaningful evaluation criteria.

Benefits

  • Competitive salary and equity in a rapidly growing startup.
  • Access to the latest GPU hardware and cloud resources.
  • Flexible vacation and paid time off (PTO).
  • Health, dental, and vision insurance.
  • A collaborative and inclusive culture.

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: Ph.D. or professional degree
