About The Position

As a Research Engineer at Mercor, you’ll work at the intersection of engineering and applied AI research. You’ll own benchmarking pipelines, evaluation systems, and failure analysis workflows that directly inform how we train and improve frontier language models. Your work will define how we measure tool use, agentic behavior, and real-world reasoning. You’ll design and run evals, build rubrics and scorers, and turn failure analysis into actionable improvements for post-training, RLVR, and data pipelines.

Requirements

  • Strong applied research background, with a focus on model evaluation, benchmarking, and/or failure analysis.
  • Strong coding skills and hands-on experience with ML models and evaluation code.
  • Solid grasp of data structures, algorithms, and backend systems.
  • Comfort with APIs, SQL/NoSQL, and cloud platforms for running and storing eval results.
  • Ability to reason about model behavior, experimental results, and data quality from evals and failure analyses.
  • Excitement to work in person in San Francisco five days a week in a high-intensity, high-ownership environment.

Nice To Haves

  • Industry experience on a post-training or evaluation/benchmarking team (highest priority).
  • Publications at top-tier venues (NeurIPS, ICML, ACL), especially in evaluation or benchmarking.
  • Experience building or running LLM evaluations, benchmarks, or failure-analysis pipelines.
  • Experience with synthetic data generation, rubric design, or RL-style workflows that use evals for reward shaping.
  • Work samples or code (e.g., eval frameworks, benchmark suites, failure-analysis reports or tooling) that demonstrate relevant skills.

Responsibilities

  • Benchmarking: Design, implement, and maintain benchmarks and metrics for tool use, agentic behavior, and real-world reasoning; ensure benchmarks scale with training and stay aligned with product and research goals.
  • Evaluation systems: Build and operate LLM evaluation systems end to end (runs, scoring, dashboards, and reporting) so researchers and applied AI teams can track model performance and compare runs at scale.
  • Failure analysis: Run systematic failure analysis on model outputs (e.g., wrong tool use, reasoning errors, safety/alignment issues); categorize failure modes, quantify prevalence, and feed findings into reward design, data curation, and benchmark design.
  • Rubrics and evaluators: Create and refine rubrics, automated evaluators, and scoring frameworks that drive training and evaluation decisions; balance rigor with scalability (human vs. model-as-judge, calibration, agreement).
  • Data quality and usability: Quantify data usability, quality, and impact on key benchmarks; use evals and failure analysis to guide data generation, augmentation, and curation.
  • Cross-team collaboration: Work with AI researchers, applied AI teams, and data producers to align evals with training objectives and to prioritize benchmarks and failure analyses that matter most.
  • Ownership in a fast-paced environment: Operate in a high-iteration research setting with strong ownership of benchmarks, evals, and failure-analysis workflows.

Benefits

  • Generous equity grant vesting over 4 years
  • A $10K housing bonus (if you live within 0.5 miles of our office)
  • A $1.5K monthly stipend for meals
  • Free Equinox membership
  • Health insurance