Research Engineer

Mercor · San Francisco, CA (Onsite)

About The Position

As a Research Engineer at Mercor, you’ll work at the intersection of engineering and applied AI research. You’ll contribute directly to post-training and RLVR (reinforcement learning with verifiable rewards), synthetic data generation, and large-scale evaluation workflows that meaningfully impact frontier language models. Your work will be used to train large language models to master tool use, agentic behavior, and reasoning in real-world production environments. You’ll shape rewards, run post-training experiments, and build scalable systems that improve model performance. You’ll help design and evaluate datasets, create data augmentation pipelines that scale, and build rubrics and evaluators that push the boundaries of what LLMs can learn.

Requirements

  • Strong applied research background, with a focus on post-training and/or model evaluation.
  • Strong coding proficiency and hands-on experience working with machine learning models.
  • Strong understanding of data structures, algorithms, backend systems, and core engineering fundamentals.
  • Familiarity with APIs, SQL/NoSQL databases, and cloud platforms.
  • Ability to reason deeply about model behavior, experimental results, and data quality.
  • Excitement to work in person in San Francisco five days a week (with optional remote Saturdays) and to thrive in a high-intensity, high-ownership environment.

Nice To Haves

  • Industry experience on a post-training team (highest priority).
  • Publications at top-tier conferences (NeurIPS, ICML, ACL).
  • Experience training models or evaluating model performance.
  • Experience in synthetic data generation, LLM evaluations, or RL-style workflows.
  • Work samples, artifacts, or code repositories demonstrating relevant skills.

Responsibilities

  • Work on post-training and RLVR pipelines to understand how datasets, rewards, and training strategies impact model performance.
  • Design and run reward-shaping experiments and implement algorithmic improvements (e.g., GRPO, DAPO) to improve LLM tool use, agentic behavior, and real-world reasoning.
  • Quantify data usability, quality, and performance uplift on key benchmarks.
  • Build and maintain data generation and augmentation pipelines that scale with training needs.
  • Create and refine rubrics, evaluators, and scoring frameworks that guide training and evaluation decisions.
  • Build and operate LLM evaluation systems, benchmarks, and metrics at scale.
  • Collaborate closely with AI researchers, applied AI teams, and experts producing training data.
  • Operate in a fast-paced, experimental research environment with rapid iteration cycles and high ownership.

Benefits

  • Generous equity grant vested over 4 years
  • A $20K relocation bonus (if moving to the Bay Area)
  • A $10K housing bonus (if you live within 0.5 miles of our office)
  • A $1K monthly stipend for meals
  • Free Equinox membership
  • Health insurance

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: Not listed
  • Number of Employees: 251-500
