About The Position

LMArena is seeking a Machine Learning Scientist to help advance how we evaluate and understand AI models. You’ll design and analyze experiments that use human preference signals to uncover what makes models useful, trustworthy, and capable, and your work will contribute to the scientific foundations of understanding AI at scale.

This role is deeply interdisciplinary. You’ll work closely with engineers, product teams, marketing, and the broader research community to develop new methods for comparing models, analyzing preference data, and disentangling performance factors like style, reasoning, and robustness. Your work will inform both the public leaderboard and the tools we provide to model developers (an illustrative sketch of this kind of preference analysis follows the list below). If you’re excited by open-ended questions, rigorous evaluation, and research grounded in real-world impact, you’ll find a meaningful home here.

We’re looking for:

  • Hands-on experience training large-scale models, including reward models, preference models, and fine-tuning LLMs with methods like RLHF, DPO, and contrastive learning
  • A strong foundation in ML and statistics, with a track record of designing novel training objectives, evaluation schemes, or statistical frameworks to improve model reliability and alignment
  • Fluency across the full experimental stack, from dataset design and large-batch training to rigorous evaluation and ablation, with an eye for what scales to production
  • A deeply collaborative mindset, working closely with engineers to productionize research insights and iterating with product teams to align modeling goals with user needs
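
One common way to turn pairwise human votes into a model ranking is a Bradley-Terry model. The minimal sketch below is purely illustrative: the model names and votes are made up, and it is not a description of LMArena's production pipeline.

```python
import numpy as np

# Hypothetical pairwise votes: (model_a, model_b, winner), winner is "a" or "b".
votes = [
    ("model-x", "model-y", "a"),
    ("model-y", "model-z", "b"),
    ("model-x", "model-z", "a"),
    ("model-y", "model-x", "a"),
]

models = sorted({m for a, b, _ in votes for m in (a, b)})
idx = {m: i for i, m in enumerate(models)}

# Bradley-Terry: P(a beats b) = sigmoid(theta_a - theta_b).
# Fit the strength parameters theta by gradient ascent on the log-likelihood.
theta = np.zeros(len(models))
lr = 0.1
for _ in range(2000):
    grad = np.zeros_like(theta)
    for a, b, winner in votes:
        i, j = idx[a], idx[b]
        p_a = 1.0 / (1.0 + np.exp(theta[j] - theta[i]))  # P(a beats b)
        y = 1.0 if winner == "a" else 0.0
        grad[i] += y - p_a
        grad[j] -= y - p_a
    theta += lr * grad
    theta -= theta.mean()  # only differences in theta are identified

# Report strengths on an Elo-like scale (base-10 logistic, 400-point spread, 1000 anchor).
for m in sorted(models, key=lambda name: -theta[idx[name]]):
    print(f"{m}: {1000 + 400 * theta[idx[m]] / np.log(10):.0f}")
```

The sketch treats each vote as an independent pairwise comparison; a real analysis would also handle ties, control for prompt and style effects, and attach uncertainty estimates to the ratings.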

Requirements

  • PhD or equivalent research experience in Machine Learning, Natural Language Processing, Statistics, or a related field
  • Strong understanding of LLMs and modern deep learning architectures (e.g., Transformers, diffusion models, reinforcement learning with human feedback)
  • Proficiency in Python and ML research libraries such as PyTorch, JAX, or TensorFlow
  • Demonstrated ability to design and analyze experiments with statistical rigor
  • Experience publishing research or working on open-source projects in ML, NLP, or AI evaluation
  • Comfortable working with real-world usage data and designing metrics beyond standard benchmarks
  • Ability to translate research questions into practical systems and collaborate across engineering and product teams
  • Passion for open science, reproducibility, and community-driven research

Responsibilities

  • Design and conduct experiments to evaluate AI model behavior across reasoning, style, robustness, and user preference dimensions
  • Develop new metrics, methodologies, and evaluation protocols that go beyond traditional benchmarks
  • Analyze large-scale human voting and interaction data to uncover insights into model performance and user preferences (a minimal example of this kind of analysis follows this list)
  • Collaborate with engineers to implement and scale research findings into production systems
  • Prototype and test research ideas rapidly, balancing rigor with iteration speed
  • Author internal reports and external publications that contribute to the broader ML research community
  • Partner with model providers to shape evaluation questions and support responsible model testing
  • Contribute to the scientific integrity and transparency of the LMArena leaderboard and tools
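
As a flavor of the statistical rigor this work demands, the sketch below puts a bootstrap confidence interval around a head-to-head win rate. The vote data are simulated for illustration, not real Arena data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical head-to-head outcomes between two models: 1 = model A wins, 0 = model B wins.
wins = rng.binomial(1, 0.55, size=1000)

# Nonparametric bootstrap: resample the votes with replacement and recompute the win rate.
n_boot = 10_000
boot_rates = np.array([
    rng.choice(wins, size=wins.size, replace=True).mean()
    for _ in range(n_boot)
])

low, high = np.percentile(boot_rates, [2.5, 97.5])
print(f"win rate = {wins.mean():.3f}, 95% bootstrap CI = [{low:.3f}, {high:.3f}]")
```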

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: Ph.D. or professional degree
  • Number of Employees: 11-50 employees
