Staff Machine Learning Research Scientist, LLM Evals

Scale AI, Inc. · San Francisco, CA
Posted 48 days ago

About The Position

As the leading data and evaluation partner for frontier AI companies, Scale is dedicated to advancing the evaluation and benchmarking of large language models (LLMs). We are building industry-leading LLM evals, setting new standards for model performance assessment. Our mission is to develop rigorous, scalable, and fair evaluation methodologies that drive the next generation of AI capabilities. Our research teams work with the industry's leading AI labs to provide high-quality data and accelerate progress in GenAI research.

As a Staff Machine Learning Research Scientist on the LLM Evals team, you will lead the development of novel evaluation methodologies, metrics, and benchmarks that measure the capabilities and limitations of frontier LLMs. You will help define what "good" looks like in generative AI, driving research that informs both our internal roadmap and the broader research community. This role is critical for designing and executing a roadmap that defines best practices in data-driven AI development, accelerating the next generation of generative AI models in partnership with top foundation model labs.

Requirements

  • 5+ years of hands-on experience with large language models, NLP, and Transformer modeling, spanning both research and engineering development
  • A track record of landing major research impact in a fast-paced environment
  • Experience serving as a tech lead for a team of research scientists and research engineers
  • Excellent written and verbal communication skills
  • Published research in areas of machine learning at major conferences (NeurIPS, ICML, ICLR, ACL, EMNLP, CVPR, etc.) and/or journals
  • Previous experience in a customer-facing role

Responsibilities

  • Drive research on the effectiveness and limitations of existing LLM evaluation techniques.
  • Design and develop novel evaluation benchmarks for large language models, covering areas such as instruction following, factuality, robustness, and fairness.
  • Communicate, collaborate, and build relationships with clients and peer teams to facilitate cross-functional projects.
  • Collaborate with internal teams and external partners to refine metrics and create standardized evaluation protocols.
  • Implement scalable and reproducible evaluation pipelines using modern ML frameworks.
  • Publish research findings in top-tier AI conferences and contribute to open-source benchmarking initiatives.
  • Mentor and guide research scientists and engineers, providing technical leadership across cross-functional projects.
  • Stay deeply engaged with the ML research community, tracking emerging work and contributing to the advancement of LLM evaluation science.
  • Thrive in a high-energy, fast-paced startup environment and be ready to dedicate the time and effort needed to drive impactful results.

Benefits

  • Comprehensive health, dental, and vision coverage
  • Retirement benefits
  • A learning and development stipend
  • Generous PTO
  • A commuter stipend

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Industry: Computing Infrastructure Providers, Data Processing, Web Hosting, and Related Services
  • Education Level: None listed
  • Number of Employees: 501-1,000 employees
