Staff Applied Researcher, AI Quality

GitHub, Inc.
Remote

About The Position

At GitHub, we’re building the next generation of AI‑powered developer experiences. We’re looking for a Staff Applied Researcher with deep expertise in Large Language Model (LLM) evaluation and LLM agents, strong engineering instincts, and a bias for action to help shape the future of GitHub Copilot and our AI platform. This is a high‑impact role where you will design evaluation systems that directly influence how millions of developers experience AI every day.

Requirements

  • One of the following:
      • Bachelor's degree in Data Science, Mathematics, Physics, Statistics, Economics, Operations Research, Computer Science, or a related field AND 8+ years' experience in data science (e.g., managing structured and unstructured data, applying statistical techniques) or a related field; OR
      • Master's degree in one of the above fields AND 6+ years' experience in data science or a related field; OR
      • Doctorate in one of the above fields AND 4+ years' experience in data science or a related field; OR
      • Equivalent experience.
  • 3+ years of strong engineering experience in Python/TypeScript, including building production‑grade evaluation or data/ML pipelines at scale.
  • Proven track record shipping research or evaluation systems in production environments.
  • Strong cross‑functional communication and influence skills.

Nice To Haves

  • Experience with LLM judge systems, reward modeling, alignment, or safety evaluations.
  • Background in code generation, developer tools, or AI‑assisted programming.
  • Experience with large‑scale experimentation and online/offline evaluation strategies.
  • Open‑source contributions or experience working with developer communities.
  • Experience designing and leading complex research projects from ideation to implementation.
  • Ability to define and articulate data‑driven strategies that consider cross‑functional impacts and align with organizational priorities, particularly in a software development platform context.

Responsibilities

  • Design next‑generation evaluation frameworks for code generation, reasoning, safety, multimodal tasks, and agentic workflows.
  • Develop scalable automatic metrics, LLM‑judge systems, reward models, and human‑in‑the‑loop evaluation pipelines.
  • Establish high‑signal, repeatable methodologies that influence product decisions across GitHub AI.
  • Build and optimize evaluation tooling, datasets, benchmarking systems, and experimentation pipelines.
  • Create and onboard new benchmarks targeting the hardest tasks for coding agents.
  • Collaborate closely with engineering teams to productionize research, validate improvements, and accelerate model iteration cycles.
  • Own end‑to‑end quality insights for the models behind GitHub Copilot and new AI features.
  • Work closely with product development, engineering, and design teams to integrate advanced research findings into practical applications, ensuring alignment with product goals and user needs.
  • Shape GitHub’s strategy for model quality, alignment, and evaluation.
  • Mentor other researchers and engineers, helping elevate technical standards across the organization.
  • Drive clarity in ambiguous problem spaces and champion fast, high‑quality execution.