Learning Commons aims to scale proven teaching and learning practices to benefit every learner by building AI infrastructure that better connects the way students learn to the tools they learn with.

The Team

At Learning Commons, we operate at the intersection of technology, research, and philanthropy. We pair product development with grantmaking to scale proven teaching and learning practices for the benefit of every learner, and we aim to bring learning science into the tools educators and students use every day. Our work is grounded in a deep belief: when technology reflects the realities of classrooms and the science of how students learn, it can meaningfully strengthen teaching and unlock new possibilities for students.

The rise of generative AI offers a once-in-a-generation opportunity to dramatically accelerate the translation of research insights into practical, classroom-ready tools: tools that honor teachers’ expertise, adapt to students’ needs, and make effective learning practices easier to access, implement, and sustain.

In today’s fragmented edtech landscape, school districts are often left piecing together products that don’t always align with curricula or instructional needs. While AI holds enormous potential to support teachers and students, it can only deliver on that promise when grounded in research, high-quality educational data, and expert evaluation. That’s why we’re building open, public-purpose infrastructure (datasets, rubrics, and resources) that helps raise the standard for educational tools and create more consistent, impactful learning experiences for all students and teachers.

The Opportunity

Learning Commons aims to scale proven learning science practices through AI-powered tools, datasets, and evaluation frameworks. As part of the Evaluators team, you will play a critical role in ensuring that AI and education products are grounded in rigorous, research-backed evaluation.
You will define and operationalize evaluation frameworks for AI-enabled learning tools, develop metrics and methodologies to assess quality and impact, and generate insights that inform product, research, and ecosystem decisions. This includes evaluating model performance, alignment to pedagogy, and real-world effectiveness in classrooms. You will partner closely with Product, Engineering, Learning Science, and external researchers to ensure that evaluation is embedded throughout the product lifecycle, from early experimentation to scaled deployment. This role sits at the intersection of data science, learning science, and AI system evaluation.
Job Type: Full-time
Career Level: Senior
Education Level: No Education Listed