At Snorkel, we believe meaningful AI doesn’t start with the model; it starts with the data. We’re on a mission to help enterprises transform expert knowledge into specialized AI at scale. The AI landscape has gone through incredible changes between 2015, when Snorkel started as a research project in the Stanford AI Lab, and the generative AI breakthroughs of today. But one thing has remained constant: the data you use to build AI is the key to achieving differentiation, high performance, and production-ready systems. We work with some of the world’s largest organizations to empower scientists, engineers, financial experts, product creators, journalists, and more to build custom AI with their data faster than ever before. Excited to help us redefine how AI is built? Apply to be the newest Snorkeler!

As a Software Engineer on the Evaluation Engineering team, you'll build systems that power large-scale AI workloads for top-tier AI research labs. You’ll work closely with other engineers, product managers, and field team members to ensure that Snorkel’s frontier AI datasets meet and surpass the capabilities of the most advanced foundation models. You will launch agents into production to accomplish long-running reasoning and verification tasks. The Evaluation Engineering team owns critical systems for building high-quality post-training and benchmark datasets, integrating with the latest foundation model technology to push the frontier of models used globally across coding, mathematics, law, medicine, and other advanced domains.
Job Type: Full-time
Career Level: Mid Level
Education Level: Not specified
Number of Employees: 101-250 employees