At Variance, we are teaching machines to make the hardest judgment calls at scale. We build AI agents for the high-precision gray area of stopping fraud, scams, and abuse. This isn't another sales tool or customer service system: we're solving real problems in investigations and fraud prevention to protect innocent people from harm.

We're a small, talent-dense team in San Francisco working on a problem at the edge of what AI systems can reliably do: making good decisions in messy, adversarial, real-world environments.

We're looking for a Research Engineer to help push that frontier forward. You'll design evals, study failures, build new research loops, and turn research ideas into production capabilities. This role sits at the intersection of research and engineering: part model builder, part experimentalist, part systems engineer.

You're a fit if you:
- Care deeply about protecting people from fraud, scams, and abuse
- Have strong opinions about model quality, evaluation, and experimental rigor
- Want to work on core model and agent behavior
- Are excited to train, fine-tune, and improve models for hard real-world judgment tasks
- Think in tight research loops: hypothesis, experiment, evaluation, failure analysis, iteration
- Thrive in ambiguous, fast-moving environments where the path is not obvious and the feedback loop is short
- Are motivated by the challenge of making AI systems work in adversarial, regulated, and high-consequence settings
- Want to help define what trustworthy AI means in real-world use cases
Job Type: Full-time
Career Level: Mid Level
Education Level: No Education Listed