In this role, you will lead the development of machine learning systems that detect fraud, abuse, and trust violations across Scale’s contributor platform. As a core part of our Generative AI data engine, these systems are critical to ensuring the quality, safety, and reliability of the data used to train and evaluate frontier models. You will build scalable ML services that analyze behavioral and content signals, combining classical models with advanced LLM-based techniques. This is a high-impact, product-focused role in which you’ll collaborate with engineering, product, and operations teams to proactively surface misuse, defend against adversarial behavior, and safeguard the long-term health of our human-in-the-loop data workflows.