The future of AI, whether in training or evaluation, classical ML or agentic workflows, starts with high-quality data. At HumanSignal, we're building the platform that powers the creation, curation, and evaluation of that data. From fine-tuning foundation models to validating agent behaviors in production, our tools are used by leading AI teams to ensure models are grounded in real-world signal, not noise.

Our open-source product, Label Studio, has become the de facto standard for labeling and evaluating data across modalities, from text and images to time series and agents-in-environments. With over 250,000 users and hundreds of millions of labeled samples, it is the most widely adopted OSS solution for teams building AI systems. Label Studio Enterprise builds on that traction with the security, collaboration, and scalability features needed to support mission-critical AI pipelines, powering everything from model training datasets to eval test sets to continuous feedback loops.

We started before foundation models were mainstream, and we're doubling down now that AI is eating the world. If you're excited to help leading AI teams build smarter, more accurate systems, we'd love to talk.

In this role, you'll evaluate and rate graphic design elements on standardized quality scales to train AI models that assess design effectiveness, contributing to cutting-edge technology that advances how AI understands and evaluates visual content quality.
Job Type: Part-time
Career Level: Mid Level
Education Level: No Education Listed
Number of Employees: 51-100 employees