Information Specialist - Freelance AI Trainer Project

Invisible Agency
$10 - $30 · Remote

About The Position

We are sourcing independent Information Specialists to provide their expertise for an AI benchmark evaluation project. As AI models increasingly generate professional-grade data management strategies, information retrieval systems, and content architecture deliverables, their accuracy relies entirely on robust, expert-crafted training data. The objective of this project is to produce high-quality evaluation tasks, strong prompts, and clear, well-structured rubrics that yield clean, reliable data for model training.

Project Deliverables & Scope

You will operate autonomously to design complex evaluation frameworks and provide structured training data. Expected deliverables are detailed under Responsibilities below.

Requirements

  • Demonstrable professional expertise within the information science, library science, data governance, or knowledge management sectors, with a deep understanding of industry standards, metadata schemas, and search methodologies.
  • Strong writing and prompt generation skills, with the ability to design highly realistic, complex information retrieval task scenarios for AI evaluation.
  • Proficiency in rubric generation, specifically the ability to create objective, non-ambiguous scoring criteria that leave no room for subjective interpretation.
  • A meticulous, detail-oriented approach to fact-checking data architectures, search mechanics, and knowledge bases to generate reliable data for system benchmarking.
  • Access to a secure computer and a high-speed internet connection.

Responsibilities

  • Task & Prompt Creation: Generating realistic, high-quality prompts that compel the AI model to produce complex, professional-grade deliverables specific to information science, data management, and knowledge organization.
  • Rubric Development: Writing clear, well-structured evaluation rubrics with criteria that are highly specific, non-ambiguous, and easy to score.
  • Benchmark Evaluation Data Generation: Producing clean, reliable training data that directly aids in the evaluation and refinement of AI models handling complex information architecture and retrieval tasks.
  • Quality Assurance & Fact-Checking: Ensuring all generated tasks and scoring criteria reflect strict, real-world data governance standards, research methodologies, and information management best practices.