About The Position

The AI Tool Development Support role will primarily be responsible for collaborating with AI developers and providing technical support across AI/LLM development workflows.

Responsibilities

  • Assist AI developers by collecting, organizing, and preparing the data, documentation, and technical information required for AI model development.
  • Support the team by setting up environments, gathering logs, preparing test datasets, and maintaining internal tools that streamline AI development workflows.
  • Collaborate with AI developers to ensure they have the information, context, and resource access needed to efficiently build and improve AI/LLM models.
  • Maintain documentation related to model updates, experiment tracking, configuration details, and release notes to support ongoing AI activities.
  • Evaluate and analyze AI model outputs to verify accuracy, consistency, and alignment with expected behavior.
  • Validate the outputs of AI models used in multimedia testing by preparing test cases and expected outcomes.
  • Identify incorrect, unexpected, or low-quality model results and report findings to AI developers.
  • Assist in building tools/scripts that help visualize, compare, or automatically assess AI outputs.
  • Document testing results, edge cases, and observed model behaviors to support ongoing model improvement.
  • Communicate clearly and proactively with AI developers, software engineers, QA teams, and project stakeholders.
  • Plan, prioritize, and execute tasks with strong ownership and alignment to project timelines.
  • Drive assigned tasks to on-time completion, raising risks or blockers early.
  • Document test procedures, troubleshooting steps, configuration changes, and system behaviors professionally.