About The Position

This role supports the design and development of safety evaluation methodologies for generative and agentic AI features that enable users across the globe to interact with our media products and services. You will play an impactful role: shaping responsible AI and safety policies, evaluating fidelity to product safety requirements, creating risk assessments and taxonomies, curating exemplar safety evaluation datasets, and ensuring that evaluation frameworks are culturally and linguistically grounded. An ideal candidate has a strong understanding of issues in responsible AI and AI and society, as well as technology evaluation design principles and practices, and brings experience designing evaluations to support policies and/or product requirements, classification systems, and annotation and/or study participant guidelines.

Requirements

  • 4+ years of experience in an applied research setting related to evaluation design, AI ethics, Responsible AI, AI safety, computational social science, content analysis, or a closely related field
  • Strong understanding of taxonomy design, classification systems, and annotation methodology
  • Experience developing evaluation guidelines and exemplar sets for human annotation or labeling tasks
  • Demonstrated ability to collaborate with subject matter experts (e.g., linguists, cultural consultants, multi-lingual annotators) to inform research design
  • Able to work independently to drive outcomes among cross-functional teams, with minimal direction
  • Organized, highly attentive to detail, with strong time-management skills
  • Excellent written and oral communication skills
  • Experience working in industry
  • Advanced degree (MS/PhD) in Linguistics, Information Science, Computational Social Science, or a related socio-technical field

Nice To Haves

  • Experience designing evaluation frameworks for multilingual or cross-cultural contexts
  • Familiarity with responsible AI, AI safety, or content moderation policy frameworks
  • Experience with experimental design methodologies, inter-rater reliability analysis, and annotation quality assessment methods
  • Prior experience working with localization, internationalization, or language service teams
  • Experience with survey design, AI policy development, and/or structured content analysis methodologies

Responsibilities

  • Shaping responsible AI and safety policies
  • Evaluating fidelity to product safety requirements
  • Creating risk assessments and taxonomies
  • Curating exemplar safety evaluation datasets
  • Ensuring that evaluation frameworks are culturally and linguistically grounded