Conversational and Generative Data AI Specialist

Morgan Stanley | New York, NY
$120,000 - $205,000

About The Position

The Morgan Stanley Firmwide AI team is seeking a highly skilled and motivated Conversational and Generative Data AI Specialist. This role is central to the construction, analysis, and refinement of the natural language datasets that power multiple conversational and generative AI products and solutions. The ability to apply conversational and generative AI concepts in practice, and to communicate them to a financial services audience, is pivotal. The ideal candidate will have a sharp understanding of natural language processing and understanding, data annotation management, conversation design, prompt engineering, and related concepts, and must be able to distill complex technical insights into clear, actionable guidance for stakeholders and users with varying levels of technical expertise.

Requirements

  • 5+ years’ experience annotating, managing, and evaluating datasets in conversational AI (chatbots, dialogue systems, voice assistants such as Alexa or Google Assistant) and/or generative AI (prompt engineering, generative AI evaluation) contexts.
  • Bachelor’s degree in linguistics, computational linguistics, data science, or related field.
  • Familiarity with extracting qualitative and quantitative insights from large natural language corpora.
  • Ability to function and collaborate within a diverse, cross-functional global organization.
  • Demonstrated ability to communicate data-driven insights and recommendations to technical and non-technical stakeholders.

Nice To Haves

  • Past applicable experience in finance, wealth management, or retail/investment banking.
  • Basic competency in programming languages commonly used in data analysis, such as SQL and Python.
  • Hands-on experience building a generative AI product.

Responsibilities

  • Data and Annotation Management: Develop clear, consistent annotation guidelines that ensure high-quality labeled data across multiple diverse use cases.
  • Manage end-to-end annotation workflows, including task onboarding, annotator support, annotation quality assurance, and performance monitoring.
  • Analyze annotation output to detect patterns, ambiguities, and failure modes, and adjust guidelines or processes accordingly.
  • Performance Reporting and Feedback Analysis: Communicate annotation insights and dataset quality metrics to both technical and non-technical stakeholders to inform modeling decisions and product direction.
  • Recommend process or guideline adjustments based on recurring insights and performance trends.
  • Translate qualitative and quantitative feedback into clear recommendations for improving datasets, product behavior, and model performance.