AI Product Owner - Vice President

Morgan Stanley
Town/Village of Harrison, NY
$110,000 - $190,000

About The Position

AI Product Owner – Vice President
Wealth Management Platforms
Purchase, NY

Morgan Stanley Wealth Management Platforms (WMP) Digital Client Experience (DCE) is seeking a product leader to serve as an AI Product Owner driving the strategy and delivery of AI-enabled digital servicing experiences across Morgan Stanley and E*TRADE client platforms. You will own AI-centric product outcomes across virtual agents, risk programs, and platform unification, delivering measurable improvements in client experience, operational efficiency, and risk posture. This role is a hands-on "builder" PO: you will own OKRs, partner deeply with UX, work across large dependency teams in a legacy environment, and use tooling to generate code, prototypes, and analytics dashboards to accelerate delivery.

What You'll Own (Focus Areas)

  • Virtual Agents & Conversational Experiences: AI-assisted self-service, guided journeys, escalation/handoff patterns, and agent-assist capabilities.
  • Risk Programs for AI + Digital Servicing: Responsible AI controls, governance, monitoring, and audit-ready documentation embedded into the product lifecycle.
  • Platform Unification: Converging servicing capabilities across web/mobile and Morgan Stanley/E*TRADE surfaces, including shared components, APIs, and consistent UX patterns.

About Morgan Stanley

Morgan Stanley is a leading global financial services firm providing a wide range of investment banking, securities, investment management and wealth management services. The Firm's employees serve clients worldwide including corporations, governments and individuals from more than 1,200 offices in 43 countries. Morgan Stanley is committed to helping its employees build meaningful careers, and we strive to be a place for people to learn, achieve and grow.
Department Overview

In the Wealth Management division, we help people, businesses and institutions build, preserve, and manage wealth so they can pursue their financial goals. Wealth Management (WM) Platforms manages industry-leading platforms, across all WM channels and client segments, to provide a unified digital experience, unlock growth, and deliver efficiencies for Advisors, Clients, and Institutions. WM Platforms consists of ten sub-teams: Field Experience & Platforms, Digital Client Experience & Platforms, Workplace Platforms, Automation & Workflow, Digital Trading & Investing, Generative AI, UX Design & Research, Strategy & Execution, WM Platforms Risk, and the Chief Operating Office.

Requirements

  • 9.5 years of transferable experience across work and higher education.
  • Master of Business Administration (MBA) or Bachelor’s degree (BS/BA) with equivalent work experience.
  • 5+ years building digital products/platforms, including backlog management, roadmap planning, and metrics ownership.
  • Experience owning digital containment KPIs (e.g., containment/deflection, escalation precision, task success rate) and operating a post-release optimization loop.
  • Experience defining and running AI evaluation (offline ‘golden set’, regression testing, human review rubric) and production monitoring/incident response.
  • Ability to define AI product requirements: guardrails, human-in-the-loop points, evaluation metrics, and monitoring.
  • Ability to understand technical architecture and code, converse in detail with engineering about APIs, logs, and system diagrams; able to work effectively in legacy architectures and across multiple dependency teams.
  • Uses approved AI/dev tooling to produce reviewable code artifacts (scripts, prototypes, test cases, prompt/policy configs) to accelerate delivery; engineering owns production implementation, review, and SDLC compliance.
  • Demonstrated ability to partner with UX and engineering to deliver high-quality, client-friendly experiences, including ownership of end-to-end flows, content, and interaction patterns.
  • Strong written/verbal communication, critical thinking, organization, and ability to drive cross-functional alignment.
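As an illustration of the evaluation expectation above (offline "golden set", regression testing, release gating), a minimal sketch in Python; the `GoldenCase` structure, field names, and thresholds are hypothetical, not a prescribed framework or Morgan Stanley tooling:

```python
# Hypothetical golden-set regression check for a virtual agent.
# All names here (GoldenCase, task_success_rate, release_ready) are
# illustrative assumptions, not a real evaluation framework.
from dataclasses import dataclass

@dataclass
class GoldenCase:
    utterance: str         # client input from the offline test set
    expected_intent: str   # human-labeled ground truth
    predicted_intent: str  # model output captured during the eval run

def task_success_rate(cases):
    """Fraction of golden cases where the model matched the labeled intent."""
    if not cases:
        return 0.0
    hits = sum(1 for c in cases if c.predicted_intent == c.expected_intent)
    return hits / len(cases)

def release_ready(cases, baseline, tolerance=0.02):
    """Block release if success regresses more than `tolerance` below baseline."""
    return task_success_rate(cases) >= baseline - tolerance

cases = [
    GoldenCase("reset my password", "account_access", "account_access"),
    GoldenCase("wire money to my son", "funds_transfer", "funds_transfer"),
    GoldenCase("why was I charged a fee", "fee_inquiry", "billing"),
    GoldenCase("close my account", "account_closure", "account_closure"),
]

print(task_success_rate(cases))             # 0.75
print(release_ready(cases, baseline=0.80))  # False: regressed below baseline
```

In practice such a check would draw cases from a labeled evaluation store and run as an automated gate alongside human review, so regressions against the baseline are caught mechanically rather than eyeballed.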

Nice To Haves

  • Experience partnering with Legal/Risk/Compliance on customer-facing digital experiences, and embedding controls into product delivery for AI-enabled features.
  • Wealth management / brokerage / banking domain familiarity preferred.
  • Customer servicing process knowledge (intent taxonomy, call drivers, servicing flows) preferred.

Responsibilities

  • Define product vision and OKRs for AI-enabled servicing (e.g., containment/deflection, time-to-resolution, CSAT/NPS impact, cost-to-serve reduction, risk events reduction).
  • Own a metrics-first operating cadence: set baselines, targets, instrumentation requirements, and post-launch optimization loops.
  • Lead end-to-end product development: problem framing → discovery → delivery → launch → optimization, using usage data, client feedback, and competitive intelligence, adapted to AI systems (probabilistic outputs, continuous improvement).
  • Translate ambiguous needs into AI-suitable scope and testable acceptance criteria, including explicit “do-not-automate” boundaries and human oversight needs.
  • Build and maintain a prioritized backlog in Jira and manage roadmap sequencing across multiple platforms and legacy services.
  • Drive alignment across many dependency teams (technology, service, UX, Legal, Risk, Compliance, Data, Operations).
  • Be accountable for end-to-end experience quality (not “requirements handoff”): co-own IA, content strategy, and interaction design with UX.
  • Ensure experiences meet usability standards: clarity, recovery from failure, safe fallback behavior, and accessible design.
  • Establish and enforce design patterns across unified platforms to reduce inconsistency and cognitive load.
  • Define AI quality measures (task success rate, hallucination/error rates, escalation accuracy, safety policy adherence).
  • Own evaluation strategy: offline test sets, human review workflows, pilot/A-B plans, and regression checks.
  • Ensure release readiness includes monitoring, rollback, incident playbooks, and measurable guardrails.
  • Identify risks that impact roadmap delivery and client outcomes, including AI-specific risks: privacy leakage, unsafe responses, bias, explainability expectations, and misuse.
  • Partner with Legal, Risk, Compliance, and Fraud to define required controls (approvals, logs, disclosures, audit trails) and integrate them into the Definition of Done.
  • Drive requirements for data access, quality, labeling/ground truth, taxonomy, and lifecycle management needed to support virtual agents and servicing automation.
  • Ensure analytics/events are implemented to measure OKRs and model performance in production.
  • Create and maintain dashboards for OKRs/KPIs, experimentation results, and operational health (containment, escalations, top intents, failure modes, drift indicators).
  • Use approved tooling to generate code snippets, API examples, test scripts, prompt/policy configurations, and lightweight prototypes to accelerate engineering throughput (with appropriate review and SDLC controls).
  • Provide high-quality, developer-ready artifacts: sequence diagrams, edge cases, error states, instrumentation specs.
  • Orchestrate business reviews, exec updates, and working forums (planning, materials, execution, follow-ups), including AI program reporting: risk posture, evaluation results, and production health.
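
To make the dashboard metrics named above concrete (containment/deflection, escalation accuracy), a minimal Python sketch of how such KPIs might be computed from session logs; the session field names are assumptions for illustration, not an actual event schema:

```python
# Illustrative containment/escalation KPIs over servicing session logs.
# Field names ("escalated", "needed_human") are assumed for this sketch.
def containment_rate(sessions):
    """Share of sessions fully resolved by the virtual agent (no human handoff)."""
    if not sessions:
        return 0.0
    contained = sum(1 for s in sessions if not s["escalated"])
    return contained / len(sessions)

def escalation_accuracy(sessions):
    """Of escalated sessions, the share that genuinely needed a human,
    per a post-hoc human review label."""
    escalated = [s for s in sessions if s["escalated"]]
    if not escalated:
        return 1.0
    correct = sum(1 for s in escalated if s["needed_human"])
    return correct / len(escalated)

sessions = [
    {"escalated": False, "needed_human": False},
    {"escalated": False, "needed_human": False},
    {"escalated": True,  "needed_human": True},
    {"escalated": True,  "needed_human": False},  # unnecessary handoff
    {"escalated": False, "needed_human": False},
]

print(containment_rate(sessions))     # 0.6
print(escalation_accuracy(sessions))  # 0.5
```

A production dashboard would compute these from instrumented events rather than in-memory dicts, but the definitions (and their denominators) are the part worth pinning down before baselines and targets are set.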