About The Position

Manages the entire ICRM AI use case lifecycle review process, from ideation through POC, production, issue remediation, release management, ongoing monitoring, and retirement, for the assigned area. Works effectively across all lines of defense to surface issues and weaknesses in use cases. Develops, enhances, and validates methods for measuring, analyzing, and managing risk across all risk types, including market, credit, and operational, and their overlap with regulatory and AI developments. May also develop, validate, and evangelize scorecards for the inherent risks of AI POCs and production migrations; develop holistic, effective, and sustainable AI ICRM control libraries designed to mitigate inherent risks; and articulate compensating control effectiveness for nuanced risk scenarios and use cases in internal and external effective challenge settings.

Maintains documentation of reviews and outcomes, including process maps, summary notes, and artifacts. Works closely with the AI Compliance Governance and Framework Lead to develop the foundational compliance framework model and appropriate controls, and executes advisory services to Product ICRM partners. Organizes periodic peer reviews of AI use case documentation, model performance monitoring, and model input validation against agreed standards and the ICRM AI comprehensive governance and control framework. Ensures policies and procedures are kept up to date and reviewed periodically by governance committees.

Coordinates and level-sets discussions with first-, second-, and third-line partners, as well as regulatory relations peers, to achieve strategic business objectives within risk appetite and budgets. Participates in quantitative impact studies and hypothetical portfolio exercises requested by regulators, and leads ICRM responses to AI inquiries, first day letters, exam/audit questions, and similar requests. Provides oversight and guidance over the assessment of complex regulatory and/or audit issues, structures potential solutions, and drives effective resolution with other stakeholders. Collaborates with the AI Compliance team to prepare and collect the documents needed for critical regulatory and internal audit matters as required.

Requirements

  • 6-10 years of experience in AI model development/validation and/or compliance product advisory
  • Expertise and hands-on experience in advanced programming using SAS, R, Python, and SQL for basic data mining; additional experience and knowledge of Big Data tools preferred
  • Experience with OpenAI/ChatGPT, Anthropic/Claude, or Google/Gemini
  • Highly motivated with attention to detail; team-oriented, curious, and organized
  • Ability to interact and communicate effectively with senior leaders
  • Ability to influence and lead people across cultures at a senior level using sound judgment and successful execution, understanding how to operate effectively across diverse businesses
  • Ability to challenge business management and escalate issues when appropriate
  • Bachelor's/University degree required
  • Analytical Thinking, Business Acumen, Credible Challenge, Data Analysis, Governance, Policy and Procedure, Policy and Regulation, Risk Controls and Monitors, Risk Identification and Assessment, Statistics

Nice To Haves

  • Master's degree preferred

Responsibilities

  • Manages the entire ICRM AI use case lifecycle review process from ideation to POC, production, issue remediation, release management, ongoing monitoring, and retirement for assigned area.
  • Works effectively across all lines of defense to surface issues and weaknesses in use cases.
  • Develops, enhances, and validates methods for measuring, analyzing, and managing risk across all risk types, including market, credit, and operational, and their corresponding overlap with regulatory and AI developments.
  • Develops, validates, and evangelizes scorecards for the inherent risks of AI POCs and production migrations.
  • Develops holistic, effective, and sustainable AI ICRM control libraries designed to mitigate inherent risks.
  • Articulates compensating control effectiveness for nuanced risk scenarios and use cases in internal and external effective challenge settings.
  • Maintains documentation of reviews and outcomes, including process maps, summary notes, and artifacts.
  • Works closely with the AI Compliance Governance and Framework Lead to develop the foundational compliance framework model and appropriate controls, and executes advisory services to Product ICRM partners.
  • Organizes periodic peer reviews of AI use case documentation, model performance monitoring, and model input validation against agreed standards and the ICRM AI comprehensive governance and control framework.
  • Ensures policies and procedures are kept up to date and reviewed periodically by governance committees.
  • Coordinates and level-sets discussions with first-, second-, and third-line partners, as well as regulatory relations peers, to achieve strategic business objectives within risk appetite and budgets.
  • Participates in quantitative impact studies and hypothetical portfolio exercises requested by regulators.
  • Leads ICRM responses to AI inquiries, first day letters, exam/audit questions, etc.
  • Provides oversight and guidance over the assessment of complex regulatory and/or audit issues, structures potential solutions and drives effective resolution with other stakeholders.
  • Collaborates with the AI Compliance team to prepare and collect the documents needed for critical regulatory and internal audit matters as required.