Risk Manager, AI Risk Program

Early Warning Services · Scottsdale, AZ
$104,000 - $156,000 · Hybrid

About The Position

At Early Warning, we’ve powered and protected the U.S. financial system for over thirty years with cutting-edge solutions like Zelle®, Paze℠, and so much more. As a trusted name in payments, we partner with thousands of institutions to increase access to financial services and protect transactions for hundreds of millions of consumers and small businesses.

Positions located in Scottsdale, San Francisco, Chicago, or New York follow a hybrid work model to allow for a more collaborative working environment. Candidates responding to this posting must independently possess the eligibility to work in the United States, for any employer, at the date of hire. This position is ineligible for employment visa sponsorship.

The Risk Manager, Artificial Intelligence Risk Program, will lead and manage the design, implementation, and oversight of the firm’s AI Risk Management Program within the Second Line of Defense (SLOD). This role is responsible for providing management, independent review, challenge, and advisory support to ensure the organization’s development and use of artificial intelligence, including generative AI, is safe, responsible, compliant, and aligned with the firm’s risk appetite, ethical principles, and regulatory expectations.

Reporting to the Senior Director of Data and AI Risk Management within Operational Risk Management, the Manager partners closely with first-line business managers, product, technology (including the CDO office), and data science teams, as well as Compliance, Legal, Privacy, Third-Party Risk, and Technology & Security Risk, to embed AI risk requirements across the enterprise. The role plays a key part in enabling innovation while ensuring AI-related risks are appropriately identified, assessed, monitored, and governed.

The above job description is not intended to be an all-inclusive list of duties and standards of the position. Incumbents will follow instructions and perform other related duties as assigned by their supervisor.

Requirements

  • Bachelor’s degree or equivalent experience.
  • 8 years of experience in operational risk management, technology risk, model risk management, data risk, or a related discipline within financial services or another highly regulated industry.
  • Direct experience supporting or leading AI risk management, model governance, or emerging technology risk programs.
  • Strong working knowledge of industry-recognized AI risk and governance frameworks, including the NIST AI Risk Management Framework and ISO/IEC 42001.
  • Experience designing or executing risk assessments, governance frameworks, metrics, and reporting for complex risk domains.
  • Excellent written and verbal communication skills, with the ability to clearly explain complex AI risks to technical and non-technical stakeholders.
  • Strong analytical skills, sound judgment, and attention to detail.
  • Proven ability to work independently, manage multiple priorities, and influence across a matrixed organization.

Nice To Haves

  • Experience with generative AI use cases, large language models (LLMs), or AI-enabled customer-facing products.
  • Risk management, technology, or audit certifications (e.g., CRISC, CISM, CIA) or AI governance–related credentials.

Responsibilities

  • Manage the development, maintenance, and ongoing enhancement of the enterprise AI Risk Management framework, policies, standards, procedures, and control expectations, aligned with industry-recognized frameworks such as the NIST AI Risk Management Framework and ISO/IEC 42001.
  • Maintain and evolve the AI risk and control taxonomy, ensuring consistency with operational risk, model risk management, data governance, privacy, and technology risk frameworks.
  • Oversee the development and use of risk management technologies and tooling used to inventory AI use cases, track risks, controls, issues, and approvals.
  • Participate in and support enterprise governance forums, committees, and working groups related to AI, providing independent risk perspectives and recommendations.
  • Develop and deliver training on the AI Risk Management program.
  • Support the development and maintenance of AI-related risk appetite and tolerance statements and associated thresholds, in alignment with the enterprise risk appetite and regulatory expectations.
  • Design, implement, and monitor key risk indicators (KRIs), key performance indicators (KPIs), and key control indicators (KCIs) to measure AI risk exposure and program effectiveness.
  • Analyze trends, emerging risks, and control performance related to AI risk exposures.
  • Develop and maintain AI use case risk assessment methodologies, including inherent risk identification, control evaluation, residual risk determination, and escalation criteria.
  • Execute the second-line-of-defense, enterprise-level AI risk profile assessment to measure compliance with the approved risk appetite and tolerances.
  • Embed AI risk considerations and requirements into other risk domain assessments (e.g., operational risk, model risk, third-party risk, data risk, privacy, and technology risk).
  • Identify emerging AI risks related to bias, explainability, data quality, security, resilience, regulatory compliance, and customer impact.
  • Provide effective independent review and challenge of first-line AI risk assessments, control design, mitigation strategies, and risk acceptance decisions.
  • Execute and/or oversee quality assurance (QA) activities to assess adherence to AI risk management policies, standards, and governance requirements.
  • Identify gaps, weaknesses, or inconsistencies in AI risk practices and ensure issues are documented, escalated, and tracked through remediation.
  • Partner with other second-line risk domains to deliver integrated, holistic risk oversight of AI-enabled processes and products.
  • Develop and deliver insightful, enterprise-level AI risk reporting that clearly communicates risk posture, trends, emerging issues, and program health.
  • Prepare materials for senior management, governance committees, and external stakeholders that drive informed decision-making and timely action.
  • Lead regulatory exam support, internal audits, and management self-assessments related to AI governance and risk management.
  • Serve as a trusted risk advisor to first-line leaders across Product Management, Technology, Data Science, Model Development, and Business Operations.
  • Collaborate closely with Compliance, Legal, Privacy, Model Risk Management, Technology & Security Risk, and Operational Risk to ensure coordinated oversight of AI-related risks.
  • Support responsible innovation by helping the business understand AI risk requirements while enabling safe and compliant adoption of AI capabilities.

Benefits

  • Healthcare Coverage – Competitive medical (PPO/HDHP), dental, and vision plans as well as company contributions to your Health Savings Account (HSA) or pre-tax savings through flexible spending accounts (FSA) for commuting, health & dependent care expenses.
  • 401(k) Retirement Plan – Featuring a 100% Company Safe Harbor Match on your first 6% deferral immediately upon eligibility.
  • Paid Time Off – Flexible Time Off for Exempt (salaried) employees, as well as generous PTO for Non-Exempt (hourly) employees, plus 11 paid company holidays and a paid volunteer day.
  • 12 weeks of Paid Parental Leave
  • Maven Family Planning – provides support through your parenting journey, including egg freezing, fertility, adoption, surrogacy, pregnancy, postpartum, early pediatrics, and returning to work.