AI Governance Compliance Manager

McKesson — Mississauga, ON
Hybrid

About The Position

McKesson is an impact-driven, Fortune 10 company that touches virtually every aspect of healthcare. We are known for delivering insights, products, and services that make quality care more accessible and affordable. Here, we focus on the health, happiness, and well-being of you and those we serve – we care. What you do at McKesson matters. We foster a culture where you can grow, make an impact, and are empowered to bring new ideas. Together, we thrive as we shape the future of health for patients, our communities, and our people. If you want to be part of tomorrow’s health today, we want to hear from you.

Role Summary: The AI Governance Compliance Manager ensures that AI systems across the enterprise meet regulatory requirements throughout the AI governance lifecycle. The role sits at the intersection of legal, risk management, cybersecurity, data science, data engineering, and technology governance. It translates complex regulatory requirements into operational controls and designs standards and protocols for data and AI governance. As RAIBs are operationalized, this role will be mission-critical for designing and enabling the controls, protocols, and standards aligned to McKesson's AI policy and the NIST AI Risk Management Framework, driving enterprise AI enablement and deployment at scale.

Requirements

  • Degree or equivalent and typically requires 10+ years of relevant experience
  • Proven experience with stakeholder management, strategic communication, and ability to translate between technical and business audiences.
  • Technical expertise in risk assessment, AI audit methodologies and bias detection techniques, data governance and lineage tracking, model explainability tools, compliance monitoring platforms, and GRC tools (e.g., Alation, Microsoft Purview, ServiceNow, and OneTrust).
  • AI governance expertise spanning bias and fairness auditing, model risk management and validation, transparency and explainability documentation, data governance (quality, lineage, retention, consent), regulatory compliance mapping across jurisdictions, AI ethics framework development, and algorithmic impact assessments.
  • Understanding of bias and fairness in algorithmic systems, plus industry-specific regulatory knowledge (healthcare).

Nice To Haves

  • Building AI governance frameworks from scratch
  • Hands-on experience with GRC tools
  • Generative AI governance (LLM-specific risks including hallucinations, prompt injection, and IP concerns)
  • Third-party vendor risk management

Responsibilities

  • Build, operationalize, and enforce governance frameworks that align AI deployments with regulatory requirements (NIST) and enterprise AI policy, translating legal requirements into operational controls.
  • Monitor regulatory and compliance requirements, map changes to affected internal AI systems and processes, and continuously maintain all standards, guidelines, and protocols.
  • Develop or refine governance controls that map regulatory requirements to operational procedures, linking specific requirements (regulatory/NIST) to implementation steps for business units.
  • Produce AI governance frameworks and policies, risk assessments and algorithmic impact assessments, and compliance audit reports with remediation plans.
  • Establish incident response protocols for AI requirement failures.
  • Define standards for the enterprise AI system inventory and lifecycle.
  • Define catalog requirements and classification levels against the risk taxonomy, and verify that existing systems have current documentation and governance controls in place.