AI Governance Consulting Manager

Crowe
New York, NY
$102,400 - $204,100

About The Position

At Crowe, you can build a meaningful and rewarding career. With real flexibility to balance work with life moments, you're trusted to deliver results and make an impact. We embrace you for who you are, care for your well-being, and nurture your career. Everyone has equitable access to opportunities for growth and leadership. Over our 80-year history, delivering excellent service through innovation has been core to our DNA across our audit, tax, and consulting teams.

Crowe's AI Governance Consulting team helps organizations build, assess, run, and audit responsible AI programs. We align AI practices with business goals, risk appetite, and evolving regulations and standards (e.g., NIST AI RMF 1.0, ISO/IEC 42001, EU AI Act), enabling clients to adopt AI confidently and safely.

As an AI Governance Manager, you'll lead client delivery, drive sales enablement, and shape our AI Governance offering. You will manage projects end-to-end, mentor teams, and collaborate across Crowe (Cyber, Risk, Legal/Privacy, Security, Enterprise Risk Management, Model Risk Management, and Audit) to bring integrated solutions to market.

You'll be responsible for:

  • Client delivery & program build: Lead engagements to stand up or mature AI governance programs across the lifecycle (strategy, policy, controls, operating model, metrics). Map practices to frameworks such as NIST AI RMF and ISO/IEC 42001; translate requirements into practical controls, workflows, and guardrails.
  • Assessment & assurance: Plan and execute current-state assessments, model/system reviews, control testing, and readiness audits for AI uses (including genAI) against policy and regulatory expectations (e.g., EU AI Act obligations/timelines). Deliver clear findings and prioritized roadmaps.
  • Run-state operations: Design operating rhythms (intake, review/approval, model registry, risk scoring, human-in-the-loop, monitoring, incident management) and help clients operationalize “three lines” responsibilities with measurable KPIs.
  • Sales enablement: Partner with teams to qualify opportunities, shape solutions/SOWs/ELs, develop proposals and pricing, and contribute to pipeline reviews. Build client-ready collateral.
  • Offering development: Evolve Crowe's AI Governance methodologies, accelerators, control libraries, templates, and training. Incorporate updates from standards and regulators into our playbooks (e.g., NIST's generative AI profile).
  • Thought leadership: Publish insights, speak at webinars and events, and support marketing campaigns to grow brand presence.
  • People leadership: Supervise, coach, and develop consultants; manage engagement economics (scope, timeline, budget, quality) and support recruiting.

Requirements

  • 3+ years hands-on AI governance/Responsible AI experience (policy, controls, risk, compliance, or assurance of AI/ML systems).
  • 5+ years in compliance, risk management, and/or professional services/consulting with client-facing delivery and team leadership.
  • Demonstrated ability to translate regulatory/standard requirements (e.g., NIST AI RMF, EU AI Act) into actionable policies, processes, and controls.
  • Progressive responsibility in prior roles, including supervising and reviewing the work of others, plus project management experience, including self-managing simultaneous workstreams.
  • Strong written and verbal communication skills, both formal and informal, with clients and internal teams across a variety of formats and settings (interviews, meetings, calls, emails, reports, process narratives, presentations).
  • Networking and relationship-management skills.
  • Willingness to travel.

Nice To Haves

  • Experience with genAI risk controls (prompt/data controls, evaluation, monitoring) and model documentation/testing practices.
  • Familiarity with GRC, data, and model tooling (e.g., ServiceNow/Archer, Collibra/Alation, model registries/ML platforms).
  • Experience coordinating across privacy, security, risk, and engineering functions; understanding of U.S. federal/state AI policy landscape (e.g., EO 14110 and subsequent developments).
  • Bachelor's degree required; advanced degree in a relevant field (e.g., information systems, public policy, statistics, law) a plus.
  • Certification: AIGP - Artificial Intelligence Governance Professional (IAPP) or equivalent credential in AI governance/privacy/risk (e.g., CIPP/CIPM/CIPT with AI coursework, ISO/IEC 42001 implementer/auditor).
