About The Position

At Crowe, you can build a meaningful and rewarding career. With real flexibility to balance work with life moments, you’re trusted to deliver results and make an impact. We embrace you for who you are, care for your well-being, and nurture your career. Everyone has equitable access to opportunities for growth and leadership. Over our 80-year history, delivering excellent service through innovation has been core to our DNA across our audit, tax, and consulting teams.

Crowe’s AI Governance Consulting team helps organizations build, assess, run, and audit responsible AI programs. We align AI practices with business goals, risk appetite, and evolving regulations and standards (e.g., NIST AI RMF 1.0, ISO/IEC 42001, the EU AI Act), enabling clients to adopt AI confidently and safely.

As an AI Governance Technical Manager, you will be the hands-on lead for independent testing and operational monitoring of AI systems, including GenAI. You’ll design and run evaluations, stand up monitoring pipelines, quantify risks (bias, robustness, safety, privacy), and provide transparent reporting to business, risk, and technology stakeholders. You’ll also mentor consultants and help evolve Crowe’s run-state accelerators, test harnesses, and control libraries anchored in the NIST AI RMF and related guidance.

Requirements

  • 3+ years hands-on AI governance/Responsible AI experience (policy, controls, risk, compliance, or assurance of AI/ML systems).
  • 5+ years in compliance, risk management, and/or professional services/consulting with client-facing delivery and team leadership.
  • Strong Python and SQL (evaluation pipelines, data prep, metric computation, scripting CI jobs).
  • Demonstrated experience designing fairness/bias tests and applying explainability methods; ability to translate results for non-technical stakeholders (see the illustrative sketch after this list).
  • Practical knowledge of NIST AI RMF 1.0 (and GenAI profile), ISO/IEC 42001, and awareness of EU AI Act obligations for high-risk systems.
  • Prior experience with progressive responsibility, including supervising and reviewing the work of others, and with project management, including self-managing simultaneous workstreams.
  • Strong written and verbal communication and comprehension skills, both formal and informal, with clients and internal teams, across a variety of formats and settings (interviews, meetings, calls, e-mails, reports, process narratives, presentations, etc.).
  • Networking and relationship-management skills.
  • Willingness to travel.
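
For illustration only, the sketch below shows the kind of lightweight metric computation an evaluation pipeline might perform: selection rates across population slices and the gap between them. The column names, sample data, and 0.2 tolerance are assumptions for the example, not Crowe methodology.

```python
# Illustrative fairness-metric computation (assumed column names and tolerance).
import pandas as pd

def demographic_parity_difference(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Largest gap in positive-prediction rate between any two population slices."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

if __name__ == "__main__":
    scored = pd.DataFrame({
        "group":      ["A", "A", "A", "B", "B", "B"],
        "prediction": [1,   0,   1,   1,   1,   1],   # binary model decisions
    })
    gap = demographic_parity_difference(scored, "group", "prediction")
    print(f"Demographic parity difference: {gap:.2f}")
    if gap > 0.2:  # hypothetical tolerance agreed with stakeholders
        print("Selection-rate gap exceeds tolerance; flag for review and mitigation.")
```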

Nice To Haves

  • Experience operationalizing LLM/GenAI evaluations (adversarial/red-team testing, toxicity/harm scoring, retrieval/grounding, hallucination measurement, safety policies) consistent with NIST guidance; see the sketch after this list.
  • Hands-on with ML Ops/observability (e.g., model registries, data validation, drift detection), cloud (AWS/Azure/GCP), and containerization.
  • Familiarity with governance and compliance platforms (e.g., GRC systems) and collaboration with privacy/security/legal.
  • Advanced degree a plus (CS, statistics, data science, information systems, or related); note that a bachelor’s degree is required.
  • Certification: AIGP – Artificial Intelligence Governance Professional (IAPP) or equivalent credential in AI governance/privacy/risk (e.g., CIPP/CIPM/CIPT with AI coursework, ISO/IEC 42001 implementer/auditor).
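
As a purely illustrative sketch of what operationalizing a GenAI evaluation can look like, the snippet below runs a tiny harness that scores answers for grounding against retrieved context using naive token overlap. The generate() stub, test case, and 0.3 threshold are assumptions; a production harness would use stronger grounding, factuality, and safety scorers.

```python
# Minimal GenAI evaluation harness sketch (assumed stub model, case, and threshold).
from typing import Callable

def token_overlap(answer: str, context: str) -> float:
    """Fraction of answer tokens that also appear in the retrieved context."""
    a, c = set(answer.lower().split()), set(context.lower().split())
    return len(a & c) / len(a) if a else 0.0

def run_eval(cases: list[dict], generate: Callable[[str], str], threshold: float = 0.3) -> list[dict]:
    """Score each generated answer for grounding and flag low-scoring cases."""
    results = []
    for case in cases:
        answer = generate(case["prompt"])
        score = token_overlap(answer, case["context"])
        results.append({"prompt": case["prompt"], "grounding": round(score, 2), "flagged": score < threshold})
    return results

if __name__ == "__main__":
    cases = [{"prompt": "What does ISO/IEC 42001 cover?",
              "context": "ISO/IEC 42001 specifies requirements for an AI management system."}]
    fake_model = lambda p: "ISO/IEC 42001 specifies requirements for an AI management system."
    for result in run_eval(cases, fake_model):
        print(result)
```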

Responsibilities

  • Independent testing: Design and execute independent test plans for classical ML and LLMs/GenAI (functional accuracy, robustness, safety, toxicity, jailbreak/prompt-injection, hallucination/error rates); define acceptance criteria and go/no-go recommendations.
  • Sales enablement: Partner with teams to qualify opportunities, shape solutions, statements of work (SOWs), and engagement letters (ELs), develop proposals and pricing, and contribute to pipeline reviews. Build client-ready collateral.
  • Offering development: Evolve Crowe’s AI Governance methodologies, accelerators, control libraries, templates, and training. Incorporate updates from standards/regulators into our playbooks (e.g., NIST’s GAI profile).
  • Thought leadership: Publish insights, speak on webinars/events, and support marketing campaigns to grow brand presence.
  • People leadership: Supervise, coach, and develop consultants; manage engagement economics (scope, timeline, budget, quality) and support recruiting.
  • Bias/fairness: Plan and run bias/fairness assessments using appropriate population slices and fairness metrics; document mitigations per NIST guidance on identifying/managing bias.
  • Explainability: Produce model explainability/transparency artifacts (e.g., model cards, method docs) and apply techniques (SHAP, LIME, feature attributions) aligned to NIST’s Four Principles of Explainable AI.
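
For illustration, the snippet below produces a simple feature-attribution artifact of the kind that might accompany a model card, using scikit-learn permutation importance as a stand-in for the SHAP/LIME attributions named above. The synthetic data and feature names are hypothetical.

```python
# Illustrative feature-attribution artifact (synthetic data, hypothetical feature names).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=4, n_informative=2, random_state=0)
feature_names = ["income", "tenure", "utilization", "age"]  # hypothetical labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Attribute predictive contribution to each input feature on held-out data.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:>12}: {score:.3f}")
```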

Benefits

  • Real flexibility to balance work with life moments.
  • Equitable access to opportunities for career growth and leadership.
  • A comprehensive total rewards package.