AI Governance & Explainability Engineer

Arbitration Forums Inc., Tampa, FL
Remote

About The Position

This role at Arbitration Forums is as unique as it is rewarding because of the AF IPAAL Values (Integrity, Passion, Accountability, Achievement, Leadership) and TRI Model (Trust, Respect, Inclusion). The AI Governance & Explainability Engineer is a hands‑on technical role within the Data Governance team responsible for ensuring AI, GenAI, and Agentic AI solutions are explainable, governable, auditable, and production‑ready. This role embeds governance directly into the AI technology stack, translating policies, regulatory expectations, and risk requirements into technical controls, automated checks, standardized artifacts, and release gates across the AI lifecycle. The role combines AI/ML engineering depth, GenAI and Agentic AI design knowledge, and governance discipline to ensure AI solutions are explainable and can be trusted, defended, and audited in production, particularly within the Microsoft Fabric and Purview ecosystem.

Requirements

  • Bachelor’s or Master’s degree in Computer Science, Information Systems, Data Science, Engineering, or a related field.
  • Minimum 7 years of experience in AI/ML engineering, data science, GenAI/LLMs, NLP, Agentic AI, data governance, or related roles.
  • Demonstrated experience operationalizing AI governance, explainability, and risk controls in production environments.
  • Deep understanding of Agentic AI architectures and lifecycle considerations.
  • Strong proficiency in Python with hands‑on experience in AI/ML engineering workflows.
  • Working knowledge of Microsoft Fabric (Lakehouse, OneLake, notebooks, pipelines).
  • Experience with Microsoft Purview (catalog, lineage, classification, ownership).
  • Experience with AI/ML and GenAI tooling, including: Azure AI Foundry / Azure ML; ML explainability libraries (e.g., SHAP); LLMs, RAG architecture, and prompt engineering.
  • Familiarity with Agentic AI frameworks and patterns (e.g., tool use, planning, reflection).
  • Experience integrating governance controls into CI/CD pipelines using GitHub or Azure DevOps.
  • Understanding of cloud platforms (Azure preferred; AWS/GCP a plus).
  • Experience producing audit‑ready technical documentation and evidence artifacts.
  • Familiarity with reporting and visualization tools (e.g., Power BI) for governance and monitoring views.
  • Strong analytical and problem‑solving abilities, particularly in risk‑based decision‑making.
  • Excellent written and verbal communication skills, with the ability to translate technical details into governance‑relevant insights.
  • Ability to lead governance execution initiatives and influence cross‑functional teams without direct authority.
  • Strong organizational skills with attention to detail and audit readiness.

Nice To Haves

  • Auto insurance or claims industry experience preferred.
  • Experience evaluating or governing model training approaches (e.g., NLP, generative models) without owning full training pipelines.
  • Familiarity with synthetic data governance (generation methods, limitations, risk documentation).
  • Experience with additional AI platforms (Databricks AI, Snowflake Cortex, Dataiku).
  • Experience in regulated industries (insurance, financial services, healthcare).

Responsibilities

  • Embed governance, explainability, and risk controls directly into AI, GenAI, and Agentic AI workflows.
  • Translate enterprise AI policies, standards, and Responsible AI principles into technical guardrails, automated checks, required evidence artifacts, and CI/CD release gates.
  • Implement governance as code and automation, eliminating reliance on manual or after-the-fact reviews.
  • Advise solution teams on explainability requirements for automated, semi-automated, and decision-support AI systems.
  • Ensure human-in-the-loop (HITL) controls are implemented where required by risk level or use case.
  • Define, generate, and manage explainability outputs that are appropriate to the end-user or reviewer persona, aligned to the decision context and operational use.
  • Document explainability assumptions, limitations, and residual risk as governance evidence.
  • Operationalize AI governance in Microsoft Purview by registering and maintaining AI models, features, prompts, agents, notebooks, and pipelines.
  • Maintain end-to-end lineage across data → features → models → inferences → outputs.
  • Apply ownership, stewardship, sensitivity, and classification metadata.
  • Ensure governed assets remain discoverable, versioned, traceable, and audit-defensible.
  • Apply governance patterns to LLMs, RAG, and Agentic AI solutions.
  • Ensure governance traceability when synthetic data or augmented data is used for training, testing, or evaluation.
  • Implement Agentic AI lifecycle governance, including: observability of agent actions, deviations, and failures; oversight of planning, reflection, and tool-use behavior; controls on autonomous vs. constrained operation.
  • Enable GenAI explainability, including: retrieval transparency for RAG (sources, relevance); inference context documentation; decision trace generation where applicable.
  • Own and operate explainability capabilities used for governance, audit, and trust.
  • Implement and operationalize techniques such as: feature attribution (e.g., SHAP or equivalent); driver and proxy detection; global and local model explanations.
  • Identify bias signals, risk indicators, and explainability gaps.
  • Store and manage explainability and observability outputs as governed, audit-ready artifacts.
  • Support audit, compliance, and risk review activities with defensible evidence.
  • Define and implement AI monitoring metrics, alerts, and thresholds for: performance degradation; bias and ethical risk indicators; drift and instability.
  • Partner with MLOps and platform teams to integrate monitoring into production pipelines.
  • Support AI incident response and post-incident reviews with governance evidence.
  • Ensure all observability outputs are retained, traceable, and audit‑ready.
  • Define and enforce governance checkpoints within CI/CD pipelines (DEV → TEST/UAT → PROD).
  • Implement automated release checks for: required documentation and evidence artifacts; explainability artifacts; monitoring configuration; data usage, lineage completeness, and medallion-layer alignment.
  • Partner with Engineering and MLOps teams on promotion decisions while owning governance readiness, not platform approval.