About The Position

Businessolver has been a leader in benefits technology and services since 1998, focusing on client success through innovative solutions and a strong service-oriented culture. The Principal PM, AI Governance and Compliance will own the technical and operational control layer for AI governance and compliance across the company’s AI-enabled capabilities. The role ensures that AI systems are deployed and operated safely by establishing appropriate technical standards, review workflows, control points, documentation, evidence, and risk management practices. This leader will partner with Security, Legal, Privacy, Product, Engineering, and Architecture teams to build practical governance mechanisms into the AI system lifecycle, from design and development through monitoring and change management. The position requires deep technical expertise in AI system lifecycles, software delivery, model and prompt controls, vendor assessments, and evidence-based compliance operations.

Requirements

  • Bachelor’s degree in Computer Science, Information Security, Software Engineering, Information Systems, Engineering, or a related technical field required.
  • 8+ years of experience in technical product management, security engineering, risk engineering, compliance engineering, platform governance, or a related field.
  • Strong technical understanding of AI and software system lifecycles, including APIs, model integration patterns, testing approaches, logging, monitoring, and deployment controls.
  • Experience working with governance, compliance, privacy, or security requirements in software products, especially in environments involving sensitive data.
  • Proven ability to translate policy and control requirements into technical workflows, engineering requirements, and operating processes.
  • Experience coordinating across Legal, Privacy, Security, Product, and Engineering teams on control design and risk management.
  • Strong written communication skills, with the ability to produce clear documentation, review artifacts, and diligence materials for internal and external audiences.

Nice To Haves

  • Master’s degree in Cybersecurity, Computer Science, Engineering, Information Assurance, Artificial Intelligence, or a related discipline preferred.
  • Commitment to ongoing professional development in AI governance, secure software delivery, privacy engineering, compliance frameworks, and model risk management.
  • Experience governing AI or machine learning systems in production environments.
  • Familiarity with emerging AI governance frameworks, model risk management practices, and responsible AI control structures.
  • Experience with technical documentation systems, workflow tools, control repositories, and audit evidence management.
  • Background in security architecture, privacy engineering, enterprise compliance, or regulated SaaS platforms.
  • Experience evaluating third-party AI vendors and integrating vendor controls into internal governance processes.

Responsibilities

  • Define and maintain the governance framework for AI-enabled capabilities across the software and model lifecycle, including intake, design review, implementation controls, testing expectations, deployment review, and ongoing monitoring.
  • Establish technical control requirements for AI systems, including documentation standards, model and prompt inventories, traceability, approval paths, and change management expectations.
  • Ensure governance requirements are practical for engineering teams and embedded into delivery workflows where possible.
  • Operate the processes required to support internal and external compliance expectations for AI-enabled products and internal AI use cases.
  • Maintain evidence, decision records, inventories, risk assessments, and control mappings needed for audits, client diligence, investor diligence, and internal reviews.
  • Coordinate responses to AI-related diligence requests and partner with subject matter experts to ensure responses are accurate and supportable.
  • Partner with Security, Privacy, Legal, and Engineering to identify and manage risks related to model behavior, data handling, access patterns, third-party AI services, output quality, explainability, and system changes.
  • Build and run review paths for new AI use cases, material updates, and exceptions requiring elevated scrutiny.
  • Define escalation criteria, mitigation tracking, and approval workflows for higher-risk AI implementations.
  • Work directly with product and engineering teams to translate policy and control requirements into technical implementation guidance.
  • Help teams design compliant approaches for logging, testing, access control, human review, fallback behavior, documentation, and monitoring.
  • Influence architecture and delivery decisions so governance is built into systems rather than applied after the fact.
  • Maintain current inventories of AI systems, models, vendors, prompts, datasets, and related technical dependencies as required by company governance standards.
  • Ensure documentation is complete and usable across lifecycle stages, including design intent, data usage, review outcomes, testing artifacts, and operational controls.
  • Improve the tooling and process model for collecting, maintaining, and retrieving governance evidence.
  • Identify opportunities to automate governance activities within engineering and product workflows, including intake routing, policy checks, documentation capture, control verification, and evidence collection.
  • Partner with engineering teams to embed governance checks into existing delivery systems and lifecycle tooling.
  • Scale governance operations in a way that increases control coverage without creating unnecessary process overhead.

Benefits

  • Comprehensive benefits package