Cybersecurity AI Risk and Governance Director, Global

Vantage Data Centers
Denver, CO (Hybrid)

About The Position

The AI Cybersecurity Director is responsible for the technical security, risk management, and governance enforcement of artificial intelligence (AI), machine learning (ML), and large language model (LLM) systems deployed across Vantage Data Centers’ operational, operational technology (OT), and enterprise environments. This role serves as the technical and security authority for AI security, ensuring AI systems are architected, deployed, and operated with appropriate controls for data protection, model integrity, access governance, monitoring, and human‑in‑the‑loop decision enforcement. The AI Cybersecurity Director ensures AI technologies deliver business value without introducing unacceptable cyber, operational, safety, workforce, or regulatory risk, in alignment with the Global Policies and Standards. This role is based in Denver, CO or Ashburn, VA, in alignment with our flexible work policy (three days on site required, two days flexible).

Requirements

  • Bachelor’s degree in Cybersecurity, Computer Science, Data Science, Engineering, or related field, or equivalent experience.
  • Minimum of 10 years of experience in cybersecurity, security architecture, or risk engineering roles.
  • Hands‑on experience securing data pipelines, APIs, cloud platforms, and analytics or ML‑enabled systems.
  • Strong understanding of identity, access management, encryption, logging, and secure system design.

Nice To Haves

  • Direct experience securing AI/ML platforms, LLMs, or analytics pipelines.
  • Experience with cloud security (Azure, AWS, GCP) and SaaS‑based AI platforms.
  • Familiarity with OT, critical infrastructure, or safety‑critical environments.
  • Security certifications such as CISSP, CCSP, CISM, or cloud security certifications.

Responsibilities

  • Establish enterprise governance for detection, classification, and management of unauthorized (shadow) AI across business units, in coordination with centralized AI functions.
  • Define and enforce security architecture standards for AI, ML, and LLM platforms across cloud, hybrid, on‑prem, and OT‑adjacent environments.
  • Provide security design oversight and approval for AI systems, including data pipelines, model hosting, inference paths, APIs, and integrations.
  • Define enterprise methodology for AI security assessment covering architecture, design, and implementation across applications, agents, and workflows.
  • Ensure AI architectures enforce segmentation, least privilege, deterministic behavior, and fail‑safe operation, particularly where OT or critical infrastructure data is involved.
  • Establish AI‑specific incident response playbooks and lead response to AI‑related security, safety, or governance incidents.
  • Enforce controls preventing unauthorized model retraining, autonomous learning, or use of live production or OT data outside approved intent.
  • Define security requirements for explainability, traceability, and output validation where AI influences operational, workforce, safety, or compliance outcomes.
  • Drive alignment with ISO 42001 and related AI governance standards across applicable teams.
  • Enforce protections against prompt injection, data leakage, hallucination risk, unauthorized context expansion, and external model training exposure.
  • Ensure sensitive enterprise, operational, personnel, and contractual data is not exposed to or retained by external AI platforms without approved safeguards.
  • Approve and oversee AI data ingestion pipelines, enforcing purpose limitation, data minimization, and classification requirements.
  • Validate encryption, access logging, retention, and deletion controls for data used by AI systems.
  • Define and enforce controls preventing cross‑domain data correlation that violates trust boundaries or governance constraints.
  • Perform AI‑specific threat modeling, including risks such as data poisoning, model theft, inference abuse, output manipulation, and decision integrity compromise.
  • Integrate AI threats into enterprise cybersecurity and OT risk models, including definition of compensating controls and escalation for systems exceeding risk tolerance.
  • Own and maintain the AI risk register covering confidentiality, integrity, availability, explainability, data quality, model drift, adversarial attacks, and business impact.
  • Ensure AI systems generate telemetry, logging, and audit trails sufficient to detect misuse, drift, or anomalous behavior.
  • Integrate AI security monitoring into SOC, SIEM, and enterprise incident response workflows.
  • Enforce prohibitions on autonomous AI control of OT assets, including power, cooling, BMS, fire suppression, and physical access systems.
  • Validate one‑way data flows, read‑only access models, and manual override requirements where AI consumes OT telemetry.
  • Partner with OT and infrastructure teams to ensure AI enhances visibility and decision support without compromising safety, reliability, or uptime.
  • Oversee security reviews of vendor‑provided and embedded AI capabilities, including model behavior, data handling, and contractual protections.
  • Define and enforce minimum security and governance requirements for AI vendors, including audit rights and termination conditions.

Benefits

  • Medical, dental, and vision coverage
  • Life and AD&D insurance
  • Short- and long-term disability coverage
  • Paid time off
  • Employee assistance program
  • Participation in a 401(k) program that includes company match
  • Many additional voluntary benefits