AI Controls Engineer

Adobe, San Jose, CA

About The Position

Adobe is seeking an AI Controls Engineer to partner with engineering, product, security, and legal teams in building and scaling security controls that mitigate risks from AI/ML and agentic systems. The role centers on establishing and operationalizing an enterprise AI governance framework, defining guardrails and access controls for AI agents, and ensuring alignment with internal policies and external regulatory frameworks such as ISO/IEC 42001, the NIST AI RMF, and the EU AI Act.

Requirements

  • 3+ years of hands-on experience in AI governance / Responsible AI, including defining controls, risk assessments, compliance oversight, or assurance of AI/ML systems.
  • 6+ years of experience in risk management, cybersecurity, compliance, or consulting, including client-facing or cross-functional delivery.
  • Demonstrated experience translating AI governance requirements into technical security controls and working with engineering teams to implement and scale them.
  • Experience performing AI threat modeling across traditional ML and Generative AI systems, including agent-based architectures.
  • Familiarity with agentic AI technologies and protocols, including AI agents and autonomous workflows, the Model Context Protocol (MCP), and Agent-to-Agent (A2A) communication.
  • Familiarity with AI governance standards and regulatory frameworks such as the NIST AI RMF, ISO/IEC 42001, and the EU AI Act.
  • Experience with governance, risk, and compliance (GRC) platforms and collaboration with privacy, legal, and security teams.
  • Knowledge of access control frameworks for AI systems, particularly regarding tool invocation and agent permissions.
  • Bachelor’s degree or equivalent experience in Computer Science, Information Systems, or a related field required; advanced degree or equivalent experience preferred.
  • Relevant certifications such as AIGP, CISSP, CISA, CISM, Security+, ISO/IEC 42001 Implementer/Auditor, or an equivalent credential.

Responsibilities

  • Partnering with engineering, product, security, and legal teams to build, implement, and scale security controls that mitigate risks arising from AI/ML and agentic systems.
  • Establishing and operationalizing an enterprise AI governance framework — including defining policies, translating them into actionable technical control requirements, and ensuring effective implementation across AI systems.
  • Defining and coordinating guardrails for AI systems across inputs, outputs, and inter-agent communication (e.g., A2A, MCP), ensuring safety boundaries, content governance, and misuse prevention across orchestration and integration frameworks.
  • Crafting and enforcing robust access controls for tools, data sources, and enterprise systems accessible to AI agents, ensuring least-privilege access, secure invocation patterns, auditability, and clear segregation of duties across agent platforms and integration layers.
  • Identifying critical control points in autonomous agentic workflows where Human-in-the-Loop (HITL) review is required to mitigate high-risk decisions or actions.
  • Developing continuous monitoring methods and controls to ensure ongoing alignment with AI governance standards.
  • Ensuring alignment with internal policies and external regulatory frameworks (e.g., ISO/IEC 42001, NIST AI RMF, EU AI Act).
  • Evaluating threat models across the AI lifecycle to address risks including prompt injection, data poisoning, adversarial attacks, and model compromise.
  • Developing Key Risk Indicators (KRIs) and metrics to monitor AI security posture and report trends to senior leadership and risk committees.
  • Supporting internal audits and regulatory examinations related to AI governance and cybersecurity risk.
  • Staying current on emerging AI technologies, agentic architectures, evolving threat landscapes, and industry guidelines.