Partnering with engineering, product, security, and legal teams to build, implement, and scale security controls that mitigate risks arising from AI/ML and agentic systems.
Establishing and operationalizing an enterprise AI governance framework, including defining policies, translating them into actionable technical control requirements, and ensuring effective implementation across AI systems.
Defining and coordinating guardrails for AI systems across inputs, outputs, and inter-agent communication (e.g., A2A, MCP), ensuring safety boundaries, content governance, and misuse prevention across orchestration and integration frameworks.
Crafting and enforcing robust access controls for tools, data sources, and enterprise systems accessible to AI agents, ensuring least-privilege access, secure invocation patterns, auditability, and clear segregation of duties across agent platforms and integration layers.
Identifying critical control points in autonomous agentic workflows where Human-in-the-Loop (HITL) review is required to mitigate high-risk decisions or actions.
Developing continuous monitoring methods and controls to ensure alignment with AI governance standards.
Ensuring compliance with internal policies and external regulatory frameworks (e.g., ISO/IEC 42001, NIST AI RMF, EU AI Act).
Evaluating threat models across the AI lifecycle to address risks such as prompt injection, data poisoning, adversarial attacks, and model compromise.
Developing Key Risk Indicators (KRIs) and metrics to monitor AI security posture and report trends to senior leadership and risk committees.
Supporting internal audits and regulatory examinations related to AI governance and cybersecurity risk.
Staying current on emerging AI technologies, agentic architectures, evolving threat landscapes, and industry guidelines.
Job Type: Full-time
Career Level: Mid Level
Number of Employees: 5,001-10,000 employees