About The Position

J.S. Held, a global consulting firm providing specialized technical, scientific, financial, and advisory services, is seeking an AI Security Engineer. This is a senior, hands‑on technical role responsible for designing, engineering, and operationalizing AI security across J.S. Held’s enterprise. The role serves as the central Cyber Security owner for all AI Security, ensuring AI technologies are securely designed, implemented, and operated across AI‑enabled third‑party applications, internal AI agents, models, MCP, RAG architectures, training and fine‑tuning pipelines, and supporting AI platforms.

The role balances hands‑on engineering, solution design, and architectural leadership. While expected to influence standards, patterns, and roadmaps, this is not a purely strategic role: the engineer will actively design and enable controls. Role weighting is approximately 70% AI Security Engineering (primary) and 30% Data Security Engineering (secondary), with emphasis on Microsoft Purview, especially where enterprise data is used by AI systems.

Requirements

  • 8+ years of experience in cybersecurity engineering, cloud security, application security, or data security
  • Direct, hands‑on experience with Azure AI Foundry and Copilot Studio in enterprise environments
  • Strong experience securing cloud and SaaS platforms (Azure preferred)
  • Deep understanding of identity, access control, data protection, and secure application/API design
  • Proven ability to translate security requirements into practical, deployable controls

Nice To Haves

  • Experience securing generative AI, LLM‑based systems, and agentic architectures
  • Experience with Microsoft Copilot Administration, Anthropic, and other AI platforms (e.g., OpenAI ecosystems)
  • Experience with Microsoft Purview (sensitivity labels/information protection, DLP, Insider Risk Management)
  • Familiarity with RAG architectures, vector databases, embeddings, and MCP integrations
  • Scripting or automation experience (e.g., Python or PowerShell) to integrate security controls into engineering workflows
  • Strong cross‑functional communication and influence skills

Responsibilities

  • AI Security Architecture & Guardrails: Define and evolve the enterprise AI Security Architecture, guardrails, and security requirements aligned to business objectives. Establish secure‑by‑design patterns across AI development, deployment, and operations, including requirements for hardening, hosting, access control, monitoring, and testing.
  • Platform & Engineering Enablement (Hands‑On): Design and engineer security controls for AI‑enabled SaaS applications; internal AI agents and automation workflows; model hosting, inference services, APIs, and orchestration layers; RAG architectures, vector databases, and embeddings; model training and fine‑tuning pipelines; and MCP and agent‑to‑agent interaction patterns.
  • AI Identity, Authentication & Authorization: Extend identity and access principles to non‑human identities and autonomous agents. Treat AI agents as first‑class identities, defining authentication, authorization, lifecycle management, and revocation. Implement delegated and “on‑behalf‑of” authorization patterns to distinguish human‑initiated actions from agent‑initiated actions. Apply least‑privilege and scope‑limiting controls to prevent privilege escalation in automated and multi‑agent workflows.
  • Threat Modeling & Risk Reduction: Identify and mitigate AI‑specific risks, including data leakage, prompt injection, jailbreaks, model abuse, data poisoning, model extraction, and AI supply‑chain risk. Ensure appropriate security testing and validation are embedded into AI development and deployment workflows.
  • Monitoring & Incident Readiness: Define logging, monitoring, and detection requirements for AI systems, models, and agent activity. Partner with SecOps to ensure AI‑related events are observable, auditable, and actionable. Support incident response and post‑incident analysis for AI‑related security events.
  • Cross‑Functional Delivery: Work closely with IAM, SecOps, AppSec, GRC, IT engineering, AI platform teams, and business stakeholders to embed security controls where they belong.
  • Data Protection & Governance: Design and enhance enterprise data security controls with a focus on AI‑driven data access. Implement and optimize Microsoft Purview, including data classification, sensitivity labeling, DLP, information protection, and visibility.
  • AI‑Aware Data Security: Ensure data security controls are aligned to AI architectures, reducing the risk of sensitive data exposure via prompts, agents, outputs, and downstream sharing. Support secure use of enterprise data in RAG pipelines, AI workflows, and training environments.
  • Multi‑Platform Data Flows: Contribute to data protection strategies across collaboration platforms, cloud services, and endpoints, ensuring consistent enforcement where possible.

Benefits

  • Our flexible work environment allows employees to work remotely when needed
  • Generous Annual Leave Policy
  • Comprehensive Medical Insurance