Staff AI Agentic Security Engineer

Bridgewater Associates
New York, NY
Hybrid

About The Position

This role is a 50/50 split between building and protecting AI systems.

The Staff AI Agentic Security Engineer will be the hands-on AI leader within the Security Department, responsible for designing and implementing AI agents to modernize security operations, including automating threat detection, vulnerability triage, incident response, compliance monitoring, and developer security tooling. This involves setting the vision for an agent-powered security organization and then building it.

Additionally, the role requires embedding directly with Bridgewater’s technology and investment teams to help them build and deploy their own AI agents securely by design. This is a partnership role where the engineer will bring deep architectural expertise to the teams building the future of the firm. Key aspects include designing secure deployment architectures, defining identity and authorization strategies for agents, owning AI supply chain security, and developing defenses against prompt injection and model manipulation attacks. The engineer will also implement runtime safety and governance, architect secure agent-to-agent communication, and conduct security reviews and red teaming of agentic systems.

Requirements

  • Deep understanding of the AI market, with a finger on its pulse across both enterprise and open-source ecosystems.
  • Fluency across the full AI stack, including AI Foundations & Model Layer (LLM APIs and SDKs, RAG pipelines end to end, embedding models, chunking strategies, vector databases, retrieval patterns, fine-tuning, prompt engineering, and system prompt design).
  • Deep, hands-on experience with modern agent frameworks: LangGraph, LangChain, CrewAI, AutoGen, OpenAI Agents SDK, Google ADK, Semantic Kernel, Pydantic AI, Strands Agents, LlamaIndex, and Agno.
  • Familiarity with visual and low-code agent platforms: Dify, LangFlow, Flowise, n8n (AI Agent nodes), and their security tradeoffs.
  • Understanding of agentic coding tools and environments (Claude Code, Cursor, Windsurf, Open Interpreter, Aider, and similar): how these tools interact with codebases, filesystems, and APIs, and the risks they introduce.
  • Deep understanding of Model Context Protocol (MCP) server architecture, tool registration, trust boundaries, and the emerging attack surface around MCP-based integrations.
  • Knowledge of AI Security Tooling & Defense: runtime guardrail frameworks (NVIDIA NeMo Guardrails, Meta LlamaFirewall, LLM Guard, OpenGuardrails, Guardrails AI, Rebuff, and custom detection pipelines).
  • Expertise in AI-specific attack vectors: prompt injection (direct and indirect), jailbreaking, data exfiltration via tool use, agent goal hijacking, training data poisoning, model inversion, and supply chain attacks on model weights and plugins.
  • Knowledge of AI governance and compliance standards: OWASP Top 10 for LLMs, NIST AI RMF, EU AI Act, ISO 42001 — and practical implementation of these frameworks.
  • Familiarity with AI red-teaming tools and methodologies for testing agents, models, and end-to-end agentic workflows in adversarial conditions.
  • 10+ years of experience in software engineering, security engineering, or application security, with demonstrated impact at a senior or staff level.
  • 3+ years of hands-on experience building, deploying, or securing AI/ML systems, including LLM-based applications and agentic workflows.
  • Proven track record of building production-grade AI agents or agent-powered tools — not just evaluating or advising on them.
  • Deep, current knowledge of the AI agent ecosystem across enterprise and open-source: frameworks, orchestration tools, model providers, RAG infrastructure, and developer tooling.
  • Demonstrated expertise in AI-specific security threats, including prompt injection defense, agent sandboxing, identity for autonomous systems, and supply chain security for AI toolchains.
  • Experience securing cloud-native applications and infrastructure (AWS, Azure, or GCP) with strong understanding of identity, networking, and data protection.
  • Expert in Python and/or TypeScript with the ability to build production-grade security tooling, agents, and automation.
  • Proven ability to work as an embedded partner with engineering and research teams — influencing through expertise and trust, not mandates.
  • Exceptional communication skills: able to translate complex AI security concepts into clear, actionable guidance for engineers, researchers, and leadership.
  • Strong judgment in balancing security risk, business velocity, and the realities of a fast-moving AI landscape.

Nice To Haves

  • Contributions to open-source AI security projects or frameworks.
  • Background in financial services or other highly regulated industries.
  • Experience red-teaming LLMs and agentic systems in adversarial settings.
  • Familiarity with AI observability and tracing tools (LangSmith, Langfuse, Helicone, Arize) for monitoring agent behavior in production.

Responsibilities

  • Design, develop, and deploy autonomous agents for threat detection, alert triage, vulnerability management, and incident response.
  • Reimagine existing security processes through the lens of agentic AI, replacing manual runbooks with intelligent agents that reason, act, and escalate.
  • Build agent-powered security copilots for engineering teams that perform real-time code review, suggest secure patterns, and catch vulnerabilities before they ship.
  • Evaluate, select, and implement the right mix of frameworks, orchestration tools, and infrastructure for the department’s agent platform.
  • Build agents that continuously validate the configurations, access policies, and data handling of agents deployed by our investment teams against regulatory and internal frameworks.
  • Stay deeply current on the AI landscape (enterprise and open-source) and translate that knowledge into real capability.
  • Design secure deployment architectures for AI agents across the firm, defining sandboxing strategies, execution boundaries, network isolation, and blast-radius controls.
  • Architect identity strategies for a world where agents act on behalf of humans, defining how agents authenticate, what permissions they hold, how credentials are scoped and rotated, and how to enforce least-privilege across multi-agent systems and MCP server integrations.
  • Own the security posture of the AI supply chain end to end, evaluating the security of agent frameworks, MCP servers, skills/plugins, model providers, embedding pipelines, vector databases, and every dependency in between.
  • Be the firm’s leading expert on prompt injection, jailbreaking, data poisoning, indirect injection via tool outputs, and agent manipulation attacks.
  • Design and deploy runtime defenses using tools like NeMo Guardrails, LlamaFirewall, LLM Guard, OpenGuardrails, Guardrails AI, and custom detection layers.
  • Build monitoring, kill switches, escalation triggers, and anomaly detection for AI agents in production.
  • Design human-in-the-loop checkpoints calibrated to risk tolerance and action severity.
  • Implement policy-as-code that governs agent behavior, tool access, data exposure, and output validation.
  • Architect trust boundaries and communication protocols for multi-agent systems — ensuring orchestration, tool use, and data sharing follow least-privilege principles and are resilient to injection and manipulation.
  • Conduct deep-dive security architecture reviews of agentic systems before they go to production.
  • Red-team LLM integrations and agent workflows to find weaknesses before adversaries do.

Benefits

  • Competitive suite of benefits
  • Opportunities that will challenge you and unlock your potential
  • Personal growth and professional development


What This Job Offers

  • Job Type: Full-time
  • Career Level: Senior
  • Education Level: No Education Listed
  • Number of Employees: 501-1,000 employees

© 2024 Teal Labs, Inc