About The Position

PJT Partners is a global advisory-focused investment bank. Our team of senior professionals delivers a wide array of strategic advisory, shareholder advisory, restructuring and special situations, and private fund advisory and placement services to corporations, financial sponsors, institutional investors, and governments around the world. We offer a unique portfolio of advisory services designed to help our clients achieve their strategic objectives. Through PJT Park Hill, we also provide private fund advisory and fundraising services for alternative investment managers, including private equity funds, real estate funds, and hedge funds.

From the beginning, PJT Partners has firmly believed that having the best people is key to building an enduring franchise. Our perspective was, and remains, that a great team both brings in top-tier clients and appeals to a wide range of diverse, talented colleagues. Fostering an inclusive culture that welcomes differing perspectives and beliefs enables us to provide the best advice and insights to our clients.

The Technology department at PJT is responsible for creating and continuously improving a robust and secure technology foundation that supports the firm's business activities. As artificial intelligence becomes deeply embedded in both internal operations and the broader vendor ecosystem, the firm faces a new and rapidly evolving risk surface. The AI Security & Risk Manager will be PJT's dedicated subject matter expert at the intersection of AI and security, helping the firm navigate this landscape with rigor and clarity.

We are seeking a high-performing AI Security & Risk professional to join the Cybersecurity team. Reporting to the Head of Technology Risk, this individual will own the firm's approach to identifying, assessing, and managing risk introduced by AI, both through internal AI deployments and through vendors increasingly embedding AI into their platforms.
The role requires a practitioner who can operate at both a strategic and technical level: fluent in AI architecture and threat modeling while equally capable of communicating risk clearly to senior leadership and regulators. The candidate must build strong relationships across Technology, Legal, Compliance, and the business to ensure AI risk is managed as an enterprise priority, not a silo.

Requirements

  • Bachelor's degree in Computer Science, Information Security, Data Science, or a related field; advanced degree a plus.
  • 7–10 years of experience in information security, technology risk, or a related field, with a minimum of 3 years focused on AI systems, machine learning security, or AI governance.
  • Deep understanding of the AI and LLM landscape, including foundation model architecture, agentic systems, RAG pipelines, and the risk implications of each.
  • Hands-on experience evaluating AI platforms and products, including the ability to assess vendor claims about model behavior, data handling, and security controls with appropriate skepticism.
  • Familiarity with AI risk frameworks and emerging standards, including NIST AI RMF, MITRE ATLAS, OWASP LLM Top 10, and ISO/IEC 42001.
  • Experience with vendor risk management in a regulated financial services environment, including contract negotiation support and third-party security assessments.
  • Knowledge of relevant regulatory frameworks including DORA, SOX, SEC cybersecurity disclosure rules, and GDPR/CCPA as they apply to AI data flows.
  • Strong technical skills sufficient to evaluate AI system architecture, API security, data pipeline design, and access control models without reliance solely on vendor documentation.
  • Excellent communication skills, with the ability to translate highly technical AI risk concepts into clear, decision-ready language for senior leadership, Legal, and Compliance.
  • Ability to work independently, manage competing priorities, and operate effectively in a fast-paced, lean team environment.

Nice To Haves

  • Experience operating in a Microsoft-first environment, including familiarity with Entra ID, Azure, and M365 security tooling, is a strong plus.
  • Relevant certifications such as CISSP, CISM, CRISC, or emerging AI-focused credentials.

Responsibilities

  • Own and maintain the firm's AI risk framework, covering model risk, data privacy, adversarial threats, third-party AI, and regulatory compliance.
  • Develop and enforce AI usage policies in collaboration with Legal and Compliance, including acceptable use, data classification requirements, and prompt handling standards.
  • Maintain an inventory of AI tools deployed firm-wide — both sanctioned and shadow — and assess associated risk profiles.
  • Provide regular AI risk reporting to the Head of Technology Risk and senior leadership, including emerging threat trends, vendor posture changes, and control gaps.
  • Monitor the evolving regulatory environment for AI (EU AI Act, SEC guidance, DORA, NY DFS) and advise on compliance obligations and required controls.
  • Lead security and risk assessments of vendors introducing AI capabilities into existing or new platforms, including evaluating model transparency, data handling practices, and auditability.
  • Develop and maintain a structured AI vendor evaluation framework, incorporating criteria for model governance, output reliability, data residency, and incident response obligations.
  • Partner with Procurement and Legal to ensure AI-specific provisions are reflected in vendor contracts, including data usage restrictions, model change notifications, and liability terms.
  • Maintain a tiered risk register of third-party AI integrations, with ongoing monitoring for material changes to vendor AI functionality, architecture, or ownership.
  • Engage directly with vendor security and product teams to assess AI-related controls and drive remediation of identified gaps.
  • Conduct threat modeling for AI systems and integrations, including risks from prompt injection, model inversion, training data poisoning, and adversarial inputs.
  • Evaluate AI-specific attack surfaces introduced by LLM integrations, agentic workflows, and MCP-connected data sources.
  • Collaborate with infrastructure and application teams to embed AI security controls into deployment pipelines and system design reviews.
  • Assess risks associated with AI-generated content, including deepfake vectors, synthetic phishing, and automated social engineering in the context of financial services.
  • Contribute to the firm's broader security architecture by ensuring AI components are assessed within the existing control framework.
  • Serve as the security and risk point of contact for the firm's internal AI deployments, including Claude Enterprise and any future platform integrations.
  • Evaluate data retention, access control, and logging practices for AI platforms to ensure alignment with the firm's compliance and eDiscovery obligations.
  • Provide risk assessments for proposed AI use cases across the firm, including a structured framework for approving, conditionally approving, or declining adoption.
  • Support audit and compliance reviews related to AI, including evidence collection and engagement with regulators or external assessors as required.
  • Develop and deliver AI security awareness content for technology staff and end users.

Benefits

  • Discretionary bonus component