Lead Cybersecurity Engineer, Data Loss Prevention & AI Governance

McGraw Hill LLC.
$136,000 - $190,000
Remote

About The Position

At McGraw Hill, we are dedicated to delivering digital learning experiences that transform education for learners and educators. Our focus is on creating seamless, impactful products that truly benefit our users while supporting growth and collaboration across teams. We foster a culture that values innovation, teamwork, and a balance between career growth and personal well-being.

How can you make an impact? The Cybersecurity Engineer – AI & DLP is responsible for designing and implementing data protection and governance controls across enterprise AI platforms, such as generative AI and AI-assisted development tools. This position centers on preventing data leaks, overseeing AI interactions with sensitive information, and applying security policies using DLP technologies, logging, and automated controls. The engineer will assess risks associated with AI platforms, set up inspection and monitoring systems, and create governance frameworks that ensure AI tool usage complies with organizational security, privacy, and compliance standards.

This is a remote position open to applicants authorized to work for any employer within the United States.

Requirements

  • 15+ years of applicable experience.
  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • Strong communication skills and comfort working directly with business stakeholders, vendors, and leadership.
  • Ability to present risks and recommendations to leadership.
  • Ability to translate complex identity concepts into business value.
  • Understanding of the Model Context Protocol (MCP), Retrieval-Augmented Generation (RAG), and API integrations.
  • Strong knowledge of DLP technical controls, concepts, and end user computing behaviors.
  • Experience administering the Microsoft tool suite, particularly M365 Copilot, GitHub Copilot, and Microsoft Purview.

Nice To Haves

  • In-depth knowledge of agentic AI usage and guardrails from an end user and development perspective.
  • Knowledge of infrastructure and engineering of client/server compute systems.

Responsibilities

  • Define and implement AI security controls, such as prompt filtering, response inspection, redaction, and usage monitoring, to ensure enterprise AI tools operate within approved data protection and compliance boundaries.
  • Evaluate inputs and outputs of enterprise AI tools (e.g., ChatGPT, Claude, and internal LLM platforms) to identify risks related to sensitive data exposure, prompt injection, and intellectual property leakage.
  • Design and implement technical guardrails and monitoring controls—including prompt inspection, output filtering, and DLP policies—to ensure AI usage aligns with enterprise security and data governance standards.
  • Design, implement, and operate Data Loss Prevention (DLP) controls to prevent the exposure of sensitive data across enterprise AI platforms and generative AI tools.
  • Partner with engineering, AI/data science, and Digital Workspace teams to integrate security controls into AI platforms, including prompt monitoring, data classification, and access controls.
  • Evaluate emerging AI tools, models, and AI-assisted development platforms to identify cybersecurity risks and recommend appropriate security requirements and mitigations.
  • Implement logging, monitoring, and alerting capabilities to provide visibility into how enterprise data is accessed, processed, and shared through AI systems.
  • Develop and enforce policies and technical controls that prevent the use of sensitive data (e.g., PII, credentials, proprietary content) within AI prompts, training datasets, or integrations.
  • Design and implement a Data Loss Prevention (DLP) strategy across all McGraw Hill infrastructure systems (Microsoft Purview, Zscaler, cloud environments). Operationalize alert and triage standard operating procedures to protect sensitive emails, uploads, and other avenues of data loss.
  • Support the design of secure architecture for enterprise AI platforms, including controls for data handling, model access, API usage, and third-party integrations.
  • Contribute to security awareness and guidance for developers and employees on safe and responsible use of generative AI tools.

Benefits

  • The work you do at McGraw Hill will be work that matters. We are collectively building experiences that will help shape the future of education. Play your part and experience a sense of fulfillment that will inspire you to even greater heights.
  • An annual bonus plan may be provided as part of the compensation package, in addition to a full range of medical and/or other benefits, depending on the position offered.