The Information Security Specialist – AI Penetration Tester is responsible for conducting advanced offensive security testing across AI/ML systems, LLM integrations, GenAI platforms, and associated infrastructure. This role serves as a subject-matter expert in AI/LLM security, partnering with engineering, cyber, cloud, and architecture teams to identify vulnerabilities, improve controls, and ensure safe and compliant deployment of AI capabilities across the enterprise.

AI/LLM Offensive Security & Vulnerability Testing
- Conduct Penetration Tests: Design and execute comprehensive penetration tests targeting AI/ML models, LLM applications, model pipelines, retrieval systems, data agents, and AI-enabled business workflows.
- AI/LLM Vulnerability Analysis: Identify vulnerabilities such as jailbreaking, prompt injection, model extraction, adversarial ML attacks, data poisoning, RAG bypasses, and safety guardrail circumvention.
- Tooling & Automation: Evaluate and develop tooling (including internal utilities and open-source frameworks) to automate and scale AI/LLM security testing.

Security Architecture, Hardening & Risk Assessment
- Assess Security Posture: Analyze training data governance, guardrail design, inference endpoints, system prompts, agent autonomy, model monitoring, and model-ops pipelines.
- Risk Assessments: Perform security and safety risk analyses on new and existing AI/ML deployments, including cloud-based services, APIs, model marketplaces, and third-party LLM integrations.
- Model Supply Chain Security: Assess AI supply chain risks, dependency integrity, and alignment with enterprise standards and regulatory obligations.

Documentation, Reporting & Communication
- Report Findings: Deliver clear, actionable findings to both technical and non-technical stakeholders.
- Produce detailed reporting, including:
  - Executive summaries
  - Technical proofs of concept
  - Prioritized remediation recommendations
- Stakeholder Engagement: Collaborate with Engineering, Data Science, Cloud, Cyber Defense, Architecture, and Risk to remediate findings and improve AI security posture.

Governance, Standards & Continuous Improvement
- Develop Best Practices: Contribute to organization-wide AI security standards, policies, control objectives, and hardening practices.
- Regulatory Compliance: Ensure AI penetration testing aligns with regulatory, privacy, model safety, and internal policy requirements.
- Continuous Learning: Maintain deep expertise in emerging AI threats, industry frameworks, evaluation methodologies, and global safety standards.

Incident Response & Audit Support
- Participate in AI/ML-related security incident investigations, providing subject-matter expertise on root cause analysis and exploitation methods.
- Support audit preparation and assist in drafting management responses, remediation plans, and risk treatment documentation.
Job Type
Full-time
Career Level
Mid Level
Number of Employees
5,001-10,000 employees