This position defines and leads the security architecture strategy for AI/ML systems, including LLMs, GenAI tools, and AI-driven features. Key responsibilities:
- Collaborate with engineering and data science teams to secure the AI/ML pipeline across data ingestion, training, deployment, and monitoring.
- Develop threat models for AI systems and implement mitigations against adversarial ML, data poisoning, model theft, and prompt injection.
- Evaluate and advise on the secure use of third-party AI tools, APIs, and model integrations.
- Partner with GRC and Legal to build policies, patterns, and guardrails for responsible and secure AI development.
- Guide the implementation of privacy-enhancing technologies and ensure regulatory compliance.
- Conduct risk assessments on AI use cases and lead remediation of identified security gaps.
- Design, review, and secure architectures involving the Model Context Protocol (MCP), and architect agentic AI workflows.
- Mentor engineers and architects on AI security principles and stay current on the evolving AI threat landscape.
Job Type: Full-time
Career Level: Senior
Education Level: Bachelor's degree
Number of Employees: 101-250 employees