We are building an elite AI Red Team to stress-test and harden enterprise-scale AI products deployed at some of the world’s largest organizations. This is not a theoretical research role. It sits at the intersection of adversarial machine learning, enterprise security architecture, and governance. You will lead the design and execution of structured red team engagements across multiple AI systems, and translate technical risk into enterprise-aligned assurance. If you have ever been frustrated watching AI risk findings sit in a slide deck with no operational impact, this role is designed to change that: it ensures AI security findings are integrated into enterprise governance frameworks.
Job Type
Full-time
Career Level
Mid Level
Education Level
No Education Listed