We are building a dedicated AI Red Team to rigorously test and harden enterprise-scale AI products. This role is for an adversarial machine learning specialist who thinks like an attacker: identifying vulnerabilities in LLM-driven systems, breaking model guardrails, exploiting data pathways, and stress-testing AI deployments before they reach enterprise customers. This is a hands-on technical role at the core of AI security.
Job Type: Full-time
Career Level: Senior
Education Level: None listed
Number of Employees: 1-10