We are building a dedicated AI Red Team to rigorously test and harden enterprise-scale AI products, and we are looking for an adversarial machine learning specialist who thinks like an attacker. The role focuses on identifying vulnerabilities in LLM-driven systems: breaking model guardrails, exploiting data pathways, and stress-testing AI deployments before they reach enterprise customers. This is a hands-on technical role at the core of AI security, helping ensure AI systems are resilient before they are deployed at scale.
Job Type: Full-time
Career Level: Mid Level
Education Level: None listed