The Opportunity

We are building a dedicated AI Red Team to rigorously test and harden enterprise-scale AI products deployed to some of the world's largest organizations. Security testing is only part of enterprise AI assurance. We are seeking an AI Risk & Responsible AI Lead to design and operationalize structured evaluation frameworks across safety, bias, robustness, explainability, and data governance. This role ensures our AI systems are secure, trustworthy, measurable, and enterprise-ready.

What You'll Do

- Design and implement model evaluation frameworks across AI products
- Develop methodologies for:
  - Bias and fairness testing
  - Hallucination and reliability assessment
  - Robustness and stress testing
  - Safety benchmarking
- Evaluate training data governance practices
- Review RAG systems for retrieval accuracy and exposure risks
- Establish measurable risk metrics across AI deployments
- Align evaluation outputs with:
  - NIST AI Risk Management Framework
  - ISO 27701 privacy requirements
  - Enterprise governance standards
- Produce structured, executive-ready documentation
- Partner with product and engineering teams to integrate risk mitigation strategies

This role bridges AI engineering, governance, risk quantification, and enterprise accountability.
Job Type: Full-time
Career Level: Mid Level
Education Level: No Education Listed