We are seeking an AI Security Engineer to lead the implementation, monitoring, and continuous improvement of security, governance, and trust controls for AI systems across the organization. This role focuses on operationalizing AI system security controls using the Agentic Trust Framework, mapped to OWASP guidance and the NIST AI RMF, with particular emphasis on observability engineering, behavioral monitoring, policy enforcement, misuse detection, and risk-informed response.

This person will serve as a bridge between the Security, Engineering, Data, Platform, Compliance, and AI product teams, ensuring AI systems are not only functional and performant but also trustworthy, auditable, resilient, and aligned with enterprise governance requirements. The ideal candidate combines technical depth in AI/ML systems, strong security and monitoring instincts, and the ability to define practical controls for complex, fast-evolving agentic and generative AI environments.

We expect U.S.-based working hours, with the majority of the team working in the Eastern and Central Time Zones.
Job Type
Full-time
Career Level
Senior
Education Level
No Education Listed