Job Summary
Provides senior-level support for designing, implementing, and maintaining the security framework for Molina’s artificial intelligence and machine learning systems. Serves as the subject matter expert for securing AI/ML workloads operating within Molina’s Microsoft Azure and Databricks environments.

Job Duties
Designs Secure AI/ML Architectures: Develops and promotes secure, resilient, and scalable architecture patterns for AI/ML solutions built on Microsoft Azure, Databricks, and other platforms.
AI/ML Threat Modeling: Conducts comprehensive threat modeling and risk assessments specifically for AI/ML systems, identifying vulnerabilities related to model inversion, data poisoning, adversarial attacks, and prompt injection.
Agentic AI Security: Leads the threat modeling, risk assessment, and security control design for autonomous AI agents, focusing on mitigating risks such as prompt injection, tool abuse, and uncontrolled agentic behavior.
AI Tools: Establishes and enforces a comprehensive security and governance framework for our AI tools, such as Model Context Protocol (MCP) servers, ensuring the integrity, confidentiality, and availability of contextual data.
MLSecOps Integration: Collaborates with our MLOps and Data Science teams to embed automated security controls ("shift-left") into the entire machine learning development lifecycle, from data ingestion to model deployment.
Architectural Guidance: Maintains AI reference architectures and guidance, and updates the specification and publication of AI standards and patterns covering AI technology, development, security, privacy, and observability.
Strategic Advisory & Business Alignment: Consults on AI capabilities across business and technology platforms, ensuring alignment between AI architecture frameworks and the organization’s strategic goals. Offers actionable recommendations on solution design, risk mitigation, and cross-domain impacts.
Partner & Collaborate: Partners with solution architects, technology leaders, business, governance, cybersecurity, compliance, and privacy teams to identify AI risks, document design decisions, and provide enterprise patterns to resolve issues.
Threat Research: Stays current with the latest threats and vulnerabilities targeting AI systems and develops proactive strategies to defend against them. Designs capabilities for AI governance and observability.
Job Type
Full-time
Career Level
Mid Level
Education Level
No Education Listed