- Define enterprise AI security architecture: develop reference architectures, guardrails, and standards for secure data pipelines, model training/inference, and AI-integrated applications across on-prem and cloud.
- Secure MLOps/ML platforms: architect identity, secrets management, network segmentation, and least-privilege access for feature stores, model registries, orchestration, and deployment pipelines.
- Data protection by design: establish controls for sensitive data ingestion, anonymization/pseudonymization, encryption (at rest and in transit), tokenization, and lineage across AI workflows.
- Adversarial ML defense: design controls and tests for model poisoning, evasion, model theft/exfiltration, prompt injection, jailbreaking, data leakage, and output manipulation.
- AI supply chain security: govern third-party models, APIs, and datasets; enforce SBOMs for AI components; evaluate provenance, licensing, and dependency risk.
- Policy and governance integration: translate AI security requirements into actionable standards and control evidence; align with enterprise risk, compliance, and model governance processes.
- Threat modeling and security testing: lead threat modeling for AI systems; design red-teaming and secure evaluation methods for models and agents; integrate chaos/resilience testing.
- Secure development lifecycle: embed AI-specific security checks (static/dynamic scans, IaC policy-as-code, data quality gates, bias/robustness checks) into CI/CD and change management.
- Runtime protection: implement guardrails, content filters, output validation, rate limiting, anomaly detection, and monitoring for AI services and agentic workflows.
- Observability and incident response: define logging/telemetry (model inputs/outputs, drift, performance, safety events); integrate AI-specific playbooks into SOC operations.
- Zero Trust for AI: design identity-aware access, micro-segmentation, and continuous verification for data scientists, services, and agents.
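As a minimal illustration of the runtime-protection item above, the sketch below combines a token-bucket rate limiter with simple regex-based output validation for an AI service. Class names, blocked patterns, and thresholds are illustrative assumptions, not any specific product's API; production guardrails would use vetted detectors rather than ad hoc regexes.

```python
import re
import time

class TokenBucket:
    """Allow at most `capacity` requests per `refill_period` seconds."""
    def __init__(self, capacity, refill_period, clock=time.monotonic):
        self.capacity = capacity
        self.refill_rate = capacity / refill_period  # tokens added per second
        self.tokens = float(capacity)
        self.clock = clock
        self.last = clock()

    def allow(self):
        # Refill proportionally to elapsed time, capped at capacity.
        now = self.clock()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Crude output filter: block responses that appear to leak credentials
# or sensitive identifiers. Patterns here are placeholders.
BLOCKED_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-like identifier
]

def validate_output(text):
    """Return (allowed, reason) for a model response before it is served."""
    for pat in BLOCKED_PATTERNS:
        if pat.search(text):
            return False, f"blocked by pattern {pat.pattern!r}"
    return True, "ok"
```

In practice these two checks would sit behind the inference endpoint, with blocked events feeding the anomaly-detection and safety-event telemetry described above.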
- Privacy and ethics controls: partner with privacy and legal to operationalize consent, minimization, purpose limitation, and responsible AI guardrails, including human-in-the-loop where appropriate.
- Resilience and continuity: design disaster recovery, backup/restore, model reproducibility, and contingency plans for AI platforms and critical use cases.
- Vendor/platform assessments: evaluate cloud AI services, open-source frameworks, and commercial tools for security posture, compliance, and fit-for-purpose.
- Risk management: lead control testing and risk assessments for AI initiatives; document residual risks and remediation plans; support audits and regulatory queries.
- Reference implementations: deliver secure patterns, sample code, and automation (e.g., reusable Terraform/Policy-as-Code, secrets patterns, logging schemas) to accelerate adoption.
- Stakeholder leadership: partner with platform engineering, data science, enterprise architecture, cyber operations, and product teams to drive end-to-end secure outcomes.
- Coaching and enablement: build education and guidance for architects, data scientists, and engineers on secure AI practices, design patterns, and common pitfalls.
- Continuous improvement: track emerging threats, standards, and best practices; lead updates to architecture and controls; measure effectiveness via KPIs and control health.
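To make the policy-as-code and control-evidence items above concrete, here is a hedged sketch that evaluates an AI pipeline's deployment configuration (a plain dict standing in for parsed Terraform plan output or YAML) against a few of the controls listed. The control names and config keys are assumptions for illustration only.

```python
# Each control maps a name to a predicate over the parsed config.
REQUIRED_CONTROLS = {
    "encryption_at_rest": lambda c: c.get("storage", {}).get("encrypted") is True,
    "encryption_in_transit": lambda c: c.get("network", {}).get("tls_min_version") in ("1.2", "1.3"),
    "audit_logging": lambda c: c.get("logging", {}).get("model_io_capture") is True,
    "no_public_endpoint": lambda c: not c.get("network", {}).get("public_access", False),
}

def evaluate_policy(config):
    """Return control name -> pass/fail, usable as control evidence."""
    return {name: bool(check(config)) for name, check in REQUIRED_CONTROLS.items()}

# Example config with one failing control (audit logging disabled).
example = {
    "storage": {"encrypted": True},
    "network": {"tls_min_version": "1.2", "public_access": False},
    "logging": {"model_io_capture": False},
}
```

A reusable check like this could run as a CI/CD gate, emitting its pass/fail map as the control evidence referenced in the governance items above.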
Job Type: Full-time
Career Level: Mid Level
Education Level: No education listed
Number of Employees: 5,001-10,000 employees