AI Security Architect

BNY Mellon
Pittsburgh, PA
$142,000 - $259,000

About The Position

BNY is seeking an AI Security Architect to lead the design, implementation, and governance of security controls for AI/ML systems across the enterprise. This role will define the target architecture and security patterns for AI-enabled products and platforms, ensuring resilient, compliant, and trustworthy AI. The ideal candidate combines deep expertise in cybersecurity and cloud security with hands-on knowledge of modern AI/ML infrastructure, data protection, adversarial threat models, and secure MLOps.

Requirements

  • 12+ years in cybersecurity/enterprise security architecture with 3+ years focused on AI/ML or data platform security at scale.
  • Expertise in cloud security (AWS/Azure/GCP) including identity, secrets management, key management (KMS/HSM), network segmentation, and policy-as-code.
  • Strong knowledge of AI/ML workflows: data ingestion/feature engineering, model training/inference, MLOps tooling (model registry, orchestrators, serving).
  • Practical experience with adversarial ML concepts and defenses; familiarity with model robustness, prompt injection risks, and secure evaluation methods.
  • Proficiency in designing observability/telemetry for AI systems (e.g., logging prompts/outputs, drift/quality metrics, safety events) with SIEM/SOAR integration; a minimal telemetry sketch follows this list.
  • Hands-on with infrastructure-as-code (Terraform/CloudFormation), CI/CD, and secure SDLC practices tailored to data/ML systems.
  • Deep understanding of data protection (encryption, tokenization, anonymization), privacy by design, and secure data lifecycle management.
  • Strong stakeholder management and communication skills; ability to convert complex risks into clear architecture decisions and implementation guidance.
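To illustrate the kind of observability/telemetry work referenced above, here is a minimal Python sketch that emits structured prompt/output events as JSON lines suitable for downstream SIEM/SOAR ingestion. The field names and the log_ai_event helper are illustrative assumptions, not a prescribed schema or any specific BNY tooling.

    import json
    import logging
    import time
    import uuid

    # Standard-library logger; in practice events would ship to a SIEM pipeline
    # (e.g., via a log forwarder) rather than stdout.
    logger = logging.getLogger("ai_telemetry")
    logging.basicConfig(level=logging.INFO, format="%(message)s")

    def log_ai_event(model_id: str, prompt: str, output: str,
                     safety_flags: list[str], latency_ms: float) -> None:
        """Emit one structured AI interaction event as a JSON line.

        Field names are illustrative; a real deployment would align them with
        the enterprise logging schema and redact sensitive prompt content.
        """
        event = {
            "event_id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "model_id": model_id,
            "prompt_chars": len(prompt),    # log sizes, not raw text, by default
            "output_chars": len(output),
            "safety_flags": safety_flags,   # e.g., ["pii_detected", "prompt_injection_suspected"]
            "latency_ms": latency_ms,
        }
        logger.info(json.dumps(event))

    # Example usage with placeholder values.
    log_ai_event("example-llm-v1", "What is our refund policy?",
                 "Refunds are processed within 5 business days.", [], 212.0)

In a real architecture, the schema, redaction rules, and transport would be defined centrally so drift/quality metrics and safety events land in the same SOC workflows as other security telemetry.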

Nice To Haves

  • Experience architecting secure AI agents and LLM applications including guardrails, content filters, and output validation.
  • Familiarity with standards and frameworks relevant to AI and data (e.g., NIST AI RMF, cloud CIS benchmarks, OWASP for ML/LLM, privacy controls).
  • Background in model governance and risk management (e.g., testing for drift, bias, stability, and explainability) and integration with enterprise control frameworks.
  • Programming/scripting proficiency (Python preferred) for reference implementations, automation, and security tooling integrations.
  • Experience with container security, Kubernetes, service mesh, and microservices patterns in AI platforms.
  • Prior leadership in enterprise-scale transformations, enabling secure adoption of AI across multiple business lines.

Responsibilities

  • Define enterprise AI security architecture: develop reference architectures, guardrails, and standards for secure data pipelines, model training/inference, and AI-integrated applications across on-prem and cloud.
  • Secure MLOps/ML platforms: architect identity, secrets management, network segmentation, and least-privilege access for feature stores, model registries, orchestration, and deployment pipelines.
  • Data protection by design: establish controls for sensitive data ingestion, anonymization/pseudonymization, encryption (at rest/in transit), tokenization, and lineage across AI workflows.
  • Adversarial ML defense: design controls and tests for model poisoning, evasion, model theft/exfiltration, prompt injection, jailbreaking, data leakage, and output manipulation.
  • AI supply chain security: govern third-party models, APIs, and datasets; enforce SBOMs for AI components; evaluate provenance, licensing, and dependency risk.
  • Policy and governance integration: translate AI security requirements into actionable standards and control evidence; align with enterprise risk, compliance, and model governance processes.
  • Threat modeling and security testing: lead threat modeling for AI systems; design red-teaming and secure evaluation methods for models and agents; integrate chaos/resilience testing.
  • Secure development lifecycle: embed AI-specific security checks (static/dynamic scans, IaC policy-as-code, data quality gates, bias/robustness checks) into CI/CD and change management.
  • Runtime protection: implement guardrails, content filters, output validation, rate limiting, anomaly detection, and monitoring for AI services and agentic workflows (a minimal sketch follows this list).
  • Observability and incident response: define logging/telemetry (model inputs/outputs, drift, performance, safety events); integrate AI-specific playbooks into SOC operations.
  • Zero Trust for AI: design identity-aware access, micro-segmentation, and continuous verification for data scientists, services, and agents.
  • Privacy and ethics controls: partner with privacy and legal to operationalize consent, minimization, purpose limitation, and responsible AI guardrails, including human-in-the-loop where appropriate.
  • Resilience and continuity: design disaster recovery, backup/restore, model reproducibility, and contingency plans for AI platforms and critical use cases.
  • Vendor/platform assessments: evaluate cloud AI services, open-source frameworks, and commercial tools for security posture, compliance, and fit-for-purpose.
  • Risk management: lead control testing and risk assessments for AI initiatives; document residual risks and remediation plans; support audits and regulatory queries.
  • Reference implementations: deliver secure patterns, sample code, and automation (e.g., reusable Terraform/Policy-as-Code, secrets patterns, logging schemas) to accelerate adoption.
  • Stakeholder leadership: partner with platform engineering, data science, enterprise architecture, cyber operations, and product teams to drive end-to-end secure outcomes.
  • Coaching and enablement: build education and guidance for architects, data scientists, and engineers on secure AI practices, design patterns, and common pitfalls.
  • Continuous improvement: track emerging threats, standards, and best practices; lead updates to architecture and controls; measure effectiveness via KPIs and control health.
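As a concrete illustration of the runtime protection responsibilities above (guardrails, output validation, rate limiting), the following is a minimal Python sketch. The deny patterns, the validate_output function, and the token-bucket limiter are simplified assumptions for illustration, not a reference implementation of any specific BNY control.

    import re
    import time

    # Illustrative deny patterns; a production guardrail would combine richer
    # classifiers (PII detectors, content filters) rather than regexes alone.
    BLOCKED_PATTERNS = [
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),             # US SSN-like strings
        re.compile(r"(?i)ignore previous instructions"),  # crude prompt-injection echo check
    ]

    def validate_output(text: str) -> tuple[bool, list[str]]:
        """Return (allowed, reasons). Blocks output matching any deny pattern."""
        reasons = [p.pattern for p in BLOCKED_PATTERNS if p.search(text)]
        return (len(reasons) == 0, reasons)

    class TokenBucket:
        """Simple per-caller rate limiter: `rate` requests refilled per second."""
        def __init__(self, rate: float, capacity: int):
            self.rate, self.capacity = rate, capacity
            self.tokens, self.last = float(capacity), time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    # Example usage with placeholder values.
    bucket = TokenBucket(rate=2.0, capacity=5)
    if bucket.allow():
        ok, reasons = validate_output("Here is the account summary you asked for.")
        print("allowed" if ok else f"blocked: {reasons}")

A production design would layer these checks behind the serving endpoint, feed blocked events into the telemetry pipeline described under Requirements, and tune thresholds per use case.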

Benefits

  • BNY offers highly competitive compensation, benefits, and wellbeing programs rooted in a strong culture of excellence and our pay-for-performance philosophy.
  • We provide access to flexible global resources and tools for your life’s journey.
  • Focus on your health, foster your personal resilience, and reach your financial goals as a valued member of our team. Generous paid leave, including paid volunteer time, supports you and your family through moments that matter.

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: No Education Listed
  • Number of Employees: 5,001-10,000 employees
