Sr. Staff AI Security Architect

Penn Mutual, Philadelphia, PA
$175,000 - $200,000

About The Position

Job Summary

The Senior Staff AI Security Architect defines and advances the enterprise security architecture for AI, generative AI, and agentic AI. Partnering with Security, Architecture, Data, Product, Legal, Risk, and Compliance teams, this role enables secure and responsible adoption of AI technologies. By setting security-by-design standards, governing AI risk across the model lifecycle, and serving as the enterprise authority for AI threat modeling and control design, this architect shapes how the organization approaches AI security at scale.

Requirements

  • 10+ years in security architecture (cloud, platform, or application security), including 5+ years designing enterprise architectures in regulated environments
  • Expertise in cloud and Zero Trust security, including IAM, API security, and service-to-service authentication
  • Working knowledge of AI/ML systems (LLMs, agents, orchestration layers, ML pipelines) and common Generative AI architectures (e.g., RAG, vector databases)
  • Proven ability to lead security architecture across complex, cross-functional initiatives and influence senior stakeholders
  • DevSecOps/MLOps security experience, including CI/CD control integration, container/Kubernetes security, and security telemetry/SIEM integration
  • Strong fundamentals in cryptography, key management (KMS/HSM), and secrets management
  • Application security background (secure coding, threat modeling, OWASP Top 10) and ability to guide engineering teams on remediation
  • Familiarity with AI risk frameworks (e.g., NIST AI RMF, OWASP Top 10 for LLMs) and privacy/data governance considerations for AI
  • Experience in highly regulated industries (financial services, insurance, healthcare, or similar)

Nice To Haves

  • Bachelor’s or master’s degree in Computer Science, Engineering, or a related field
  • Advanced security architecture certifications (e.g., CISSP-ISSAP, GIAC)
  • LLM/Generative AI security experience (e.g., OWASP Top 10 for LLM Apps, MITRE ATLAS) and hands-on guardrail implementation
  • LLMOps security tooling and practices (model registry governance, artifact signing/provenance, evaluation pipelines, drift monitoring)
  • Infrastructure/policy-as-code and automated security gates in CI/CD
  • Generative AI data protection (DLP, sensitive-data detection, masking/tokenization, content governance)
  • Experience leading AI security assessments and red-/purple-team exercises for production AI systems
  • AI governance/risk program experience (model inventory, risk tiering, control mapping, exception management, audit evidence)

Responsibilities

  • AI Security Architecture & Strategy: Own enterprise AI security architecture across Generative AI platforms, AI agents, ML pipelines, and the full model lifecycle (data ingestion, training/fine-tuning, deployment, monitoring), including internal and third-party foundation models. Establish security reference architectures, patterns, and guardrails for prioritized AI use cases. Maintain the AI security roadmap and lead architecture/security design reviews; document decisions, exceptions, and compensating controls.
  • AI Threat Modeling & Risk Management: Lead AI threat modeling and abuse-case analysis (e.g., prompt injection, data poisoning, model extraction, hallucination abuse, agent misuse). Define and validate controls for AI risks (misuse/abuse, data leakage/privacy, unauthorized agent actions, supply chain/provenance). Operationalize AI security testing (red-teaming/adversarial testing). Partner with IR/SOC on AI-specific detection and response playbooks. Embed AI risk into Enterprise Risk Management (ERM) processes.
  • Secure AI Platform Enablement: Architect secure AI platform implementations (agent frameworks, orchestration layers, vector databases/embeddings, model APIs/inference gateways). Define identity, access, and authorization for humans and AI agents; ensure integration with IAM, secrets management, logging/monitoring, and SOC workflows. Establish secure RAG patterns (classification, grounding, filtering, tenant isolation, least-privilege retrieval) and agent guardrails (tool allowlists, scoped credentials, approvals, rate limits, sandboxing).
  • Governance, Standards & Compliance: Establish AI security policies, standards, and control requirements aligned to relevant frameworks and regulations (e.g., NIST AI RMF, ISO/IEC 27001/23894, SOC 2, SOX, GLBA, GDPR). Support security/architecture reviews and control validations for AI initiatives. Perform third-party/vendor risk assessments for AI services and models (data retention, model provenance, SLAs, security attestations).
  • Secure Development Lifecycle (AI-SDLC): Embed security into the AI/ML lifecycle (secure data sourcing/labeling, training/tuning, evaluation/red-teaming/validation, post-production monitoring/drift detection). Define requirements for transparency, explainability, and human-in-the-loop controls. Set MLOps/LLMOps security requirements (registry governance, signed artifacts, provenance, environment promotion/rollback). Automate controls via CI/CD and policy-as-code.
  • Leadership & Influence: Advise executive leaders on AI security strategy and risk posture. Influence decisions across product, platform, and business teams. Mentor architects, engineers, and security teams on AI security best practices. Represent the organization in vendor engagements, assessments, and relevant industry forums.


What This Job Offers

Job Type

Full-time

Career Level

Mid Level

Number of Employees

501-1,000 employees
