AI Security Engineer

Crowe LLP, Dallas, TX
$74,100 - $147,800

About The Position

The AI Security Engineer I (Senior Staff) serves as a senior technical expert responsible for securing enterprise AI and machine learning systems across their full lifecycle, including data ingestion, model training, inference pipelines, retrieval-augmented generation (RAG) systems, and generative AI applications. This role leads advanced security assessments, identifies vulnerabilities unique to AI-enabled platforms, and architects secure-by-design solutions for cloud and hybrid environments. Working closely with cybersecurity, cloud engineering, MLOps, data engineering, and AI engineering teams, the engineer designs and implements security controls that protect sensitive data, model artifacts, embeddings, and inference services from emerging threats. As a senior staff-level contributor, this role influences architectural security decisions, advances AI-specific threat detection and mitigation strategies, mentors engineers, and strengthens the organization's overall AI security and responsible AI posture.

Requirements

  • 4+ years of experience in cybersecurity, cloud security, ML engineering, or DevSecOps roles.
  • Demonstrated experience securing AI/ML or generative AI systems in production environments.
  • Strong understanding of ML pipelines, model architectures, and AI system components.
  • Deep knowledge of adversarial ML attack vectors and mitigation techniques.
  • Proficiency in Python, security testing tools, and cloud security frameworks.
  • Ability to assess risk across distributed services, storage systems, inference APIs, and data pipelines.
  • Strong communication skills and sound technical judgment in security decision-making.
  • Hands-on experience with Microsoft Azure and M365 security environments.
  • Willingness to travel occasionally for cross-functional planning and collaboration.

Nice To Haves

  • Bachelor’s degree in Cybersecurity, Computer Science, Engineering, or a related technical field, or equivalent experience.
  • Master’s degree or advanced training in cybersecurity, AI, or related discipline.
  • Security and cloud certifications such as SC-100, SC-900, SC-200, SC-300, AZ-500, AI-102, or equivalent AWS certifications.
  • CISSP, CKS, or CompTIA Cloud certifications.
  • Advanced experience securing AI platforms on Azure, including Kubernetes security (RBAC, network policies) and multi-tenant GPU workloads.
  • Experience securing container pipelines using image scanning, signing, and policy enforcement.
  • Expertise with secrets management solutions (e.g., Azure Key Vault, HashiCorp Vault).
  • Experience implementing zero-trust architecture and securing CI/CD pipelines for AI systems.
  • Deep knowledge of generative AI and RAG security, including prevention of prompt injections, jailbreaks, context poisoning, and embedding leakage.
  • Experience designing safe-output rendering patterns, guardrails, and red-teaming processes for generative systems.
  • Familiarity with emerging generative AI defense techniques such as model watermarking, inference integrity checks, and output validation frameworks.

Responsibilities

  • Architecting secure deployment and operating models for AI, ML, and generative AI systems across cloud and hybrid environments.
  • Conducting advanced AI security testing, including adversarial ML attacks, prompt injection simulations, and RAG manipulation assessments.
  • Identifying and mitigating vulnerabilities in model-serving infrastructure, feature stores, embedding pipelines, and vector databases.
  • Designing guardrails, safety filters, access controls, and secure interaction patterns for LLM- and RAG-based applications.
  • Developing automated tooling to detect misconfigurations, insecure endpoints, and data exposure risks within AI pipelines.
  • Collaborating with cloud and DevOps teams to secure Kubernetes clusters, GPU workloads, and infrastructure-as-code deployments.
  • Analyzing logs, telemetry, and model outputs to detect anomalies, abuse patterns, model degradation, or malicious activity.
  • Implementing encryption, secrets management, IAM policies, and network segmentation for AI workloads.
  • Leading secure design and architecture reviews for AI features, APIs, and platform components.
  • Documenting threat models, attack surfaces, risk assessments, mitigations, and compliance artifacts.
  • Participating in AI-specific incident response, investigation, and post-incident analysis.
  • Evaluating emerging AI security technologies, including model fingerprinting, inference protection, and secure execution environments.
  • Supporting enterprise adoption of responsible AI, data protection, and regulatory compliance standards.
  • Mentoring junior engineers, ML engineers, and security practitioners on AI security best practices.
  • Contributing to cloud security posture management capabilities for AI-enabled platforms.