About The Position

The AI/ML Security Operations Engineer will be responsible for securing NYL’s machine learning and AI pipelines as they evolve from isolated experimentation into production, agentic, and automated decisioning systems. This role extends AppSec practices to address the unique risks created by integrating models developed in Vertex AI into pipelines, agents, and downstream systems with real autonomy and impact. The role sits within the Application Security team at the intersection of ML engineering, platform engineering, and security, bringing deep ML platform expertise to establish controls, guardrails, and patterns that scale as AI adoption accelerates.

Requirements

  • Bachelor’s degree in Computer Science, Engineering, or equivalent experience.
  • 5+ years of application security, cloud security, or security engineering experience.
  • Hands-on experience working with ML platforms (Vertex AI, SageMaker, Azure ML, or equivalent).
  • Strong understanding of ML pipelines, MLOps workflows, CI/CD security, and model lifecycle management.
  • Knowledge of application security fundamentals: authentication/authorization, supply chain security, secure APIs, secrets management.
  • Experience securing non-human identities, service accounts, and automated workflows.
  • Familiarity with AI/ML threat scenarios, including data poisoning, model theft, inference abuse, prompt injection, and unsafe tool invocation.
  • Experience implementing security controls in CI/CD pipelines and infrastructure-as-code environments.
  • Proficiency in Python for automation, analysis, and control enforcement.
  • Strong understanding of cloud IAM, least-privilege design, and execution isolation.

Nice To Haves

  • Experience securing agentic AI systems, orchestration frameworks, or autonomous workflows.
  • Familiarity with AI security frameworks such as MITRE ATLAS or equivalent research.
  • Experience designing governance models for ML platforms in regulated environments.
  • Background working alongside data scientists and ML engineers in production settings.
  • Exposure to model risk management, validation, or controls in financial services.
  • Experience with policy-as-code, guardrails, and enforcement at scale.

Responsibilities

  • Engineer and maintain security controls across the full ML lifecycle: data ingestion, feature pipelines, training, model registry, deployment, and execution.
  • Secure Vertex AI pipelines, notebooks, training jobs, model artifacts, and endpoints as they move from experimentation into production.
  • Define and enforce separation of environments (dev, training, staging, production) with appropriate identity, authorization, and access controls.
  • Design guardrails for agentic and automated AI use cases, including execution boundaries, tool invocation controls, and non-human identity management.
  • Protect ML supply chain integrity: feature pipelines, training data provenance, model artifacts, and model serving endpoints.
  • Integrate ML security controls into existing AppSec CI/CD pipelines, SSDLC processes, and security testing frameworks.
  • Extend AppSec standards (IAM, secrets management, API security) to ML workloads.
  • Establish repeatable security patterns, reference architectures, and governance models for AI and ML development.
  • Implement monitoring and detection for ML-specific security risks, and investigate security events related to AI pipelines, model usage, and agent execution.
  • Contribute to the definition of enterprise AI security standards and long-term operating models.

Benefits

  • Full package of benefits for employees.
  • Leave programs.
  • Adoption assistance.
  • Student loan repayment programs.