AI Security Engineer

YipitData
$230,000 - $280,000 · Remote

About The Position

We are seeking an AI Security Engineer to lead the implementation, monitoring, and continuous improvement of security, governance, and trust controls for AI systems across the organization. This role focuses on operationalizing AI system security controls using the Agentic Trust Framework mapped to OWASP guidance and the NIST AI RMF, with particular emphasis on observability engineering, behavioral monitoring, policy enforcement, misuse detection, and risk-informed response.

This person will serve as a bridge between Security, Engineering, Data, Platform, Compliance, and AI product teams to ensure AI systems are not only functional and performant, but also trustworthy, auditable, resilient, and aligned with enterprise governance requirements. The ideal candidate combines technical depth in AI/ML systems, strong security and monitoring instincts, and the ability to define practical controls for complex, fast-evolving agentic and generative AI environments.

We expect U.S.-based working hours, with the majority of the team working in Eastern and Central time zones.

Requirements

  • 5+ years of experience in one or more of the following: security engineering, detection engineering, observability engineering, site reliability engineering, application security, ML platform engineering, or AI governance implementation.
  • Experience designing monitoring, logging, telemetry, or detection strategies for distributed systems, cloud services, or data-intensive applications.
  • Familiarity with AI/ML system architecture, including large language models, retrieval-augmented generation, inference pipelines, model APIs, and agentic workflows.
  • Experience translating governance, risk, or policy requirements into operational controls and measurable technical requirements.
  • Strong understanding of security concepts such as identity and access management, least privilege, data protection, abuse prevention, auditability, and incident response.
  • Experience investigating system behavior, identifying anomalies, and working cross-functionally to drive remediation.
  • Industry certifications (or equivalent experience) such as CISSP, CCSP, or GIAC Machine Learning Engineer (GMLE).
  • Strong written communication skills, including the ability to write standards, control definitions, runbooks, and leadership-facing summaries.

Nice To Haves

  • Experience with AI observability tooling, tracing frameworks, or telemetry pipelines for LLM or agent-based systems.
  • Experience implementing controls for AI safety, AI red teaming, prompt security, model misuse detection, or secure tool execution.
  • Familiarity with Microsoft security, compliance, and AI governance ecosystems.
  • Familiarity with trust and safety concepts for generative AI and autonomous systems.
  • Experience supporting internal governance, risk, privacy, or compliance review processes for AI-enabled products.
  • Experience building dashboards, alerts, and behavioral analytics for security or operational monitoring.
  • Experience working in highly regulated or audit-sensitive environments.

Responsibilities

  • Own AI behavior monitoring: Define what trustworthy and untrustworthy AI behavior looks like, and ensure it is measurable in production.
  • Own AI observability standards: Establish telemetry, tracing, logging, and alerting requirements for AI systems and agentic workflows.
  • Own control validation for agentic systems: Verify that guardrails, policy checks, access boundaries, and execution constraints are functioning as intended.
  • Own AI security event analysis: Detect, investigate, and document suspicious, unsafe, or non-compliant AI behaviors and coordinate response.
  • Own implementation support for governance frameworks: Translate governance principles into technical and operational requirements that product and platform teams can adopt.
  • Own AI trust metrics and reporting: Define KPIs, KRIs, and dashboards that show leadership whether AI systems are operating within approved trust and security boundaries.
  • Own continuous improvement of AI controls: Use incidents, testing, behavioral findings, and stakeholder feedback to strengthen control design and reduce residual risk over time.

Benefits

  • Flexible work hours
  • Flexible vacation
  • Generous 401(k) match
  • Parental leave
  • Team events
  • Wellness budget
  • Learning reimbursement