AI Security Engineer

Kinaxis Inc.
Hybrid

About The Position

The AI Security Engineer is a hands‑on security specialist responsible for designing, implementing, and operating security controls for AI‑enabled systems across Corporate IT and the Kinaxis Maestro SaaS ecosystem. The engineer serves as a recognized subject matter expert in AI security, defining AI security design considerations, risk assessment recommendations, and control implementations. They will act as an escalation point for complex AI security events and misuse scenarios, lead AI threat modeling and adversarial testing efforts, and drive measurable improvements in AI detection, prevention, and monitoring capabilities. Additionally, the position contributes to the maturation of AI security frameworks, governance, and operational practices, and provides technical mentorship to teams building and operating AI‑enabled solutions.

Requirements

  • Bachelor’s degree in Information Security, Computer Science, Engineering, or equivalent practical experience.
  • 6 – 8 years of experience in security engineering, application security, cloud security, or security architecture, including hands‑on work securing production systems.
  • Strong understanding of secure software development practices and modern cloud platforms.
  • Demonstrated experience securing production AI-enabled systems.
  • Excellent written and verbal communication skills, with the ability to clearly articulate complex technical information.
  • Strong analytical and prioritization skills in fast‑moving environments.
  • A demonstrated commitment to continuous learning.
  • Deep understanding of LLMs, agents, RAG pipelines, model serving, and MLOps.
  • Strong grasp of AI-specific threats (prompt injection, jailbreaks, model inversion, poisoning, data leakage).
  • Experience deploying AI security defenses (LLM firewalls, policy engines, input/output validation, DLP, monitoring).
  • Experience building secure-by-design patterns and defense-in-depth for AI systems.
  • Ability to define telemetry, logging, and detection strategies for AI systems.
  • Ability to design and implement security controls across AI tools, platforms, and delivery pipelines.
  • Hands-on experience performing AI/ML threat modeling.
  • Ability to translate AI risks into actionable controls and engineering requirements.
  • Experience testing AI systems against adversarial attacks and abuse scenarios.

Nice To Haves

  • CISSP
  • CAISP
  • CSSLP
  • SABSA
  • Cloud Provider Security Certifications
  • NIST AI RMF Training or ISO/IEC 42001 Lead Implementer

Responsibilities

  • Design and implement end‑to‑end security guardrails across the AI lifecycle, including data ingestion, training, evaluation, deployment, and runtime monitoring.
  • Develop secure‑by‑default patterns for AI‑enabled applications.
  • Implement controls for agentic workflows, including tool permissioning, action constraints, auditability, and blast‑radius reduction.
  • Define and enforce secure configuration baselines for AI services such as cloud AI platforms, model gateways, vector databases, and model runtimes.
  • Lead AI security design reviews and conduct threat modeling and risk assessments for AI-enabled systems.
  • Identify AI-specific risks and translate findings into prioritized mitigation plans, updated standards, and actionable engineering guidance.
  • Monitor emerging AI threats, vulnerabilities, and research, incorporating relevant insights into security practices, documentation, and team enablement.
  • Plan and execute targeted adversarial testing against AI‑enabled applications and workflows.
  • Develop repeatable test cases to evaluate resistance against misuse, data leakage, and unsafe output.
  • Partner with internal offensive security teams and external assessors to validate resilience before launch and during major changes.
  • Evaluate, deploy, and operate AI security controls.
  • Define logging and telemetry requirements for AI‑enabled systems.
  • Ensure AI security events are integrated into centralized monitoring and response workflows.
  • Serve as a subject‑matter expert for AI security, advising product and engineering teams on secure design choices and risk trade-offs.
  • Contribute to the evolution of AI security standards and governance practices.
  • Collaborate with engineering leaders to embed AI security requirements into CI/CD and MLOps pipelines, aligned with secure SDLC practices.
  • Serve as an escalation point for complex AI security investigations and abuse scenarios.

Benefits

  • Flexible vacation and Kinaxis Days (company-wide days off)
  • Flexible work options
  • Physical and mental well-being programs
  • Regularly scheduled virtual fitness classes
  • Mentorship programs, training, and career development
  • Recognition programs and referral rewards
  • Hackathons