About The Position

We are building an elite AI Red Team to stress-test and harden enterprise-scale AI products deployed to some of the world’s largest organizations. This is not a theoretical research role: it sits at the intersection of adversarial machine learning, enterprise security architecture, and governance. You will lead the design and execution of structured red team engagements across multiple AI systems, and translate technical risk into enterprise-aligned assurance. If you have ever been frustrated watching AI risk findings languish in a slide deck with no operational impact, this role is designed to change that: it ensures AI security findings are integrated into enterprise governance frameworks.

Requirements

  • Strong understanding of adversarial machine learning
  • Experience red teaming LLM or AI systems
  • Deep familiarity with AI deployment architectures (RAG, APIs, vector DBs, fine-tuning pipelines)
  • Strong Python proficiency
  • Experience working within ISO 27001 environments
  • Practical knowledge of SOC 2 Trust Service Criteria
  • Understanding of ISO 27701 privacy extensions
  • Familiarity with ISO 27017 cloud security controls
  • Ability to map technical findings to control frameworks
  • Ability to produce clear, structured, audit-friendly documentation
  • Comfortable presenting technical risk to executive audiences
  • Strong written and verbal communication skills
  • Systems thinker
  • Curious and adversarial in mindset
  • Comfortable identifying uncomfortable truths
  • Autonomous and fast-moving
  • Enterprise-aware, not just technically strong
  • Able to operate independently under executive leadership
  • You understand that security is about both breaking systems and integrating findings into operational and compliance posture.

Responsibilities

  • Design and lead adversarial testing of LLM and AI-driven systems
  • Conduct threat modelling across model, infrastructure and data layers
  • Execute and oversee testing for:
      ◦ Prompt injection
      ◦ Jailbreaking
      ◦ Model exploitation
      ◦ Data leakage / extraction
      ◦ RAG system manipulation
  • Translate findings into structured, audit-ready documentation
  • Map vulnerabilities and remediation pathways to:
      ◦ ISO 27001 controls
      ◦ SOC 2 Trust Service Criteria
      ◦ ISO 27701 privacy controls
      ◦ ISO 27017 cloud security controls
  • Partner closely with engineering, security, and compliance functions
  • Present findings clearly to executive leadership

Benefits

  • Comprehensive Private Medical Coverage
  • Support for Mental Health Expenses
  • Life Insurance Options
  • Attractive Compensation Package
© 2024 Teal Labs, Inc