About The Position

We are building a dedicated AI Red Team to rigorously test and harden enterprise-scale AI products. We are looking for an adversarial machine learning specialist who thinks like an attacker. This role focuses on identifying vulnerabilities in LLM-driven systems, breaking model guardrails, exploiting data pathways, and stress-testing AI deployments before they reach enterprise customers. This is a hands-on technical role at the core of AI security.

Requirements

  • Strong experience in adversarial ML or AI security research
  • Experience working with LLM-based systems (OpenAI, Anthropic, open-source models, etc.)
  • Deep understanding of: Prompt injection techniques, Model jailbreak methodologies, AI system exploitation vectors
  • Strong Python skills
  • Experience building custom attack tooling or experimentation frameworks
  • Familiarity with: RAG architectures, Vector databases, Model fine-tuning workflows, API-based model deployments
  • Understanding of model safety mechanisms and guardrails

Nice To Haves

  • Background in cybersecurity or penetration testing
  • Familiarity with OWASP LLM Top 10
  • Experience working in enterprise environments

Responsibilities

  • Conduct adversarial testing across LLM and AI-based systems
  • Execute real-world attack simulations, including: Prompt injection, Jailbreaking and guardrail bypass, Data exfiltration attempts, Model inversion and evasion techniques, RAG manipulation
  • Develop scripts and tooling to automate attack scenarios
  • Analyse model behaviour under adversarial pressure
  • Identify systemic vulnerabilities in: APIs, Embedding pipelines, Vector databases, Fine-tuned model implementations
  • Collaborate with engineering teams to validate remediation
  • Document findings clearly and concisely
  • Help ensure AI systems are resilient before they are deployed at scale

Benefits

  • Comprehensive Private Medical Coverage
  • Support for Mental Health Expenses
  • Life Insurance Options
  • Attractive Compensation Package
© 2024 Teal Labs, Inc
Privacy PolicyTerms of Service