Coding - Adversarial Prompt Expert

Reinforce Labs, Inc.

About The Position

We are seeking an Adversarial Prompt Security Specialist with strong technical instincts and coding proficiency to join our Trust & Safety team. In this role, you will use your knowledge of LLM behavior and your scripting skills to probe, bypass, and stress-test safety systems. Your focus will be on discovering vulnerabilities—crafting prompt injection sequences, writing scripts to automate exploit attempts, manipulating API interactions, and identifying novel attack vectors that evade existing safeguards. This is a hands-on offensive testing role that rewards creativity, persistence, and an attacker’s mindset over formal engineering credentials.

Requirements

  • Proficiency in Python scripting, with the ability to write functional scripts for task automation, API interaction, and data manipulation. Formal software engineering training is not required.
  • Demonstrated experience in adversarial prompt engineering, jailbreak development, or LLM red-teaming—whether in a professional, academic, independent research, or community context (e.g., bug bounties, CTFs, responsible disclosure).
  • Working familiarity with LLM APIs (e.g., OpenAI, Anthropic, open-source model endpoints) and a practical understanding of how large language models process input, generate output, and enforce safety constraints (a minimal API interaction sketch follows this list).
  • Knowledge of common LLM attack vectors, including direct and indirect prompt injection, payload encoding and obfuscation, context window manipulation, system prompt leakage, and role-play exploitation.
  • Strong written communication skills, with the ability to produce clear vulnerability reports that include reproduction steps, severity context, and mitigation recommendations.
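
As a rough illustration of the scripting and API familiarity described above, the sketch below sends a single prompt to an OpenAI-style chat-completions endpoint using Python's requests library. The endpoint URL, model name, and LLM_API_KEY environment variable are illustrative assumptions, not details taken from this posting.

    # Minimal LLM API interaction sketch (assumptions: an OpenAI-style
    # chat-completions endpoint and a hypothetical LLM_API_KEY env var).
    import os
    import requests

    API_URL = "https://api.openai.com/v1/chat/completions"  # assumed endpoint
    API_KEY = os.environ["LLM_API_KEY"]                      # hypothetical variable name

    def send_prompt(prompt: str, model: str = "gpt-4o-mini") -> str:
        """Send a single user prompt and return the model's text reply."""
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"model": model, "messages": [{"role": "user", "content": prompt}]},
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

    if __name__ == "__main__":
        print(send_prompt("In one sentence, how do you handle unsafe requests?"))

A real testing workflow would layer retries, rate limiting, and structured logging on top of a helper like this.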

Nice To Haves

  • Background in cybersecurity, penetration testing, or application security—formal or self-taught. Relevant certifications (e.g., OSCP, CEH) are valued but not required.
  • Familiarity with AI safety evaluation frameworks such as the OWASP Top 10 for LLM Applications, NIST AI RMF, or MITRE ATLAS.
  • Understanding of LLM alignment techniques (e.g., RLHF, constitutional AI) and their known failure modes and exploitable edge cases.
  • Experience with multi-modal model testing (vision, code generation, tool use) and awareness of cross-modal attack surfaces.
  • Proficiency in additional scripting or programming languages (e.g., JavaScript, Bash, Go) that expand testing capabilities.

Responsibilities

  • Code-Assisted Adversarial Probing: Write and execute scripts (primarily Python) to systematically test LLM safety boundaries. This includes automating prompt injection chains, encoding and obfuscating payloads, manipulating conversation context through API calls, and iterating on attack strategies programmatically rather than relying solely on manual interaction (see the harness sketch after this list).
  • Jailbreak Discovery and Development: Design multi-step jailbreak sequences that exploit model behavior through technical means, such as token-level manipulation, system prompt extraction, role-play escalation, instruction hierarchy subversion, and context window exploitation. Identify bypass vectors that circumvent safety classifiers and content filters.
  • Cross-Vector Exploitation: Test attack surfaces that span code generation, tool use, multi-turn conversation, and multi-modal inputs. Explore how code-mediated interactions—such as requesting the model to write, execute, or interpret code—can be leveraged to bypass safety controls that apply to natural language interactions.
  • Vulnerability Documentation: Document discovered vulnerabilities with clear severity assessments, step-by-step reproduction instructions, and sample exploit code. Provide context on why a given bypass is dangerous and recommend potential mitigations for the alignment and engineering teams.
  • Attack Landscape Monitoring: Stay current with emerging adversarial techniques from the AI security research community, open-source exploit repositories, academic publications, and real-world misuse patterns. Adapt and apply novel methods to internal testing workflows.
  • Safety Policy Input: Provide technical feedback to content policy and safety classification teams based on observed model behaviors. Flag gaps between intended safety enforcement and actual model output, particularly in edge cases involving code generation, indirect prompt injection, and agentic tool-use scenarios.
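
By way of example only, the sketch below shows the general shape of an automated probing harness: it reads a curated, internally reviewed list of test prompts, sends each one through the API helper from the earlier sketch, applies a crude keyword heuristic to flag refusals, and writes results to a JSONL log. The llm_probe module name, file paths, and refusal markers are assumptions for illustration; actual test content and evaluation logic would be defined by the team's own review process.

    # Probing-harness sketch (assumptions: curated, internally reviewed test
    # prompts; the send_prompt helper from the earlier sketch lives in a
    # hypothetical llm_probe module; the refusal heuristic is illustrative).
    import json
    from pathlib import Path

    from llm_probe import send_prompt  # hypothetical module wrapping the API sketch

    REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "not able to")  # crude heuristic

    def looks_like_refusal(reply: str) -> bool:
        """Flag replies that match simple refusal phrasing (placeholder logic)."""
        lowered = reply.lower()
        return any(marker in lowered for marker in REFUSAL_MARKERS)

    def run_suite(prompt_file: Path, log_file: Path) -> None:
        """Send each curated test prompt and log the reply plus a refusal flag."""
        with log_file.open("w", encoding="utf-8") as log:
            for prompt in prompt_file.read_text(encoding="utf-8").splitlines():
                if not prompt.strip():
                    continue  # skip blank lines in the prompt file
                reply = send_prompt(prompt)
                record = {"prompt": prompt, "reply": reply, "refused": looks_like_refusal(reply)}
                log.write(json.dumps(record) + "\n")

    if __name__ == "__main__":
        run_suite(Path("curated_test_prompts.txt"), Path("results.jsonl"))

Logged results of this kind feed directly into the vulnerability documentation and reproduction steps described above.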