LLM Security Evaluation Expert

SilverEdge Government Solutions · Columbia, MD

About The Position

Overview

SilverEdge Government Solutions is seeking a highly skilled LLM Security Evaluation Expert to join our team. In this role, you will rigorously test the security and integrity of Large Language Models (LLMs). Your primary focus will be designing and executing sophisticated adversarial prompt attacks to identify potential vulnerabilities, assess each model's resistance to exploitation, and ensure it maintains consistent, secure behavior. This is a critical role in safeguarding our AI systems and ensuring they operate responsibly.

Adversarial Prompt Design & Execution

Develop and implement a comprehensive suite of adversarial prompts, ranging from basic to sophisticated, targeting known and potential LLM vulnerabilities. Craft prompts specifically designed to:

  • Bypass security filters and content moderation policies.
  • Induce the LLM to reveal sensitive, confidential, or proprietary information.
  • Manipulate the LLM's output to generate harmful, biased, or unintended content.
  • Test for prompt injection, jailbreaking, and other emerging attack vectors.

Vulnerability Assessment & Analysis

Systematically test LLMs against the designed adversarial prompts. Analyze LLM responses to identify successful exploits, security weaknesses, and patterns of failure.

Requirements

  • Strong knowledge of how LLMs work, including their architecture, training processes, capabilities, and inherent limitations.
  • Familiarity with prominent LLM families (e.g., GPT series, Claude, Llama, PaLM) and their common characteristics.
  • Proven experience in crafting and refining prompts to elicit specific behaviors or bypass restrictions in LLMs.
  • Demonstrable understanding of techniques like jailbreaking, prompt injection, role-playing attacks, and exploiting model biases.
  • Strong understanding of cybersecurity principles and common attack vectors, particularly as they apply to AI/ML systems.
  • Ability to think like an attacker and anticipate potential exploits.
  • Excellent ability to analyze complex systems, identify subtle vulnerabilities, and systematically test hypotheses.
  • Clear and concise written and verbal communication skills, with the ability to document technical findings thoroughly.
  • Understanding of the ethical implications of AI security and commitment to responsible testing practices.

Nice To Haves

  • Prior experience in AI red teaming, penetration testing of AI/ML systems, or a dedicated LLM security research role.
  • Familiarity with specific LLM security evaluation frameworks or benchmarks (e.g., those developed by NIST, Stanford HELM, or other research institutions).
  • Knowledge of common LLM fine-tuning and alignment techniques (e.g., RLHF) and how they might impact security.
  • Contributions to the AI security community (e.g., research papers, open-source tools, conference presentations).
  • Offensive Security Certified Professional (OSCP)
  • Certified Ethical Hacker (CEH)

Responsibilities

  • Rigorously testing the security and integrity of Large Language Models (LLMs)
  • Designing and executing sophisticated adversarial prompt attacks to identify potential vulnerabilities
  • Assessing the model's resistance to exploitation
  • Ensuring the model maintains consistent, secure behavior
  • Developing and implementing a comprehensive suite of adversarial prompts, ranging from basic to more sophisticated, targeting known and potential LLM vulnerabilities
  • Crafting prompts specifically designed to:
      • Bypass security filters and content moderation policies
      • Induce the LLM to reveal sensitive, confidential, or proprietary information
      • Manipulate the LLM's output to generate harmful, biased, or unintended content
      • Test for prompt injection, jailbreaking, and other emerging attack vectors
  • Systematically testing LLMs against the designed adversarial prompts
  • Analyzing LLM responses to identify successful exploits, security weaknesses, and patterns of failure