Overview SilverEdge Government Solutions is seeking a highly skilled LLM Security Evaluation Expert to join our team. In this role, you will be responsible for rigorously testing the security and integrity of Large Language Models (LLMs). Your primary focus will be on designing and executing sophisticated adversarial prompt attacks to identify potential vulnerabilities, assess the model's resistance to exploitation, and ensure it maintains consistent, secure behavior. This is a critical role in safeguarding our AI systems and ensuring they operate responsibly. Adversarial Prompt Design & Execution: Develop and implement a comprehensive suite of adversarial prompts, ranging from basic to more sophisticated, targeting known and potential LLM vulnerabilities. Craft prompts specifically designed to: Bypass security filters and content moderation policies. Induce the LLM to reveal sensitive, confidential, or proprietary information. Manipulate the LLM's output to generate harmful, biased, or unintended content. Test for prompt injection, jailbreaking, and other emerging attack vectors. Vulnerability Assessment & Analysis: Systematically test LLMs against the designed adversarial prompts. Analyze LLM responses to identify successful exploits, security weaknesses, and patterns of failure.
Stand Out From the Crowd
Upload your resume and get instant feedback on how well it matches this job.
Job Type
Full-time
Career Level
Mid Level
Education Level
No Education Listed