Undergraduate/Graduate (Summer) Intern - Generative AI for Cybersecurity

National Renewable Energy LaboratoryGolden, CO
13d

About The Position

NLR’s internship program aims to bridge gaps in education, research, and public service to create career pathways for students to deliver collaborative research opportunities. This is a summer program that provides students with a unique introduction to hands-on research in the cybersecurity for clean energy industry. The student intern will work with a wide variety of experts working in emerging field of cybersecurity and artificial intelligence in critical infrastructure like energy systems. Depending on the student intern’s expertise and project’s need the student might be involved in projects relevant to advanced adoption/application of AI (especially large language models/ foundation models for critical infrastructure cybersecurity) in grid cybersecurity. As a Generative AI for Cybersecurity Intern, the student will be at the forefront of securing the next generation of artificial intelligence and adopting this practice for grid security enhancement. The student’s work will involve creatively finding and understanding vulnerabilities in AI systems to help us build more robust and secure technologies. The student’s duties would include: Develop and execute novel attack scenarios targeting generative AI models. Utilize LLMs to generate realistic adversarial inputs for security testing and system evaluation. Employ Explainable AI (XAI) techniques to analyze and document the root causes of model vulnerabilities. Research emerging threats and attack vectors in the field of AI security. Collaborate with our cybersecurity and AI development teams to report findings and recommend mitigation strategies. Document your testing methodologies, results, and insights in clear and concise reports. The internship will include exposure to cybersecurity subject matter experts across the DOE lab complex, academia, and others. This internship offers a unique opportunity to gain hands-on experience in the cutting-edge field of offensive AI, providing you with highly sought-after skills at the intersection of generative AI and cybersecurity. You will not only learn to think like an adversary but also contribute to the critical mission of building more trustworthy and secure AI systems.

Requirements

  • Minimum of a 3.0 cumulative grade point average.
  • Undergraduate: Must be enrolled as a full-time student in a bachelor’s degree program from an accredited institution.
  • Post Undergraduate: Earned a bachelor’s degree within the past 12 months. Eligible for an internship period of up to one year.
  • Graduate: Must be enrolled as a full-time student in a master’s degree program from an accredited institution.
  • Post Graduate: Earned a master’s degree within the past 12 months. Eligible for an internship period of up to one year.
  • Graduate + PhD: Completed master’s degree and enrolled as PhD student from an accredited institution.
  • Applicants must be currently pursuing or have completed a Bachelor's, Master's, or Ph.D. degree in a relevant technical field. Acceptable fields of study include: Computer Science Cybersecurity Information Security Artificial Intelligence Computer Engineering Software Engineering Applied Mathematics
  • Strong programming proficiency in Python.
  • Hands-on experience with generative AI models, particularly Large Language Models (LLMs), through methods like API integration, prompt engineering, or fine-tuning.
  • Experience with common machine learning libraries and platforms (e.g., PyTorch, TensorFlow, Hugging Face).
  • Familiarity with offensive cybersecurity concepts and methodologies. Experience from Capture The Flag (CTF) competitions, penetration testing coursework, or personal projects is highly relevant.
  • Demonstrable research or substantial project experience in generative AI is required.
  • Prior research or project work in adversarial machine learning, AI security, or "AI Red Teaming" is strongly preferred and will be a key differentiator for applicants.

Nice To Haves

  • Research paper(s) published in a top-tier cybersecurity or AI conference (e.g., USENIX Security, IEEE S&P, CCS, NeurIPS, ICML, ICLR, AAAI) is a significant plus.
  • Experience with Explainable AI (XAI) libraries or techniques (e.g., LIME, SHAP) to interpret and analyze model behavior.
  • Knowledge of cloud computing platforms (AWS, Azure, GCP) and their machine learning services.
  • Contributions to open-source cybersecurity or AI projects.

Responsibilities

  • Develop and execute novel attack scenarios targeting generative AI models.
  • Utilize LLMs to generate realistic adversarial inputs for security testing and system evaluation.
  • Employ Explainable AI (XAI) techniques to analyze and document the root causes of model vulnerabilities.
  • Research emerging threats and attack vectors in the field of AI security.
  • Collaborate with our cybersecurity and AI development teams to report findings and recommend mitigation strategies.
  • Document your testing methodologies, results, and insights in clear and concise reports.

Benefits

  • Benefits include medical, dental, and vision insurance; 403(b) Employee Savings Plan with employer match; and sick leave (where required by law).
  • NLR employees may be eligible for, but are not guaranteed, performance-, merit-, and achievement- based awards that include a monetary component.
  • Some positions may be eligible for relocation expense reimbursement.
  • Internships projected to be less than 20 hours per week are not eligible for medical, dental, or vision benefits.
© 2024 Teal Labs, Inc
Privacy PolicyTerms of Service