Cybersecurity Applied Scientist, PhD Intern - Summer 2026

Visa · Ashburn, VA
$54 - $62 · Hybrid

About The Position

Visa’s Internship Program provides an immersive, 12-week journey where you’ll work on impactful projects that drive Visa’s mission forward. As a Visa intern, you’ll build valuable connections across the organization, sharpen your communication and business acumen, and gain hands-on experience in a dynamic, global environment. Throughout the program, you’ll have exclusive access to interactive workshops and learning sessions designed to deepen your expertise, expand your industry knowledge, and elevate your professional skillset. You won’t just be learning; you’ll be contributing, collaborating, and innovating every step of the way. In addition to professional development, you’ll enjoy a variety of intern social events that foster community, connection, and fun throughout the summer. The experience culminates in an exciting final presentation, where you’ll showcase your project achievements, share key insights, and present your recommendations to Visa’s leaders and stakeholders. This is your chance to demonstrate your business impact, highlight your personal growth, and align your work with Visa’s vision for the future.

The Cybersecurity AI Center of Excellence drives research and development of next‑generation AI‑enhanced security capabilities to protect large‑scale, mission‑critical systems. Our teams work at the intersection of artificial intelligence, adversarial resilience, and cybersecurity operations, developing technologies that safeguard billions of transactions worldwide. This intern role offers the opportunity to contribute to cutting‑edge AI security research. Projects could include:

  • AI enablement in cybersecurity: leveraging frontier AI techniques to improve threat detection, authentication, incident response, and security automation.
  • Security of AI agents and AI‑native systems: designing evaluations, identifying vulnerabilities, and exploring defenses for modern agentic AI frameworks.

You will collaborate with world‑class cybersecurity, AI, and engineering teams to design experiments, develop prototypes, analyze large‑scale data, and present insights to technical stakeholders. Specific duties are listed under Responsibilities below.

Requirements

  • Students pursuing a Ph.D. with a graduation date in December 2026 or later.
  • Strong communication skills: clear, concise written and spoken communication that demonstrates professional judgment and is free of repeated grammatical or typographical errors.

Nice To Haves

  • Strong understanding of modern AI research, including foundation models, agents, and AI security.
  • Hands‑on familiarity with next‑generation AI agent frameworks (MCP, A2A, LangChain, CrewAI, etc.).
  • Working knowledge of Python, Java, and/or JavaScript.
  • Strong understanding of Linux systems and computer networking fundamentals.
  • Familiarity with major cloud environments (AWS, GCP, Azure, OCI, etc.).
  • Research experience in cybersecurity, AI security, adversarial machine learning, or systems security.
  • Experience with agent orchestration, LLM‑based automation, or distributed systems.
  • Experience with computer use agents and the underlying mechanisms.
  • Experience designing experiments or evaluations for AI systems.
  • Publication record in AI/ML, security, systems, or related fields.
  • Excellent written and verbal communication skills.
  • Demonstrated ability to think outside the box and innovate.
  • The ability to take on challenges and address problems head-on.
  • Strong ability to collaborate.
  • Highly driven, resourceful, and results-oriented.
  • Good team player with excellent interpersonal skills.
  • Good analytical and problem-solving skills.
  • Demonstrated ability to lead and navigate through ambiguity.

Responsibilities

  • Conduct applied research on AI‑enabled security capabilities (e.g., agent‑based threat modeling, AI‑driven detection systems, security analytics).
  • Investigate the security of AI agents, including adversarial behaviors, misuse risks, attack vectors, and mitigation techniques.
  • Experiment with and evaluate the latest AI agent frameworks.
  • Develop research prototypes, tools, and experimental pipelines.
  • Collaborate with cross‑functional security, product, and engineering teams; present findings regularly.
  • Contribute to internal tech reports, demos, or publications.