Senior AI Security Researcher

NVIDIA · Durham, NC

About The Position

NVIDIA is looking for a Senior AI Security Researcher to help define how frontier AI systems, agentic applications, and AI-enabled security automation are tested, attacked, defended, and safely deployed. You will build new methods, tools, evaluations, and proofs of concept that help NVIDIA understand and reduce security risk across AI models, AI platforms, autonomous agents, cloud services, developer tooling, and accelerated computing systems. We are looking for a researcher who can move fluidly from open-ended research questions to application within working systems: someone who can discover novel failure modes, build rigorous evaluation harnesses, prototype adversarial and defensive techniques, and turn findings into practical mitigations for engineering teams. The right person may come from AI security, ML security, malware data science, cyber-defense research, adversarial ML, LLM security, offensive security, threat hunting, or applied security research at scale.

Requirements

  • 12+ years of experience in AI security, cybersecurity research, applied ML research, offensive security, cyber defense, or related technical fields.
  • Demonstrated record of original research and practical impact, such as deployed security ML systems, AI-security evaluations, CVEs, patents, publications, conference talks, open-source tools, production mitigations, or funded research programs.
  • Hands-on ability to build working research systems in Python and modern ML/data tooling such as PyTorch, JAX, TensorFlow, scikit-learn, Pandas, NumPy, Spark, BigQuery, or comparable platforms.
  • Experience with one or more AI-security areas: LLM security, adversarial ML, model evaluation, agent security, prompt injection, model backdoors, data poisoning, model abuse, secure RAG, synthetic data, or AI-enabled security automation.
  • Strong cybersecurity foundation, including threat modeling, adversary simulation, exploit or vulnerability research, malware analysis, network defense, threat hunting, detection engineering, digital forensics, secure code review, or incident-response automation.
  • Ability to work across ambiguous research problems and practical product constraints, translating findings into prioritized recommendations and measurable security outcomes.
  • Bachelor's degree or equivalent experience in Computer Science, Machine Learning, Cybersecurity, or a related field.
  • Experience leading AI-security research for major models, AI platforms, security products, or large-scale production systems.
  • A track record of building security ML systems that operate at real-world scale.

Nice To Haves

  • Published work or public technical leadership in AI security, malware data science, adversarial ML, LLM security, cyber-defense automation, or offensive AI.
  • Experience developing benchmarks, challenge datasets, red-team tools, evaluation suites, or simulation environments for AI and security systems.
  • Deep knowledge of attacker tradecraft, including living-off-the-land techniques, supply-chain abuse, application-layer AI attacks, data exfiltration, and abuse of autonomous tooling.
  • Experience with low-level systems security.
  • History of mentoring researchers, winning or leading research programs, filing patents, publishing papers, or speaking at major security and AI venues.

Responsibilities

  • Develop and answer open-ended AI security research questions that help NVIDIA understand, measure, and reduce risk in frontier models, agentic systems, AI platforms, and AI-enabled products.
  • Develop practical methods, prototypes, evaluations, or tools that reveal how AI systems can fail under adversarial conditions and how those risks can be mitigated.
  • Explore a range of AI security problems, such as LLM and agent security, adversarial testing, model evaluation, cyber-defense automation, vulnerability discovery, secure deployment, or autonomous response.
  • Translate research into usable outcomes for engineering and security teams, including proof-of-concept demonstrations, benchmarks, technical guidance, mitigations, and secure-by-design recommendations.
  • Collaborate across offensive security, product security, AI research, platform, cloud, and infrastructure teams to connect research insights with NVIDIA's highest-impact security priorities.
  • Help shape NVIDIA's AI-security research strategy by mentoring others, identifying emerging risks, and building repeatable practices for evaluating and defending AI systems.

Benefits

  • Equity
  • Benefits