About The Position

SandboxAQ is a high-growth company delivering AI solutions that address some of the world's greatest challenges. The company's Large Quantitative Models (LQMs) power advances in life sciences, financial services, navigation, cybersecurity, and other sectors. We are a global, tech-focused team that includes experts in AI, chemistry, cybersecurity, physics, mathematics, medicine, engineering, and other specialties. The company emerged from Alphabet Inc. as an independent, growth capital-backed company in 2022, funded by leading investors and supported by a braintrust of industry leaders. At SandboxAQ, we've cultivated an environment that encourages creativity, collaboration, and impact. By investing deeply in our people, we're building a thriving, global workforce poised to tackle the world's epic challenges. Join us to advance your career in pursuit of an inspiring mission, in a community of like-minded people who value entrepreneurialism, ownership, and transformative impact.

About the Role

The SandboxAQ Cybersecurity R&D team is looking for an AI Security Researcher to help us build the future of AI security, where the world's most advanced AI systems are tested, protected, and hardened against the next generation of threats. A successful candidate thrives at the intersection of machine learning, security, and software engineering. You'll lead investigations into how AI systems can fail, and build the tools, rules, and frameworks that keep them secure. This is a hands-on role where you'll break things, fix them, and then harden them for good. You'll have extensive freedom to explore, collaborate, publish, and deploy, shaping the field of AI security from both the offensive and defensive sides.

We're looking for somebody with the curiosity of a researcher, the rigor of an engineer, and the creativity of a hacker. You will be part of a diverse team of ML experts, cryptographers, mathematicians, and physicists, where you will play a key role in the efficient and effective enablement of the cutting-edge technologies being developed at SandboxAQ. We're not another security vendor chasing patch cycles - we want to make an impact, and we want to do it fast.

Requirements

  • PhD or Master's degree in Computer Science or a related field, with a focus on Machine Learning or Cybersecurity
  • Deep expertise in AI/ML, security research, or both, with a proven ability to find and fix real vulnerabilities
  • Hands-on experience with at least one of the following: adversarial LLM red teaming, model extraction or prompt injection, data poisoning or evasion attacks, secure model deployment or sandboxing, or detection and monitoring of AI misuse
  • Strong programming skills in Python and experience with relevant ML and/or agentic frameworks

Nice To Haves

  • Experience contributing to open source projects
  • Experience in the broader cybersecurity domain

Responsibilities

  • Conduct original research into vulnerabilities, exploits, and adversarial behaviors in LLMs, LQMs, agents, and related AI frameworks
  • Build and operationalize AI security frameworks, evaluations and red teaming tools, and defensive mechanisms to protect models and data
  • Partner with engineering and product teams to integrate your findings into real-world systems
  • Lead or contribute to responsible disclosure and research publications that advance the state of the art
  • Stay at the leading edge of AI interpretability, alignment, and adversarial robustness, and use that knowledge to make AI safer for everyone

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: Ph.D. or professional degree
  • Number of Employees: 101-250
