Machine Learning Security Researcher

Trail of Bits
$175,000 - $300,000

About The Position

Trail of Bits seeks a Machine Learning Security Researcher within our growing AI Assurance team. This role involves conducting cutting-edge security research on machine learning systems deployed by the world's most sophisticated AI organizations. The position focuses on identifying novel attack vectors, failure modes, and security vulnerabilities in state-of-the-art ML systems—from training pipelines and model architectures to deployment infrastructure and inference systems. You will work directly with leading AI labs and frontier model developers to ensure their systems are robust against emerging threats. This is a research role that requires deep AI/ML expertise, with no application security background necessary. The role involves contributing to the broader AI/ML security research community through tool development, threat modeling frameworks, and publications, while helping to define what secure AI development looks like at the frontier.

Requirements

  • PhD-level expertise (completed, near completion, or equivalent research experience) in machine learning, deep learning, or related fields with demonstrated research contributions.
  • Strong understanding of adversarial machine learning, including familiarity with attack paradigms such as evasion attacks, poisoning attacks, model inversion, membership inference, backdoor attacks, or prompt injection/jailbreaking techniques.
  • Extensive hands-on experience with modern ML frameworks (PyTorch, JAX, TensorFlow), transformer architectures, training methodologies, and the full ML development lifecycle from data pipelines to deployment.
  • Track record of high-quality research demonstrated through publications, preprints, open-source contributions, or other artifacts that the ML community recognizes.
  • Strong software engineering skills in Python and at least one systems language (C/C++, Rust, or similar), with experience building research prototypes and tooling.
  • Demonstrated ability to quickly learn new domains, identify security-critical edge cases, and think adversarially about complex systems without needing an explicit application security background.
  • Ability to distill complex AI/ML security research into clear, actionable recommendations for technical and executive audiences, and present findings to sophisticated clients who are themselves AI/ML experts.

Nice To Haves

  • Familiarity with CUDA programming, GPU optimization, or ML systems performance is a plus.
  • Publications at top-tier ML conferences (NeurIPS, ICML, ICLR) or security venues (USENIX Security, S&P, CCS) are valued but not required.

Responsibilities

  • Conduct original security research on cutting-edge machine learning systems, identifying novel attack vectors including adversarial examples, model poisoning, data extraction attacks, and jailbreaks for large language models and other foundation models.
  • Work directly with top-tier AI organizations (frontier labs, leading AI companies) to assess the security posture of their most advanced ML systems, providing expertise that matches their internal research capabilities.
  • Design and build novel security testing frameworks, evaluation methodologies, and open-source tools specifically for AI/ML security research—including adversarial robustness testing, model extraction detection, and automated vulnerability discovery systems.
  • Develop comprehensive threat models for emerging AI/ML deployment patterns, anticipate future attack vectors, and establish security frameworks that can scale with rapidly evolving AI capabilities.
  • Publish findings, present at security and AI/ML conferences, and contribute to the broader AI/ML security research discourse through papers, blog posts, and open-source contributions.
  • Bridge AI/ML research and security engineering, translating complex adversarial AI/ML concepts to diverse stakeholders and working closely with Trail of Bits' broader security research teams.

Benefits

  • Competitive salary complemented by performance-based bonuses.
  • Fully company-paid insurance packages, including health, dental, vision, disability, and life.
  • A solid 401(k) plan with a 5% match of your base salary.
  • 20 days of paid vacation, with flexibility for more in accordance with local regulations.
  • 4 months of parental leave to cherish the arrival of new family members.
  • $10,000 in relocation assistance for those interested in moving to NYC.
  • $1,000 work-from-home stipend to set up a comfortable and productive home office.
  • Annual $750 Learning & Development stipend for continuous personal and professional growth.
  • Company-sponsored all-team celebrations, including travel and accommodation, to foster community and recognize achievements.
  • Philanthropic contribution matching up to $2,000 annually.