Research Scientist, Tech Lead Manager, Model Threat Mitigation

DeepMind · Mountain View, CA
Posted 44 days ago · $248,000 - $349,000

About The Position

Artificial Intelligence could be one of humanity's most useful inventions. At Google DeepMind, we're a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

About Us

Model distillation is a key innovation enabling the acceleration of AI, turning large general models into small, specialized models used across the industry. However, distillation techniques can also be used to steal critical model capabilities, representing a significant threat to the intellectual property and integrity of our foundational models.

The Role

As the Tech Lead Manager for Model Threat Mitigation, you will lead the workstream dedicated to protecting Google DeepMind's most valuable AI assets. You will grow and manage the current team of Research Scientists, Research Engineers, and Software Engineers responsible for both detecting unauthorized distillation and mitigating these threats. This is a unique opportunity to define the comprehensive defense strategy for our models. You will oversee the full defense lifecycle, from researching novel detection systems and identifying capability theft to deploying core mitigations and contributing to model training.
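For context on the core technique this workstream defends against, the sketch below shows a minimal knowledge-distillation loss in PyTorch (the standard teacher-student soft-label recipe). It is purely illustrative: the function name, temperature, and tensor shapes are assumptions for the example, not details of Google DeepMind's models or systems.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        # The student is trained to match the teacher's softened output distribution.
        student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
        teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
        # KL(teacher || student), scaled by T^2 per the usual distillation recipe.
        return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature**2

    # Illustrative usage: random logits stand in for outputs collected from a large
    # "teacher" model and a smaller "student" being trained to imitate it.
    teacher_logits = torch.randn(4, 1000)
    student_logits = torch.randn(4, 1000, requires_grad=True)
    loss = distillation_loss(student_logits, teacher_logits)
    loss.backward()

Unauthorized distillation applies this same idea to outputs harvested from someone else's model, which is why detecting and mitigating it is central to the role.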

Requirements

  • Ph.D. in Computer Science or a related quantitative field, or a B.S./M.S. in a similar field with 5+ years of relevant industry experience.
  • 2+ years of experience in technical leadership or people management, managing research scientists or software engineers.
  • Demonstrated research or product expertise in a field related to adversarial ML, model security, model evaluation, pre-training, post-training, or distillation.
  • Experience designing and implementing large-scale ML systems and/or counter-abuse infrastructure.

Nice To Haves

  • Deep technical expertise in one or more of the following: model distillation, model stealing, Reinforcement Learning, Supervised Fine-Tuning, embeddings analysis, or security.
  • Experience managing research teams with a focus on publication or applying novel research to production environments.
  • Experience interacting with leadership and cross-functional partners (e.g. Legal, Product).
  • Strong software engineering skills and experience with ML frameworks like JAX, PyTorch, or TensorFlow.
  • Current or prior US security clearance.

Responsibilities

  • Lead and Manage the Team: Build, manage, and mentor a diverse team of researchers and engineers. Foster a culture of technical excellence and creative problem-solving in a high-stakes, adversarial environment.
  • Define Technical Strategy: Set the research and engineering roadmap for the Distillation Defense workstream, communicating this to stakeholders and your team.
  • Drive Impact & Policy: Lead the effort to set protective policies for Core Model Capabilities. Work with leadership to define acceptable risk levels and trade-offs between model defensibility and performance.
  • Cross-Functional Collaboration: Partner closely with Product, Legal, and other key teams across GDM and Google to ensure defenses are integrated into the Gemini ecosystem and to influence the broader standard for responsible AI defense.

What This Job Offers

Job Type: Full-time
Career Level: Manager
Industry: Publishing Industries
Education Level: Ph.D. or professional degree
