Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

About Us

Model distillation is a key innovation in accelerating AI, turning large general models into the small, specialized models used across the industry (a minimal illustration of the technique appears after this description). However, distillation techniques can also be used to steal critical model capabilities, representing a significant threat to the intellectual property and integrity of our foundational models.

The Role

As part of the Security & Privacy Research Team at Google DeepMind, you will take on a holistic role in securing our AI assets. You will both identify unauthorized distillation attempts and actively harden our models against distillation. This is a unique opportunity to contribute to the full lifecycle of defense for the Gemini family of models. You will be at the forefront, detecting threats in the wild and building resilience into our models.

About You

We are looking for a creative and rigorous research scientist, research engineer, or software engineer who is passionate about pioneering the critical field of model defense. You thrive on ambiguity and are comfortable working across the full spectrum of security, from thinking like an adversary to building proactive protections. You are driven to build robust systems that protect the future of AI development.
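For readers unfamiliar with the technique named above, the following is a minimal sketch of a distillation training loss, assuming the standard temperature-scaled formulation (Hinton et al., 2015). The function and variable names are illustrative only and do not reflect DeepMind's internal code.

# Sketch: temperature-scaled knowledge-distillation loss in JAX.
# A "student" model is trained to match the softened output
# distribution of a larger "teacher" model on the same inputs.
import jax
import jax.numpy as jnp

def distillation_loss(student_logits: jnp.ndarray,
                      teacher_logits: jnp.ndarray,
                      temperature: float = 2.0) -> jnp.ndarray:
    """Mean KL(teacher || student) over temperature-softened distributions.

    The temperature**2 factor keeps gradient magnitudes roughly
    comparable across different temperature settings.
    """
    teacher_probs = jax.nn.softmax(teacher_logits / temperature, axis=-1)
    student_log_probs = jax.nn.log_softmax(student_logits / temperature, axis=-1)
    # KL divergence per example: sum_c p_t(c) * (log p_t(c) - log p_s(c))
    kl = jnp.sum(
        teacher_probs * (jnp.log(teacher_probs + 1e-9) - student_log_probs),
        axis=-1,
    )
    return (temperature ** 2) * jnp.mean(kl)

# Usage example with random logits (batch of 4 examples, 10 classes).
teacher_logits = jax.random.normal(jax.random.PRNGKey(0), (4, 10))
student_logits = jax.random.normal(jax.random.PRNGKey(1), (4, 10))
print(distillation_loss(student_logits, teacher_logits))

The same mechanism is what makes unauthorized distillation possible: anyone with query access to a model's outputs can, in principle, use them as the teacher signal.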
Job Type: Full-time
Career Level: Mid Level
Education Level: Ph.D. or professional degree
Number of Employees: 1,001-5,000 employees