About Us

Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts, and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring that safety and ethics are the highest priority.

Model distillation is a key technique for accelerating AI: it turns large, general-purpose models into the small, specialized models used across the industry (a brief illustrative sketch appears at the end of this description). However, the same techniques can also be used to steal critical model capabilities, posing a significant threat to the intellectual property and integrity of our foundation models.

The Role

As part of the Security & Privacy Research Team at Google DeepMind, you will build our first line of defense against distillation and other security threats: a state-of-the-art detection system. You will research, design, and train novel classifiers that identify distillation attempts and other attacks in real time. This is a unique opportunity to build the core detection systems that protect Gemini, and to define how we monitor and protect GDM models against sophisticated threats.

About You

We are looking for a research scientist, research engineer, or software engineer who is passionate about building the detection systems that protect the future of AI. You are an expert in machine learning, with a deep focus on designing, training, and evaluating classifiers. You thrive on ambiguity and enjoy iterating toward state-of-the-art detection capabilities.
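For readers unfamiliar with the technique referenced above, here is a minimal, illustrative sketch of standard knowledge distillation (soft-label matching in the style of Hinton et al., 2015), written in PyTorch. This is not Google DeepMind's pipeline; the function name, temperature value, and tensor shapes are all hypothetical.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        """KL divergence between temperature-softened teacher and student outputs."""
        soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
        log_probs = F.log_softmax(student_logits / temperature, dim=-1)
        # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
        return F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature ** 2

    # The student minimizes this loss against the teacher's outputs, compressing
    # the large model's behavior into a smaller, specialized one.
    student_logits = torch.randn(8, 32000, requires_grad=True)  # batch x vocab
    teacher_logits = torch.randn(8, 32000)
    distillation_loss(student_logits, teacher_logits).backward()

In the adversarial setting the posting describes, an attacker would run a loop like this against a model's public outputs; the role above centers on detecting the query patterns such extraction produces.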
Job Type: Full-time
Career Level: Mid Level
Education Level: Ph.D. or professional degree
Number of Employees: 1,001-5,000