Artificial Intelligence could be one of humanity's most useful inventions. At Google DeepMind, we're a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

We're a dedicated scientific community, committed to "solving intelligence" and ensuring our technology is used for widespread public benefit. We've built a supportive and inclusive environment where collaboration is encouraged and learning is shared freely. We don't set limits based on what others think is possible or impossible. We drive ourselves and inspire each other to push boundaries and achieve ambitious goals.

This role requires a passion for advancing information literacy through AI and machine learning, with a focus on assessing the trustworthiness of media (images, audio, and video) and exploring concepts such as authenticity, provenance, and context. Key responsibilities include formulating metrics, running simulations, rapidly prototyping ML techniques, performing exploratory data analysis, collaborating with product teams to drive research, and developing tools and frameworks that accelerate research.

To succeed in this role, you will need to be passionate about advancing information literacy using machine learning and other computational techniques. You'll join an interdisciplinary team of domain experts, ML researchers, and engineers to conduct cutting-edge research and advance the next generation of multimodal AI assistants that support co-investigation and deliberation. Relevant domains may include, but are not limited to, determining media authenticity, context discovery, and open source intelligence investigations. A public example of recent work is Backstory.