About The Position

We are seeking strong Research Scientists with expertise in AI research and experience in interdisciplinary sociotechnical modeling to join a multimodal safety research effort within Google DeepMind's Frontier AI unit. This role requires a passion for understanding and modeling the interactions between AI and society, a strong awareness of the AI alignment and safety landscape, and a penchant for developing novel ideas, methods, interfaces, and tools. This is a unique opportunity to contribute to impactful research and advance Google DeepMind's mission towards Artificial General Intelligence (AGI).

Artificial Intelligence could be one of humanity's most useful inventions. At Google DeepMind, we're a team of scientists, engineers, machine learning experts, and more, working together to advance the state of the art in artificial intelligence and ultimately achieve Artificial General Intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority. We're a dedicated scientific community, committed to "solving intelligence" and ensuring our technology is used for widespread public benefit. We've built a supportive and inclusive environment where collaboration is encouraged and learning is shared freely. We don't set limits based on what others think is possible or impossible; we drive ourselves and inspire each other to push boundaries and achieve ambitious goals.

Our team is part of Google DeepMind's Frontier AI unit. Our mission is to advance the frontiers of safety and inclusion in multimodal AI, build new capabilities into Google's flagship models and products, and break new ground on AI alignment. We approach alignment research with an ecosystem view, partnering across the development and deployment cycle and grounding our work in real-world impact on global users and communities.
We are a research team with a mandate to invest in longer-term bets and explore innovative approaches that deliver breakthrough improvements to models (Gemini) and products. Our work is at the frontier of augmented oversight for multimodal AI, and our research advances have directly shaped multiple versions of the Gemini and Nano Banana models.

As a Research Scientist on the team, you will lead new breakthrough research directions. You will join a team working to supercharge the exploration, assessment, and steering of evolving AI behaviors, with a focus on subjective and creative tasks. You will tackle the underlying research questions to improve collaborative specification of alignment objectives and assessment of adherence to desired behaviors. You will research new methods that enable AI agents to monitor real-world social context and dynamically evaluate and evolve system behaviors over long time horizons. You will develop new paradigms for human+AI rating that consider systemic behaviors, adapt to human feedback, and proactively seek context.

Research Scientists at Google DeepMind lead the development of novel tools and algorithms in pursuit of Artificial General Intelligence. Joining from top academic or industrial labs, they collaborate across fields to tackle fundamental AI questions using expertise in deep learning, computer vision, and generative architectures. This role requires independent judgment to navigate complex, ambiguous problems and explore diverse technical avenues. Your work will drive breakthroughs within Google DeepMind, Google products, and the AI alignment community.

Requirements

  • PhD degree in Computer Science, Machine Learning, or a related technical field.
  • Strong publication record in top machine learning conferences (e.g., NeurIPS, CVPR, ICML, ICLR, ICCV, ECCV).
  • Demonstrated hands-on experience developing multimodal AI models and systems.
  • Strong programming skills in Python and experience with at least one major deep learning framework (e.g., JAX/Flax/Gemax).
  • Experience conducting independent research and development, including experimental design, implementation, and analysis.

Nice To Haves

  • Proven expertise in working with and tuning large-scale vision language models.
  • Experience prototyping with VLMs using modern prompting strategies.
  • Experience finetuning and post-training LLMs using RL.
  • Experience developing agentic AI solutions to complex problems.
  • Excitement about collaborating across orgs and disciplines to leverage diverse perspectives and expertise and find creative solutions.
  • Interest in and a strong awareness of the AI alignment / safety / responsibility / fairness landscape.
  • Experience with Generative AI techniques and architectures.
  • Familiarity with Reinforcement Learning or alignment methods.
  • Experience with multimodal learning, integrating information from different data types (e.g., vision, audio, text).

Responsibilities

  • Research: Generate new ideas, keep up with the state of the art in the field, discuss research directions with other researchers.
  • Execute: Design, rapidly implement, and rigorously evaluate cutting-edge ideas, methods, interfaces, and tools to explore new sociotechnical AI systems.
  • Communicate: Report and present research findings and developments clearly and efficiently both internally and externally, verbally and in writing.
  • Collaborate: Suggest and engage in inter- and intra- team collaborations to meet ambitious research goals, while also driving significant individual contributions.
  • Drive technical projects: Take ownership of substantial technical projects, from ideation and design to implementation and evaluation, often involving cross-functional collaboration.

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: Ph.D. or professional degree
