Chemist (FTC - 12 Month Fixed Term Contract)

DeepMind · Mountain View, CA

About The Position

As a Chemist in the Responsible Development & Innovation (ReDI) team at Google DeepMind, you will be a principal architect of the safety protocols governing the intersection of Large Language Models (LLMs) and the chemical sciences. You will design and execute rigorous safety evaluations and inform mitigation strategies that ensure our frontier models accelerate scientific discovery without compromising global security. This role is pivotal in deciding when and how our most advanced AI systems are released to the world.

About Us

Artificial Intelligence could be one of humanity's most useful inventions. At Google DeepMind, we're a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

The Role

We are seeking a PhD-level Chemist with post-doctoral or equivalent experience in organic synthesis. You will serve as a technical authority on our red-teaming efforts, simulating adversarial scenarios to identify where AI models might inadvertently provide actionable information on the synthesis and/or weaponisation of known or potential Chemical Warfare Agents (CWAs). You will apply your knowledge of chemistry to devise evaluation methodologies (e.g. red-teaming, knowledge elicitation studies) and contribute to building and running these evaluations on new models. You will analyse the results from evaluations, communicate them clearly to advise and inform decision-makers on the safety of our AI systems, and use them to refine our harm frameworks and inform our mitigation strategies.
In this role, you will work closely with other Subject-Matter Experts (SMEs) in the chemical, biological, radiological and nuclear domains, Research Engineers and Research Scientists focused on developing AI systems, as well as experts in AI ethics and policy.

Requirements

  • Chemistry Expertise: PhD in synthetic organic chemistry with at least two years post-doctoral or equivalent experience.
  • Publication Record: Proven experience publishing as a first author in high-impact general science or chemistry-specific journals, and presenting work at international chemistry conferences. For candidates coming from national security roles, classified or internal reporting experience will be considered in lieu of a public publication record.
  • Security Domain Expertise: Comprehensive understanding of the Chemical Weapons Convention (CWC) and other national and international CWA agreements/treaties, chemical defence protocols, and the landscape of dual-use research in the chemical domain.
  • Systems Thinking: The ability to translate high-level chemical risks into technical requirements for AI safety.
  • Communication Excellence: A proven ability to distil complex technical findings into clear, actionable advice for non-specialist stakeholders.

Nice To Haves

  • Knowledge of CWA defence, including synthesis, detection, and countermeasures.
  • Direct experience with CBRNE mitigation, non-proliferation, or relevant international security stakeholders.
  • Familiarity with the machine learning lifecycle and AI Safety Frameworks.
  • Experience using and/or developing computational chemistry tools (e.g., AlphaFold, retrosynthesis engines, etc.).
  • Working knowledge of the Frontier Safety Framework (FSF), Critical Capability Levels (CCLs), and similar documents published by other leading AI labs.
  • Understanding of Google DeepMind AI research output (e.g., AlphaFold, GNoME, WeatherNext, etc.), and AI products (e.g., Gemini, Nano Banana, Genie, etc.).
  • Passion for the ethical deployment of frontier technologies and AI policy.

Responsibilities

  • Architect of Safety Evaluations: Build rigorous, scalable frameworks to evaluate model proficiency in overcoming key bottlenecks in CWA precursor acquisition, chemical synthesis, and weaponisation.
  • Strategic Advisory: Analyse evaluation results to brief executive decision-makers on model safety, directly influencing deployment "Go/No-Go" decisions.
  • Harm Framework Innovation: Refine our internal safety taxonomies to account for emergent risks at the intersection of general AI and specialist models like AlphaFold.
  • Collaborative Mitigation: Partner with Research Engineers to revise mitigation strategies and refine harm frameworks for identified chemical risks. Work with other SMEs in the chemical, biological, radiological, nuclear, and conventional explosive domains to build a unified defence against CBRNE-related risks.
  • External Engagement: Stay abreast of global chemical security trends and international non-proliferation policy through engagement with external international, governmental, and non-governmental organisations.

Benefits

  • Bonus
  • Equity
  • Benefits


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: Ph.D. or professional degree

© 2024 Teal Labs, Inc