AI Ethics and Safety Policy Researcher

DeepMind, Mountain View, CA

About The Position

We are looking for an AI Ethics and Safety Policy Researcher to join our Responsible Development & Innovation (ReDI) team at Google DeepMind (GDM). In this role, you will be responsible for proactively identifying, researching, and addressing emerging AI ethics and safety challenges. These challenges relate to new AI capabilities and modalities, including but not limited to persuasion, social intelligence, personalisation, agentics, and robotics. You will conduct novel research and partner with internal and external experts to develop, adapt, and implement practical guidelines and policies that mitigate emerging risks. These guidelines and policies will ensure that GDM develops and deploys its technology in a way that is aligned with the company's AI Principles.

Artificial intelligence could be one of humanity's most useful inventions. At Google DeepMind, we're a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

As an AI Ethics and Safety Policy Researcher, your focus will be identifying, deeply understanding, and mitigating emerging AI risks. You should expect your outputs to take various forms, depending on the topic or need. These may include:

  • original research papers or other publications on emerging AI ethics and safety issues
  • ideal model behaviour policies that inform model development and steer evaluations
  • guidelines for research or governance teams to follow when developing or deploying technology
  • artefacts, processes, or coordination mechanisms needed to best support the creation and implementation of those guidelines and policies at GDM and beyond

Requirements

  • A PhD, or equivalent experience, in a relevant field, such as AI ethics or safety, computer science, social sciences, or public policy
  • Proven expertise in AI ethics, AI policy or a related field
  • Demonstrable track record of implementing policies
  • Strong research and writing skills, evidenced by publications in top journals and conference proceedings
  • Experience working within interdisciplinary teams
  • Ability to communicate complex concepts and ideas simply for a range of collaborators
  • Ability to think critically and creatively about complex ethical issues

Responsibilities

  • Systematically identify risks associated with emerging and proliferating AI capabilities
  • Conduct original research on identified challenges, gathering information from a variety of sources, including external and internal experts, academic literature, and industry reports
  • Design and build operational frameworks for mitigating model risks, translating these frameworks into standardised artefacts such as universal training datasets and evaluation protocols
  • Collaborate with model development teams to help them adopt and apply these frameworks, guiding them in defining project-specific metrics and criteria for significant results
  • Communicate findings and recommendations to stakeholders, including researchers, engineers, product managers, and executives
  • Support teams across GDM in interpreting the frameworks and ensuring that training and evaluation data align with them as appropriate
  • Work closely with relevant teams across the organisation to align and update the frameworks, ensuring their continued relevance in a rapidly changing environment

Benefits

  • Bonus
  • Equity
  • Benefits


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: Ph.D. or professional degree
  • Number of Employees: 5,001-10,000 employees
