Research Scientist, Agentic Safety

DeepMind, Mountain View, CA
About The Position

Accelerate research in strategic projects that enable trustworthy, robust, and reliable agentic systems alongside a group of research scientists and engineers on a mission-driven team. Together, you will apply ML and other computational techniques to a wide range of challenging problems.

About Us

We're a dedicated scientific community, committed to "solving intelligence" and ensuring our technology is used for widespread public benefit. We've built a supportive and inclusive environment where collaboration is encouraged and learning is shared freely. We don't set limits based on what others think is possible or impossible. We drive ourselves and inspire each other to push boundaries and achieve ambitious goals.

The Role

As a Research Scientist in Strategic Initiatives, you will use your machine learning expertise to collaborate with other machine learning scientists and engineers within our strategic initiatives programs. Your primary focus will be on building technologies that make AI agents safer. AI agents are increasingly used in sensitive contexts with powerful capabilities: they can access personal data, confidential enterprise data and code, interact with third-party applications or websites, and write and execute code to fulfil user tasks. Ensuring that such agents are reliable, secure, and trustworthy is a large scientific and engineering challenge with huge potential impact. In this role, you will serve this mission by proposing and evaluating novel approaches to agentic safety and by building prototype implementations and production-grade systems to validate and ship your ideas, in collaboration with a team of researchers and engineers from SSI and the rest of Google and GDM.

Requirements

  • PhD in computer science, security, or a related field, or equivalent practical experience
  • Passion for accelerating the development of safe agents using innovative technologies, demonstrated via a portfolio of prior projects (GitHub repos, papers, blog posts)
  • Strong programming experience
  • Demonstrated record of Python implementations of LLM pipelines
  • Strong AI and Machine Learning background

Nice To Haves

  • Experience in applying machine learning techniques to problems surrounding scalable, robust, and trustworthy deployments of models
  • Experience with GenAI language models, programming languages, compilers, formal methods, and/or private storage solutions
  • Demonstrated success in creative problem solving for scalable teams and systems
  • A real passion for AI!

Responsibilities

  • Invent and implement novel recipes for making agents safer, spanning both improvements to the models that power the agents and the systems built around them
  • Develop strategies to hill-climb leaderboards and debug possible performance and safety issues in frontier agents
  • Integrate novel agentic technologies into research- and production-grade prototypes
  • Work with product teams to gather research requirements and consult on the deployment of research-based solutions to help deliver value incrementally
  • Amplify the impact by generalizing solutions into reusable libraries and frameworks for safer AI agents across Google, and by sharing knowledge through design docs, open source, or external blog posts

What This Job Offers

Job Type: Full-time
Career Level: Mid Level
Industry: Publishing Industries
Education Level: Ph.D. or professional degree
