About The Position

Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design, and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google's needs, with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities, and be enthusiastic about taking on new problems across the full stack as we continue to push technology forward.

In this role, you will embrace an attacker mindset, leveraging adversarial ML techniques to simulate sophisticated threats against Google's artificial intelligence (AI) systems. Your mission is to identify and exploit vulnerabilities in ML-based products to enhance Alphabet's ability to detect, respond to, and thwart real-world attacks. You will develop and execute realistic adversarial scenarios, from emulating known threats to devising novel attack vectors, always within our rules of engagement. You will analyze attack impacts, collaborate with blue teams to strengthen defenses, and partner with product teams on robust defense-in-depth controls. You will also raise awareness of adversarial threats across the security, abuse, and privacy domains, communicating business impact to leadership. We are looking for someone who loves both breaking and building secure systems.

Requirements

  • Bachelor’s degree or equivalent practical experience.
  • 2 years of experience with software development in one or more programming languages, or 1 year of experience with an advanced degree.
  • 1 year of experience with machine learning algorithms, architecture, or infrastructure.

Nice To Haves

  • PhD in computer science, mathematics, or a related technical field, or equivalent practical experience.
  • Experience in conducting Red Team exercises.
  • Experience with a wide variety of machine learning technologies and applications, spanning audio, video, and text.
  • Experience in security engineering, computer and network security, authentication, security protocols, and applied cryptography.
  • Experience in research.
  • Excellent people-management and communication skills.

Responsibilities

  • Plan, lead, and execute realistic ML Red Team exercises where you step into the role of an attacker targeting ML deployments in our products.
  • Design and build tools and infrastructure to support our exercises.
  • Design controls and improvements to sharpen our capabilities to defend against attackers in close cooperation with the teams responsible for implementing them.
  • Document and present results to a variety of target audiences, ranging from highly technical engineers to non-technical subject matter experts and executive leadership.
  • Collaborate closely with product teams, helping them identify and implement mitigations against successful attacks on ML deployments.