Principal AI Safety Program Manager

Microsoft · Redmond, WA

About The Position

Security is one of the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The Microsoft Security organization accelerates Microsoft's mission and bold ambitions to ensure that our company and industry are securing digital technology platforms, devices, and clouds in our customers' heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world.

Artificial Intelligence has the potential to change the world around us, but we must act ethically along the way. At Microsoft, we are committed to the advancement of AI driven by ethical principles. We are looking for a Principal AI Safety Program Manager to join us and to create strategies for improving our approach to AI Security & Safety to deliver on that promise. Are you passionate about security and technology in society? This may be a great opportunity for you!

Who We Are

We are the Artificial Generative Intelligence Security (AeGIS) team, and we are charged with ensuring justified confidence in the safety of Microsoft's generative AI products. This encompasses providing an infrastructure for AI safety & security; serving as a coordination point for all things AI incident response; researching the quickly evolving threat landscape; red teaming AI systems for failures; and empowering Microsoft with this knowledge.
We partner closely with product engineering teams to mitigate and address the full range of threats that face AI services – from traditional security risks, to novel security threats like indirect prompt injection, to entirely AI-native threats like the manufacture of NCII or the use of AI to run automated scams. We are a mission-driven team, intent on delivering trustworthy AI and on running effective response processes when it does not live up to those standards.

We are always learning. Insatiably curious. We lean into uncertainty, take risks, and learn quickly from our mistakes. We build on each other's ideas, because we are better together. We are motivated every day to empower others to do and achieve more through our technology and innovation. Together we make a difference for all of our customers, from end users to Fortune 50 enterprises. Our team has people from a wide variety of backgrounds, previous work histories, and life experiences, and we are eager to maintain and grow that diversity. Our diversity of backgrounds and experiences enables us to create innovative solutions for our customers. Our culture is highly inclusive, collaborative, and customer focused.

What We Do

While some aspects of security & safety can be formalized in software or process, many things require thinking and experience – things like threat modeling, identifying the right places and ways to mitigate risks, and building response strategies. In the world of AI security, this requires an awareness and understanding of threats and risks far beyond those from traditional security; you don't just need to worry about an access control failure, you need to worry about the user of your system having an abusive partner who's spying on them. The Empowering Microsoft team within AeGIS is charged with continually distilling our understanding of AI security & safety into training, documentation, methodologies, and tools that empower the people designing, building, testing, and using systems to do so securely & safely.
While the team's top priority is to train Microsoft's own teams, we provide these resources to Microsoft's customers and the world at large. For us, AI Security & Safety is not about compliance; it's about trust.

Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Requirements

  • Bachelor's Degree AND 6+ years of experience in engineering, product/technical program management, data analysis, or product development, OR equivalent experience.
  • 3+ years of experience managing cross-functional and/or cross-team projects.
  • Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings:
  • Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.

Nice To Haves

  • Bachelor's Degree AND 12+ years of experience in engineering, product/technical program management, data analysis, or product development, OR equivalent experience.
  • 8+ years of experience managing cross-functional and/or cross-team projects.
  • 1+ year(s) of experience reading and/or writing code (e.g., sample documentation, product demos).
  • 5+ years of product experience in any of the safety disciplines in computer science (abuse, security, privacy, etc.).
  • 5+ years of experience assessing systems for practical security, privacy, or safety flaws and helping teams mitigate identified risks.
  • 3+ years of experience in a socio-technical safety space (e.g., online safety, privacy).
  • 2+ years of experience using AI to build tools and/or agents AND 1+ year(s) of experience reading and/or writing code (e.g., sample documentation, product demos).

Responsibilities

  • Identify patterns of AI safety risk as well as best practices from a broad spectrum of technical and other sources and distill those down to their essential pieces.
  • Transform those essential pieces into content that can be communicated to a range of partner teams and audiences, so that they understand what needs to be addressed (for patterns that require mitigation) or how to incorporate it into their work (for best practices).
  • Design methodologies for teams to build safely and effectively, with a clear eye toward making them directly useful and applicable by real teams.
  • Partner with AI creation platforms that target non-pro AI builders to incorporate methodologies that help those builders understand the risks of what they're building and empower them to make informed risk tradeoffs.
  • Ideate and prototype tools that help both pro and non-pro AI builders understand the risks of what they're building throughout its development – from ideation to deployment.
  • Work with our education and training team to develop content in a range of formats (presentations, interactive workshops, labs, and whitepapers) to bring the knowledge of how to build AI safely and securely to a wide audience.
  • Build collaborative relationships with other stakeholder teams working on Responsible AI to scale out AI safety methodologies.
  • Build collaborative relationships with other security teams to scale out AI security methodologies.
  • Help define new policies and procedures (or changes to existing ones) that ensure that customers can have justified trust in Microsoft’s AI services.
  • Embody our Culture and Values.