Security is among the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The Microsoft Security organization accelerates Microsoft's mission and bold ambitions to ensure that our company and industry are securing digital technology platforms, devices, and clouds in our customers' heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world.

Are you a red teamer looking to break into the AI field? Do you want to find AI failures in Microsoft's largest AI systems, which impact millions of users? Join Microsoft's AI Red Team, where you'll work alongside security and AI hacking experts to proactively test for failures in Microsoft's largest AI systems.

We are looking for an AI Security and Safety Researcher to join our team. As a red teamer dedicated to improving AI security and helping our customers expand their use of our AI systems, you'll apply the newest AI security, frontier harms, and safety research to emulate adversarial attacks on Microsoft's AI models, systems, products, and features. You will advise product teams on how to mitigate risks before technology reaches our customers. In particular, we are looking for a practitioner with deep academic or practical experience in AI frontier harms, such as cyber, autonomy, and loss of control of AI systems. Beyond that experience, we want people with an AI-obsessed hacker mindset to join our team.

Our team is an interdisciplinary group of red teamers, adversarial machine learning (ML) researchers, Safety & Responsible AI experts, AI researchers, and software developers with the mission of proactively finding failures across all of Microsoft's AI portfolio. In this role, you will red team AI models, such as our Phi series and MAI models, and applications, including Bing Copilot, Security Copilot, GitHub Copilot, Office Copilot, and Windows Copilot. This work is sprint-based: working with AI Safety, Security, and Product Development teams, we run operations that aim to find safety and security risks before they materialize. Our reports and findings directly inform key business decisions by internal leadership. This is a fast-moving team with multiple roles and responsibilities within the AI Security and Safety space; people who love to provide agile, practical insights and who enjoy jumping in to solve ambiguous problems excel in this role.

More about our approach to AI Red Teaming: https://www.microsoft.com/en-us/security/blog/2023/08/07/microsoft-ai-red-team-building-future-of-safer-ai/

Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
Job Type: Full-time
Career Level: Mid Level
Number of Employees: 5,001-10,000 employees