About The Position

Microsoft Sentinel Platform NEXT R&D Labs is the strategic incubation engine behind the next generation of AI-native security products. We are looking to hire a Principal AI Security Researcher who thrives in a bottoms-up, fast-paced, highly technical environment. The Sentinel Platform team builds cloud solutions at a scale few companies in the industry are required to support, leveraging state-of-the-art technologies to deliver holistic protection to a planet-scale user base. We pursue long-horizon bets while landing near-term impact, taking ideas from zero-to-one (0→1) prototypes to Minimum Viable Products (MVPs) and then to one-to-many (1→N) platform integration across Microsoft Defender, Sentinel, Entra, Intune, and Purview. Our culture blends ambition and scientific rigor with curiosity, humility, and customer obsession; we invest in new knowledge, collaborate with world-class scientists and engineers, and tackle the immense challenge of protecting millions of users and organizations.

As a Principal AI Security Researcher, you will be the cybersecurity expert on our product-focused applied research and development (R&D) team, which focuses on artificial intelligence and machine learning and drives innovation from concept to production. You will work on a wide range of AI/ML challenges for cybersecurity, including, but not limited to, system design, evaluating the outputs of our AI models and systems, and collaborating with world-class scientists and engineers to deliver robust, scalable, and responsible AI systems for security applications.

Requirements

  • Doctorate in Statistics, Mathematics, Computer Science, Computer Security, or related field AND 3+ years of experience in software development lifecycle, large-scale computing, threat analysis or modeling, cybersecurity, vulnerability research, and/or anomaly detection; OR Master's Degree in one of those fields AND 4+ years of such experience; OR Bachelor's Degree in one of those fields AND 6+ years of such experience; OR equivalent experience.
  • Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role.
  • Microsoft Cloud Background Check: This position will be required to pass the Microsoft background check and Microsoft Cloud background check upon hire/transfer and every two years thereafter.

Nice To Haves

  • Doctorate in Statistics, Mathematics, Computer Science, Computer Security, or related field AND 5+ years of experience in software development lifecycle, large-scale computing, threat analysis or modeling, cybersecurity, vulnerability research, and/or anomaly detection; OR Master's Degree in one of those fields AND 8+ years of such experience; OR Bachelor's Degree in one of those fields AND 12+ years of such experience; OR equivalent experience.
  • 5+ years of experience in cybersecurity, AI, software development lifecycle, large-scale computing, modeling, and/or anomaly detection.
  • 5+ years of professional experience in security operations, pen-testing, researching cyber threats, and understanding attacker methodology, tools, and infrastructure.
  • Demonstrated autonomy and success driving zero-to-one (0→1) initiatives.
  • ML background and hands-on experience.
  • Experience with the ML lifecycle, including model training, fine-tuning, evaluation, and continuous monitoring.
  • Coding ability in one or more languages (e.g., Python, C#, C++, Rust, JavaScript/TypeScript).
  • Familiarity with and previous work in cybersecurity (e.g., threat detection/response, SIEM/SOAR, identity, endpoint, cloud security), including familiarity with analyst workflows.

Responsibilities

  • Security AI Research: Be the security expert on our AI-focused team by helping evaluate our systems on real data, improving system inputs, triaging and investigating AI-based findings, leveraging AI and security expertise to incubate and transform our products, and educating applied scientists in cybersecurity.
  • Collaboration: Partner with engineering, product, and research teams to translate scientific advances into robust, scalable, and production-ready solutions.
  • AI/ML Research: Design, develop, and analyze novel AI and machine learning models and algorithms for security and enterprise-scale applications.
  • Experimentation & Evaluation: Design and execute AI experiments, simulations, and evaluations to validate models and system performance, ensuring measurable improvements.
  • Customer Impact: Engage with enterprise customers and field teams to co-design solutions, gather feedback, and iterate quickly based on real-world telemetry and outcomes.