AI Research Engineer II

Microsoft
Redmond, WA

About The Position

Microsoft Security is dedicated to making the world a safer place by providing end-to-end, AI-powered protection across identities, devices, applications, and data. The Identity Security organization within Microsoft Security focuses on protecting billions of users from account compromise, fraud, and abuse across the Microsoft Entra ID and Microsoft Account (MSA) ecosystems. Our team is central to this mission, developing large-scale AI systems and intelligent agents to detect, investigate, and prevent malicious activity in authentication and identity flows.

We are investing significantly in next-generation AI capabilities, including GenAI-powered agents, retrieval systems, and intelligent decisioning platforms, to modernize security workflows. These systems enable real-time reasoning over identity signals, automated investigation of risky activity, and intelligent responses to emerging threats.

We are seeking an AI Research Engineer II to contribute to this effort, working at the intersection of applied research and engineering to design and build production AI systems that integrate large language models, traditional machine learning, and agent-based architectures. This role involves translating cutting-edge AI concepts into real-world systems that power security detections and automated workflows at global scale.

Requirements

  • Bachelor's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or a related field AND 2+ years of related experience (e.g., statistics, predictive analytics, research); OR
  • Master's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or a related field AND 1+ year of related experience (e.g., statistics, predictive analytics, research); OR
  • Doctorate in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or a related field; OR equivalent experience.
  • Ability to meet Microsoft, customer and/or government security screening requirements.
  • Microsoft Cloud Background Check: This position requires passing the Microsoft background check and the Microsoft Cloud background check upon hire/transfer and every two years thereafter.

Nice To Haves

  • Experience building AI agents or multi-agent systems
  • Experience translating research ideas into production systems
  • Experience deploying GenAI systems in production environments
  • Familiarity with Azure OpenAI, Azure AI Foundry, or similar platforms
  • Familiarity with agent frameworks or orchestration systems
  • Experience designing evaluation frameworks and metrics for AI systems
  • Knowledge of AI safety and responsible AI practices
  • Knowledge of prompt injection and other security risks in LLM systems
  • Experience working on security, identity, fraud, or risk-based systems

Responsibilities

  • Design and build GenAI-powered agents that reason over signals, retrieve knowledge, and take intelligent actions to support complex workflows.
  • Build AI systems using retrieval-augmented generation (RAG), prompt engineering, and tool-using agents that handle multi-step workflows.
  • Build and integrate agent frameworks and AI services that enable reasoning, planning, and task execution.
  • Apply traditional machine learning techniques (e.g., classification, anomaly detection, ranking) alongside LLM-based systems to improve overall system performance.
  • Prototype and iterate on novel AI approaches, translating emerging research into practical production systems.
  • Implement context engineering, structured outputs, and guardrails to improve system reliability and consistency.
  • Develop and deploy production-grade AI systems, ensuring scalability, performance, and resilience.
  • Design and execute evaluation frameworks to measure groundedness, relevance, correctness, and model and system performance (precision, recall, etc.).
  • Continuously improve systems through offline experimentation and production monitoring.
  • Prototype and deliver advanced capabilities such as multi-agent orchestration, stateful reasoning and memory, and context-aware decision systems.
  • Collaborate with partner teams and researchers to bring new AI capabilities into production.
  • Incorporate safe and responsible AI practices, including mitigations for hallucinations, misuse, and prompt injection risks.

Benefits

  • Certain roles may be eligible for benefits and other compensation.