About The Position

Copilot Security is at the core of Microsoft’s mission to deliver trusted, human‑centered AI experiences. We make security and resilience intrinsic to every Copilot interaction—across devices, platforms, and ecosystems. Our work spans secure identity flows, defenses against emerging threats such as prompt injection, and privacy‑first systems that scale globally across Microsoft Copilot surfaces. As Copilot enters a new era of agentic AI, where systems reason, plan, and act on behalf of users, security can no longer be static or rules‑based. We are building adaptive, learning‑driven defenses that bring judgment, context, and “security common sense” directly into model behavior and agentic workflows.

We are seeking a Senior Machine Learning Engineer to tackle some of the hardest problems at the intersection of applied ML, AI security, and agentic systems. This is a hands‑on role focused on designing, training, evaluating, and shipping ML‑powered defenses that protect Copilot users from threats such as prompt injection, adversarial manipulation, unsafe delegation, and abuse of agentic workflows. You will work closely with security engineers, applied scientists, and product teams to translate emerging threat patterns into production ML systems—from detection models and policy learners to evaluation frameworks that measure real‑world robustness. Your work will directly shape how Copilot reasons safely, applies guardrails, and earns user trust at global scale.

Agentic AI introduces fundamentally new security risks: indirect prompt injection, cross‑tool privilege escalation, unsafe reasoning chains, and subtle information‑flow failures. Your work will help define how AI systems develop and apply security judgment, enabling Copilot to act safely and responsibly while still unlocking powerful new capabilities for hundreds of millions of users worldwide.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more.
As employees, we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S., or within a 25-mile commute of a designated office in other countries, are expected to work from the office at least four days per week. This expectation is subject to local law and may vary by jurisdiction.

Requirements

  • Bachelor's Degree in Computer Science or related technical field AND 4+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.
  • 4+ years of hands‑on experience building and shipping machine learning systems in production.
  • Solid foundation in ML fundamentals, including classification, anomaly detection, representation learning, and model evaluation.
  • Proficiency in Python and experience with modern ML frameworks (e.g., PyTorch, JAX, TensorFlow).
  • Experience designing end‑to‑end ML pipelines: data collection, training, evaluation, deployment, and monitoring.
  • Ability to reason about adversarial behavior, threat models, and failure modes in AI/ML systems.

Nice To Haves

  • Master's Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python; OR Bachelor's Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in those languages; OR equivalent experience.
  • Experience working on AI safety, trust, or security‑adjacent ML problems, including prompt injection, abuse detection, or adversarial ML.
  • Familiarity with agentic or LLM‑based systems, including tool calling, multi‑step reasoning, or orchestration flows.
  • Experience building ML evaluation and observability systems for real‑world AI behavior (e.g., adversarial testing, red‑team loops, robustness metrics).
  • Exposure to distributed ML systems, large‑scale data processing, or model serving in cloud environments.
  • Ability to clearly communicate complex ML and security concepts to engineering and non‑ML stakeholders.

Responsibilities

  • Design, train, and deploy ML‑based defenses for threats such as prompt injection, adversarial inputs, and abuse of agentic workflows.
  • Develop adaptive detection and policy models that learn from evolving attacker behavior rather than relying solely on static rules or signatures.
  • Build and own evaluation frameworks for AI security, including adversarial testing, red‑teaming support, and continuous robustness measurement across real Copilot scenarios.
  • Partner with security and engineering teams to integrate ML defenses into secure orchestration frameworks that govern agent delegation, tool calling, and action execution.
  • Apply ML to encode security “common sense” and judgment into AI responses, balancing usefulness, safety, and user intent.
  • Monitor and analyze telemetry to improve model performance, reduce false positives/negatives, and guide iterative defense improvements.
  • Collaborate cross‑functionally with product, privacy, and AI platform teams to land agentic security patterns across Copilot and MAI.
  • Document and share applied ML security techniques, helping establish best practices for secure agentic AI across Microsoft.