Software Engineer

Microsoft
Redmond, WA

About The Position

The Copilot Security Team is chartered with securing Microsoft’s agentic and autonomous AI systems. We build the adversarial testing, mitigations, telemetry, guardrails, and evaluation capabilities that reduce real‑world risk across Copilot and emerging AI agents. Our mission: increase safety, resilience, and trustworthiness by proactively identifying weaknesses and engineering durable defenses at scale.

As a Software Engineer (IC3), you will design and implement foundational components that harden Copilot’s agentic systems against jailbreaks, prompt injection, toolchain misuse, unsafe autonomy, and other emerging XPIA‑class attack vectors. You will contribute to both adversarial evaluations and the Agentic Security Platform: the shared services, pipelines, and instrumentation that enable reproducible, auditable security evaluation across Microsoft. This role is ideal for engineers who are passionate about secure‑by‑design engineering, love building well‑constructed systems, and want to help define the future of responsible AI.

Why Join the Copilot Security Team?

Join a team at the center of Microsoft’s most critical AI safety work. You will build systems that shape how Copilot, and future autonomous AI, behaves in the world. Your work will directly reduce real‑world risk, improve product safety, and influence Microsoft‑wide engineering standards for agentic AI.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Requirements

  • Bachelor's Degree in Computer Science or related technical field AND 2+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.
  • Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screening: Microsoft Cloud Background Check. This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.

Nice To Haves

  • Master's Degree in Computer Science or related technical field with proven experience coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.
  • 2+ years of engineering experience working on production systems, services, or frameworks.
  • Experience with AI/ML workflows, agentic systems, or evaluation frameworks (SEVAL/CARES, safety scorecards, dataset generation, etc.).
  • Understanding of adversarial ML concepts (jailbreaks, prompt injection, toolchain misuse) or threat‑driven engineering.
  • Experience with telemetry/observability stacks (Kusto, OpenTelemetry, metrics/logging pipelines).
  • Solid collaboration, communication, and documentation skills; ability to partner with PM/TPM, applied science, and security teams.
  • Passion for security, reliability, and responsible deployment of AI systems.
  • Experience building software systems or services, including debugging, testing, or CI/CD practices.
  • Familiarity with distributed systems, RESTful APIs, or cloud‑based services (Azure preferred).

Responsibilities

  • Design, build, and maintain high‑quality services and libraries that support adversarial testing, security evaluations, and risk measurement for agentic and autonomous AI systems.
  • Implement adversarial test harnesses (e.g., jailbreak, prompt injection, toolchain misuse) and integrate them into shared evaluation pipelines such as SEVAL/CARES.
  • Develop telemetry, instrumentation, and observability features that improve detection coverage, reproducibility, and security‑relevant data collection.
  • Build CI/CD hooks, governance features, and safety-criteria validation that support safe rollout and evaluation of Copilot capabilities.
  • Work closely with applied scientists, red teamers, and partner engineering teams to translate top risks into testable requirements and secure engineering patterns.
  • Contribute to the Agentic Security Platform—shared services that support evaluation at scale across Copilot and foundation‑model–powered applications.
  • Participate in design reviews, threat modeling sessions, and code reviews with a focus on reliability, security, and defense‑in‑depth.
  • Write clear technical documentation, sample code, and best‑practice guidance consumed by internal product teams.
  • Ensure systems meet expectations for availability, reliability, performance, and scalability.
  • Drive continuous improvement by automating workflows, reducing operational toil, and improving development velocity for the broader ecosystem.
  • Build reusable components and practices that teams across Microsoft can adopt to improve safety and evaluation consistency.