Software Engineer CoreAI

Microsoft, Redmond, WA

About The Position

Core AI is at the forefront of Microsoft's mission to redefine how software is built and experienced. We are responsible for building the foundational platforms, services, programming models, and developer experiences that power the next generation of applications using Generative AI. Our work enables developers and enterprises to harness the full potential of AI to create intelligent, adaptive, and transformative software. In this role, you will help drive production‑critical model quality and API functionality across the AI serving pipelines that power Copilot and Azure OpenAI Foundry.

At Microsoft, our mission is to empower every person and every organization on the planet to achieve more. We live this mission every day through a culture that embraces a growth mindset, values diverse perspectives, and encourages continuous learning. We believe in creating an environment where individuals bring their best selves to work, collaborate openly, and build technology that makes a meaningful impact. Join us and help shape the future of the world.

This role is targeting an immediate start date.

Requirements

  • Bachelor's degree in Computer Science or a related technical discipline, with proven experience coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python, OR equivalent experience.
  • Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screening: Microsoft Cloud Background Check. This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.

Nice To Haves

  • Demonstrated expertise in solving complex technical challenges in one or more domains such as distributed systems, AI/ML infrastructure, developer platforms, or cloud services.
  • Ability to operate with high ownership and autonomy in fast‑moving, ambiguous environments.
  • Proven experience with ML model evaluation, quality metrics, or testing frameworks for AI systems.
  • Proven experience building CI/CD validation pipelines, large‑scale test automation, or data‑driven quality dashboards.

Responsibilities

  • Designs, implements, and operates model quality evaluation for advanced AI and agentic systems.
  • Validates API correctness, reliability, and contract stability as model capabilities evolve, including tool use, agents, and multimodal workflows.
  • Builds automated test harnesses, quality gates, and telemetry pipelines to detect regressions across model versions and deployments.
  • Partners closely with infrastructure, safety, and downstream product teams to ensure evaluation signals directly inform release readiness.
  • Drives clarity in ambiguous problem spaces by identifying gaps and proposing new evaluation methodologies as AI capabilities scale.