About The Position

The CoreAI organization at Microsoft builds the end-to-end Artificial Intelligence (AI) stack and is core to Azure AI innovation and differentiation, as well as to all of Microsoft’s flagship products, from GitHub to Office, Teams, and Xbox. We are the team building Responsible AI, Azure OpenAI, Model as a Service, Azure Machine Learning (ML), Cognitive Services, and the global Azure AI infrastructure for running the largest AI workloads on the planet. We ensure Microsoft ships AI systems that are safe, secure, and trustworthy, empowering every person and organization on the planet to achieve more. Our team sits at the intersection of cutting-edge AI research and planet-scale production systems, powering technologies like GitHub Copilot and Azure OpenAI. We are hiring a Principal Applied Science Manager to join our team!

Requirements

  • Bachelor's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 8+ years related experience (e.g., statistics, predictive analytics, research) OR Master's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 6+ years related experience (e.g., statistics, predictive analytics, research) OR Doctorate in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 5+ years related experience (e.g., statistics, predictive analytics, research) OR equivalent experience.
  • 3+ years of people management experience.
  • Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screening: Microsoft Cloud Background Check. This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.

Nice To Haves

  • Experience and familiarity with large language models (LLMs).
  • Experience with and a solid foundation in large distributed systems, algorithms, and software engineering principles.
  • 5+ years of experience in a research/ML engineering or an applied research scientist position, ideally with a focus on AI safety.
  • Hands-on experience with deep learning and transformer-based models.
  • Strong problem-solving and analytical skills, with a proactive approach to challenges.
  • Ability to thrive in fast-moving environments where priorities shift and definitions evolve.
  • Willingness to take end-to-end ownership and learn whatever is necessary to get results.
  • Comfort working independently while thriving in cross-team collaborations.
  • Understanding of methods for training and fine-tuning LLMs, including distillation, supervised fine-tuning, and policy optimization.

Responsibilities

  • Lead the development and deployment of novel fine-tuning techniques that leverage synthetic data generation, preference modeling, and advanced training pipelines (e.g., SFT, RL-based methods, hybrid approaches) to improve model alignment at scale.
  • Apply these techniques to train models with stronger alignment properties, including honesty, character, harmlessness, and robustness to misuse, distribution shift, and adversarial inputs.
  • Design, build, and maintain rigorous evaluation frameworks to measure alignment and safety properties across capabilities, failure modes, and deployment contexts, including offline benchmarks, automated probes, and human-in-the-loop evaluations.
  • Collaborate cross-functionally with research, product, infrastructure, and deployment teams to translate alignment improvements into production-ready models, balancing safety, capability, latency, and cost considerations.
  • Establish scalable processes and tooling to automate data generation, training, evaluation, and analysis workflows, enabling the team to iterate rapidly and operate reliably as model size, complexity, and deployment surface grow.
  • Provide technical leadership and people management for a team of applied scientists and engineers, setting clear technical direction, prioritizing high-impact work, and mentoring team members in both research rigor and production excellence.
  • Influence the broader safety and alignment strategy by identifying emerging risks, proposing new mitigation approaches, and contributing to long-term roadmaps for responsible model development.

What This Job Offers

  • Job Type: Full-time
  • Career Level: Principal
  • Number of Employees: 5,001-10,000
