The Security Models Training team builds and operates the large-scale AI training and adaptation engines that power Microsoft Security products, turning cutting-edge research into dependable, production-ready capabilities. As a Principal Applied Scientist - Security AI Models, you will lead end-to-end model development for security scenarios, including privacy-aware data curation, continual pretraining, task-focused fine-tuning, reinforcement learning, and rigorous evaluation. You will drive training efficiency on distributed GPU systems, deepen model reasoning and tool-use skills, and embed responsible AI and compliance into every stage of the workflow. This hands-on, impact-focused role partners closely with engineering and product to translate innovations into shipped experiences; you will design objective benchmarks and quality gates and mentor scientists and engineers to scale results across globally distributed teams. You will combine strong coding and experimentation with a systems mindset to accelerate iteration cycles, improve throughput and reliability, and help shape the next generation of secure, trustworthy AI for our customers.
Job Type: Full-time
Career Level: Senior
Number of Employees: 5,001-10,000 employees