Principal Machine Learning Platform Engineer (Prisma AIRS)

Palo Alto Networks, Santa Clara, CA

About The Position

With Prisma AIRS, Palo Alto Networks is building the world's most comprehensive AI security platform. Organizations are increasingly building complex ecosystems of AI models, applications, and agents, creating dynamic new attack surfaces with risks that traditional security approaches cannot address. In response, Prisma AIRS delivers model security, posture management, AI red teaming, and runtime protection, so our customers can confidently deploy AI-driven innovation while maintaining a strong security posture from development through runtime.

As a Principal Machine Learning Platform Engineer, you will serve as a technical authority and visionary for the Prisma AIRS team. You will be responsible for the architectural design and long-term strategy of the platform's ML inference stack. Beyond individual contribution, you will lead complex technical projects, mentor senior engineers, and set the standard for performance, scalability, and engineering excellence across the organization. Your decisions will have a profound and lasting impact on our ability to deliver cutting-edge AI security solutions at massive scale.

Requirements

  • BS/MS or Ph.D. in Computer Science, a related technical field, or equivalent practical experience.
  • Extensive professional experience in software engineering with a deep focus on MLOps, ML systems, or productionizing machine learning models at scale.
  • Expert-level programming skills in Python are required; experience in a systems language like Go, Java, or C++ is nice to have.
  • Deep, hands-on experience designing and building large-scale distributed systems on a major cloud platform (GCP, AWS, Azure, or OCI).
  • Proven track record of leading the architecture of complex ML systems and MLOps pipelines using technologies like Kubernetes and Docker.
  • Mastery of ML frameworks (TensorFlow, PyTorch) and extensive experience with advanced inference optimization tools (ONNX, TensorRT).
  • Demonstrated expertise with modern LLM inference engines (e.g., vLLM, SGLang, TensorRT-LLM) is required. Open-source contributions in these areas are a significant plus.

Nice To Haves

  • A strong understanding of popular model architectures (e.g., Transformers, CNNs, GNNs).
  • Experience with low-level performance optimization, such as custom CUDA kernel development or the Triton language.
  • Experience with data infrastructure technologies (e.g., Kafka, Spark, Flink).
  • Familiarity with CI/CD pipelines and automation tools (e.g., Jenkins, GitLab CI, Tekton).

Responsibilities

  • Architect and Design: Lead the architectural design of a highly scalable, low-latency, and resilient ML inference platform capable of serving a diverse range of models for real-time security applications.
  • Technical Leadership: Provide technical leadership and mentorship to the team, driving best practices in MLOps, software engineering, and system design.
  • Strategic Optimization: Drive the strategy for model and system performance, guiding research and implementation of advanced optimization techniques like custom kernels, hardware acceleration, and novel serving frameworks.
  • Set the Standard: Establish and enforce engineering standards for automated model deployment, robust monitoring, and operational excellence for all production ML systems.
  • Cross-Functional Vision: Act as a key technical liaison to other principal engineers, architects, and product leaders to shape the future of the Prisma AIRS platform and ensure end-to-end system cohesion.
  • Solve the Hardest Problems: Tackle the most ambiguous and challenging technical problems in large-scale inference, from mitigating novel security threats to achieving unprecedented performance goals.

What This Job Offers

Job Type: Full-time
Career Level: Principal
Education Level: Ph.D. or professional degree
Number of Employees: 5,001-10,000
