:Principal AI Test Engineer - Prisma Access

Palo Alto Networks · Santa Clara, CA
Posted 10 days ago · Onsite

About The Position

We are seeking an AI-savvy Principal QA Test Engineer to transform how we test and validate Prisma Access Add-on Services (Remote Browser Isolation, Application Acceleration, Application Security, and Privileged Remote Access). As a technical leader, you will design and implement autonomous QA workflows that leverage AI to achieve unprecedented test coverage, efficiency, and defect detection. You will take ownership of quality outcomes, mentor teams, and drive innovation in how we approach testing at scale.

Requirements

  • 10+ years of experience in QA/Test Automation Engineering with demonstrated impact on product quality and team practices
  • 3+ years of hands-on experience with AI/ML technologies, including LLMs, prompt engineering, and AI-assisted development workflows
  • Proven track record of building autonomous testing systems or AI-powered quality engineering tools
  • Deep expertise in cybersecurity, cloud networking, or distributed systems testing
  • Strong proficiency in Python and/or Go for test automation and AI workflow development
  • Experience with LLM frameworks (LangChain, LlamaIndex, AutoGen) and AI model integration
  • Expertise in REST API testing, web UI automation (Selenium, Playwright, Puppeteer), and cloud-native application testing
  • Hands-on experience with AI-powered test generation, intelligent test selection, and autonomous test execution
  • Experience building RAG pipelines, vector databases, and knowledge graphs for test intelligence
  • Strong understanding of prompt engineering, few-shot learning, and fine-tuning for testing use cases
  • Proficiency with observability platforms (Prometheus, Grafana, Splunk) and log analysis using AI
  • Experience with cloud providers (AWS, Azure, GCP) and infrastructure-as-code (Terraform, CloudFormation)
  • Knowledge of microservices architecture, distributed systems testing, and performance optimization
  • Experience with test management systems (TestRail) and defect tracking (JIRA)

Nice To Haves

  • Experience with browser security solutions: enterprise browsers, remote browser isolation, browser extensions
  • Background in building AI agents for software testing or autonomous DevOps workflows
  • Experience with multi-agent systems and orchestration frameworks
  • Knowledge of security testing, penetration testing, or vulnerability assessment automation
  • Contributions to open-source testing frameworks or AI/ML testing tools
  • Experience measuring and improving quality metrics (DCE, CFR, ADDR, MTTR, defect escape rate)

Responsibilities

AI-Driven Testing & Autonomous Workflows

  • Design and implement autonomous QA workflows using AI agents and LLM-based testing frameworks to achieve continuous, intelligent test execution
  • Build AI-powered test generation systems that automatically create comprehensive test suites from requirements, specifications, and production telemetry
  • Develop intelligent test oracles using LLMs to validate complex system behaviors, API responses, and user experiences beyond traditional assertions
  • Create AI-assisted defect prediction and prevention systems that proactively identify reliability and security risks before they reach production
  • Implement agentic testing workflows that autonomously explore application state spaces, identify edge cases, and generate regression tests

Quality Engineering & Measurable Outcomes

  • Own and drive key quality metrics: Defect Containment Effectiveness (DCE >95%), Customer Found Regression (CFR <5%), and Automation Defect Detection Ratio (ADDR >30%)
  • Lead root cause analysis (RCA) for production incidents and customer-found defects, implementing durable fixes that reduce MTTR and defect escape rates
  • Drive systematic quality improvements through data-driven insights, reducing vulnerability remediation time and production incident frequency
  • Participate in system design to ensure quality, observability, and testability are built-in throughout the Prisma Access feature development lifecycle

Test Infrastructure & Platform Development

  • Develop and enhance AI-augmented test infrastructure that enables scalable, flexible, and context-aware testing reflecting real-world deployment scenarios
  • Build RAG (Retrieval-Augmented Generation) pipelines for test knowledge bases, enabling intelligent test selection and prioritization
  • Design shared testing platforms and patterns for multi-dimensional testing: functional, scale, performance, resiliency, security, and chaos engineering
  • Integrate AI models and prompts into CI/CD pipelines for continuous quality assessment and intelligent test orchestration

Technical Leadership & Collaboration

  • Provide technical leadership in browser security, cloud orchestration, distributed systems, and AI-assisted quality engineering
  • Mentor and upskill team members on AI/ML testing techniques, prompt engineering for test automation, and modern quality practices
  • Collaborate with Development, SRE, Product Management, and Technical Marketing to align quality strategy with business outcomes
  • Lead design discussions and articulate technical trade-offs clearly to cross-functional stakeholders

Continuous Learning & Innovation

  • Stay current with AI/ML advancements and translate them into practical testing innovations (e.g., agentic workflows, multimodal testing, AI-powered observability)
  • Experiment with emerging AI testing tools and frameworks; share findings and drive team adoption of proven practices
  • Leverage customer deployment data and telemetry to enhance test strategies and reduce customer-found defects (CFDs)
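
The metric targets named above (DCE >95%, CFR <5%, ADDR >30%) can be computed from raw defect counts. The sketch below assumes common industry definitions — DCE as internally caught defects over all defects, CFR as customer-reported regressions over all regressions, ADDR as automation-caught defects over all internally caught defects — since the posting itself does not define the formulas.

```python
# Hedged sketch: computing the posting's quality metrics from defect counts.
# The formulas are common-industry assumptions, not definitions from the posting.
from dataclasses import dataclass


@dataclass
class DefectCounts:
    internal: int              # defects caught before release (QA, CI, review)
    customer_found: int        # defects that escaped to customers
    automation_found: int      # subset of internal defects caught by automation
    regressions_total: int     # all regressions in the period
    regressions_customer: int  # regressions first reported by customers


def pct(part: int, whole: int) -> float:
    return 100.0 * part / whole if whole else 0.0


def quality_metrics(c: DefectCounts) -> dict[str, float]:
    total = c.internal + c.customer_found
    return {
        "DCE": pct(c.internal, total),                            # containment
        "CFR": pct(c.regressions_customer, c.regressions_total),  # escape rate
        "ADDR": pct(c.automation_found, c.internal),              # automation yield
    }


if __name__ == "__main__":
    counts = DefectCounts(internal=96, customer_found=4, automation_found=40,
                          regressions_total=50, regressions_customer=2)
    print(quality_metrics(counts))
    # DCE 96.0, CFR 4.0, ADDR ~41.7 -- all within the stated targets
```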