Principal AI Test Engineer - Prisma Access
Palo Alto Networks - Santa Clara, CA
Job Description
Our Mission
At Palo Alto Networks®, we're united by a shared mission: to protect our digital way of life. We thrive at the intersection of innovation and impact, solving real-world problems with cutting-edge technology and bold thinking. Here, everyone has a voice, and every idea counts. If you're ready to do the most meaningful work of your career alongside people who are just as passionate as you are, you're in the right place.

Who We Are
To be the cybersecurity partner of choice, we must trailblaze the path and shape the future of our industry. Our employees work at this each day, guided by our values: Disruption, Collaboration, Execution, Integrity, and Inclusion. We weave AI into the fabric of everything we do and use it to augment the impact every individual can have. If you are passionate about solving real-world problems and ideating beside the best and the brightest, we invite you to join us.

We believe collaboration thrives in person. That's why most of our teams work from the office full time, with flexibility when it's needed. This model supports real-time problem-solving, stronger relationships, and the kind of precision that drives great outcomes.

Job Summary
Your Career
We are seeking an AI-savvy Principal QA Test Engineer to transform how we test and validate Prisma Access Add-on Services (Remote Browser Isolation, Application Acceleration, Application Security, and Privileged Remote Access). As a technical leader, you will design and implement autonomous QA workflows that leverage AI to achieve unprecedented test coverage, efficiency, and defect detection. You will take ownership of quality outcomes, mentor teams, and drive innovation in how we approach testing at scale.
YOUR IMPACT

AI-Driven Testing & Autonomous Workflows
- Design and implement autonomous QA workflows using AI agents and LLM-based testing frameworks to achieve continuous, intelligent test execution
- Build AI-powered test generation systems that automatically create comprehensive test suites from requirements, specifications, and production telemetry
- Develop intelligent test oracles using LLMs to validate complex system behaviors, API responses, and user experiences beyond traditional assertions
- Create AI-assisted defect prediction and prevention systems that proactively identify reliability and security risks before they reach production
- Implement agentic testing workflows that autonomously explore application state spaces, identify edge cases, and generate regression tests

Quality Engineering & Measurable Outcomes
- Own and drive key quality metrics: Defect Containment Effectiveness (DCE >95%) and Customer Found Regression (CFR <30%)
- Lead root cause analysis (RCA) for production incidents and customer-found defects, implementing durable fixes that reduce MTTR and defect escape rates
- Drive systematic quality improvements through data-driven insights, reducing vulnerability remediation time and production incident frequency
- Participate in system design to ensure quality, observability, and testability are built in throughout the Prisma Access feature development lifecycle

Test Infrastructure & Platform Development
- Develop and enhance AI-augmented test infrastructure that enables scalable, flexible, and context-aware testing reflecting real-world deployment scenarios
- Build RAG (Retrieval-Augmented Generation) pipelines for test knowledge bases, enabling intelligent test selection and prioritization
- Design shared testing platforms and patterns for multi-dimensional testing: functional, scale, performance, resiliency, security, and chaos engineering
- Integrate AI models and prompts into CI/CD pipelines for continuous quality assessment and intelligent test orchestration
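For illustration only (not part of the role description): metrics like DCE are typically simple ratios over defect counts. The sketch below assumes one common definition of each metric, internally caught defects over all defects for DCE and regression share of customer-found defects for CFR; exact definitions vary by organization, and all names here are hypothetical.

```python
# Illustrative sketch of quality-metric math. Metric definitions are
# assumptions (they vary by organization); function names are hypothetical.

def defect_containment_effectiveness(internal_defects: int, escaped_defects: int) -> float:
    """DCE: percentage of all defects caught before release.

    Assumed definition: internal / (internal + escaped) * 100.
    """
    total = internal_defects + escaped_defects
    if total == 0:
        return 100.0  # no defects at all counts as full containment
    return internal_defects / total * 100.0

def customer_found_regression_rate(customer_regressions: int, total_customer_defects: int) -> float:
    """CFR: percentage of customer-found defects that are regressions (assumed definition)."""
    if total_customer_defects == 0:
        return 0.0
    return customer_regressions / total_customer_defects * 100.0

# Example: 98 defects caught internally, 2 escaped to customers -> DCE = 98%
print(defect_containment_effectiveness(98, 2))  # 98.0
```

Under this assumed definition, hitting DCE >95% means fewer than 1 in 20 defects escapes to production.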
Technical Leadership & Collaboration
- Provide technical leadership in browser security, cloud orchestration, distributed systems, and AI-assisted quality engineering
- Mentor and upskill team members on AI/ML testing techniques, prompt engineering for test automation, and modern quality practices
- Collaborate with Development, SRE, Product Management, and Technical Marketing to align quality strategy with business outcomes
- Lead design discussions and articulate technical trade-offs clearly to cross-functional stakeholders

Continuous Learning & Innovation
- Stay current with AI/ML advancements and translate them into practical testing innovations (e.g., agentic workflows, multimodal testing, AI-powered observability)
- Experiment with emerging AI testing tools and frameworks; share findings and drive team adoption of proven practices
- Leverage customer deployment data and telemetry to enhance test strategies and improve CFD efficacy

Qualifications
Your Experience
- 10+ years of experience in QA/Test Automation Engineering with demonstrated impact on product quality and team practices
- 3+ years of hands-on experience with AI/ML technologies, including LLMs, prompt engineering, and AI-assisted development workflows
- Proven track record of building autonomous testing systems or AI-powered quality engineering tools
- Deep expertise in cybersecurity, cloud networking, or distributed systems testing
- Strong proficiency in Python and/or Go for test automation and AI workflow development
- Experience with LLM frameworks (LangChain, LlamaIndex, AutoGen) and AI model integration
- Expertise in REST API testing, web UI automation (Selenium, Playwright, Puppeteer), and cloud-native application testing

Technical Skills
- Hands-on experience with AI-powered test generation, intelligent test selection, and autonomous test execution
- Experience building RAG pipelines, vector databases, and knowledge graphs for test intelligence
- Strong understanding of prompt engineering, few-shot learning, and fine-tuning for testing use cases
- Proficiency with observability platforms (Prometheus, Grafana, Splunk) and log analysis using AI
- Experience with cloud providers (AWS, Azure, GCP) and infrastructure-as-code (Terraform, CloudFormation)
- Knowledge of microservices architecture, distributed systems testing, and performance optimization
- Experience with test management systems (TestRail) and defect tracking (JIRA)

Preferred Experience
- Experience with browser security solutions: enterprise browsers, remote browser isolation, browser extensions
- Background in building AI agents for software testing or autonomous DevOps workflows
- Experience with multi-agent systems and orchestration frameworks
- Knowledge of security testing, penetration testing, or vulnerability assessment automation
- Contributions to open-source testing frameworks or AI/ML testing tools
- Experience measuring and improving quality metrics (DCE, CFR, ADDR, MTTR, defect escape rate)

Education
M.S./B.S. degree in Computer Science, Electrical Engineering, or equivalent military experience required

EXPECTATIONS AT PRINCIPAL LEVEL
Scope & Complexity
Given ambiguous goals (
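For illustration only: the intelligent test selection mentioned in the requirements can be sketched as ranking stored test-case descriptions against a change description. A real RAG pipeline would use embeddings and a vector database; plain term overlap (Jaccard similarity) stands in here so the example is self-contained. All names and the scoring scheme are hypothetical simplifications, not any specific framework.

```python
# Minimal sketch of RAG-style test prioritization: rank test-case
# descriptions by similarity to a change description. Jaccard overlap of
# terms stands in for embedding similarity; all names are hypothetical.

def _terms(text: str) -> set[str]:
    """Lowercased word set, with surrounding punctuation stripped."""
    return {w.strip(".,()").lower() for w in text.split() if w.strip(".,()")}

def prioritize_tests(change_description: str, test_cases: dict[str, str], top_k: int = 2) -> list[str]:
    """Return test IDs ranked by Jaccard similarity to the change description."""
    change = _terms(change_description)

    def score(desc: str) -> float:
        t = _terms(desc)
        union = change | t
        return len(change & t) / len(union) if union else 0.0

    ranked = sorted(test_cases, key=lambda tid: score(test_cases[tid]), reverse=True)
    return ranked[:top_k]

tests = {
    "TC-101": "browser isolation session rendering and policy enforcement",
    "TC-102": "application acceleration throughput under load",
    "TC-103": "REST API authentication token expiry handling",
}
# An API-auth change should surface the API-auth test first.
print(prioritize_tests("fix token expiry bug in REST API auth", tests))
```

Swapping the overlap score for cosine similarity over stored embeddings, with test descriptions held in a vector store, turns this toy into the retrieval step of an actual RAG selection pipeline.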
Created: 2026-03-20