We are seeking a Software Test Engineer to join PNNL's TestOps team, helping assure the quality, reliability, and performance of innovative systems spanning agentic AI platforms, large-scale data orchestration, and real-time intelligence processing. This is an excellent opportunity for mid-career test engineers to apply modern QA and full-stack test engineering practices to mission-critical national security applications, strengthening expertise in end-to-end system validation across APIs, data pipelines, and production-like environments.

Who You Are

You're a motivated test engineer with experience developing test strategies and test artifacts (test plans, test cases, and test reports) and building and maintaining test automation, with strong fundamentals in software engineering and QA best practices. You're comfortable partnering across the full development lifecycle to translate requirements into measurable acceptance criteria and comprehensive test coverage. You're detail-oriented and analytical, with strong debugging skills, and you communicate clearly with both technical and non-technical stakeholders.
What You'll Do

Test Strategy, Planning, and Coverage
- Design, develop, document, execute, and debug test strategies for new and existing software systems, applications, and hardware/software interfaces, applying QA best practices
- Collaborate with cross-functional teams across the full development lifecycle to analyze user needs and requirements; translate requirements into test plans, test cases, traceability, and acceptance criteria
- Ensure comprehensive functional, integration, system, regression, and performance coverage using risk-based approaches and clear release criteria
- Produce high-quality test reports and quality summaries that communicate coverage, results, and risk

Test Automation, Manual Validation, and CI/CD
- Build and maintain automated and manual test solutions across API, UI, integration, end-to-end, and regression layers
- Implement automated tests using Cypress.io, Playwright, or similar frameworks; reduce flakiness and improve reliability
- Integrate test tooling and automated tests into CI/CD pipelines (e.g., GitLab or GitHub), including reporting and quality gates
- Validate end-to-end workflows and integrations across APIs, databases, pipelines, and services using SQL and/or GraphQL where appropriate

AI/ML and Data-Intensive System Validation
- Validate models, data, and end-to-end workflows using data/model validation plus integration, end-to-end, and regression testing, including handling non-deterministic outputs
- Assess AI quality attributes such as accuracy, precision/recall, relevance, bias/fairness, and robustness/consistency, and verify guardrail, safety, and explainability expectations
- Evaluate data quality signals including completeness, correctness, representativeness, drift, and label quality
- Partner with engineers to define and automate AI evaluation and regression approaches that fit mission needs
- Work with AI agents/skills and MCP servers to support test automation workflows and system validation

Platform, Cloud, and Reliability Readiness
- Contribute to quality practices for cloud and containerized deployments by applying a general understanding of cloud concepts (e.g., AWS/Azure services) and common container tooling (e.g., Docker/Podman and Kubernetes fundamentals)
- Use observability data (logs, metrics, and traces) to debug failures, validate monitoring, and improve system testability
- Support performance testing and reliability validation (latency, scalability, stability) for mission-critical services

Stakeholder Partnership and Continuous Improvement
- Partner with end users and stakeholders to prototype, configure, refine, verify, and troubleshoot systems so they meet their intended use
- Identify and evaluate new testing tools, technologies, and methods to improve quality, reliability, and test efficiency

Collaboration & Professional Growth
- Collaborate effectively with software engineers, DevOps/platform teams, data scientists, and stakeholders across the full development and release lifecycle
- Communicate clearly in writing and verbally: document test plans, test results, and defects, and articulate technical risks and quality status in team discussions
- Participate actively in code reviews, test strategy and design discussions, and continuous improvement efforts, with openness to constructive feedback and a willingness to learn best practices
- Incorporate feedback from defects and incidents to improve test coverage, automation reliability, and overall system quality through peer collaboration, self-study, and hands-on learning

National Interest Project Examples
- Detect and prevent smuggling of drugs and contraband at ports of entry [Link]
- Develop large data pipelines to thwart funding for terrorists, nuclear proliferators, drug cartels, and rogue leaders [Link]
- Apply big data solutions to national security problems [Link]
- Apply image classification for nuclear forensics analysis [Link]
- Develop capabilities for scalable geospatial analytics [Link]

This position is based in Richland, WA or Seattle, WA and requires an onsite presence Monday through Thursday, with Friday as required by business needs.
Job Type
Full-time
Career Level
Mid Level