Software Test Engineer II

Pacific Northwest National Laboratory
Richland, WA
Onsite

About The Position

We are seeking a Software Test Engineer to join PNNL’s TestOps team, helping assure the quality, reliability, and performance of innovative systems spanning agentic AI platforms, large-scale data orchestration, and real-time intelligence processing. This is an excellent opportunity for mid-career test engineers to apply modern QA and full-stack test engineering practices to mission-critical national security applications, strengthening expertise in end-to-end system validation across APIs, data pipelines, and production-like environments.

Who You Are

You’re a motivated test engineer with experience developing test strategies and test artifacts (test plans, test cases, and test reports) and building and maintaining test automation, with strong fundamentals in software engineering and QA best practices. You’re comfortable partnering across the full development lifecycle to translate requirements into measurable acceptance criteria and comprehensive test coverage. You’re detail-oriented and analytical, with strong debugging skills, and you communicate clearly with both technical and non-technical stakeholders.
What You’ll Do

Test Strategy, Planning, and Coverage

  • Design, develop, document, execute, and debug test strategies for new and existing software systems, applications, and hardware/software interfaces, applying QA best practices
  • Collaborate with cross-functional teams across the full development lifecycle to analyze user needs and requirements; translate requirements into test plans, test cases, traceability, and acceptance criteria
  • Ensure comprehensive functional, integration, system, regression, and performance coverage using risk-based approaches and clear release criteria
  • Produce high-quality test reports and quality summaries that communicate coverage, results, and risk

Test Automation, Manual Validation, and CI/CD

  • Build and maintain automated and manual test solutions across API, UI, integration, end-to-end, and regression layers
  • Implement automated tests using Cypress.io, Playwright, or similar frameworks; reduce flakiness and improve reliability
  • Integrate test tooling and automated tests into CI/CD pipelines (e.g., GitLab or GitHub), including reporting and quality gates
  • Validate end-to-end workflows and integrations across APIs, databases, pipelines, and services using SQL and/or GraphQL where appropriate

AI/ML and Data-Intensive System Validation

  • Validate models, data, and end-to-end workflows using data/model validation plus integration, E2E, and regression testing, including handling non-deterministic outputs
  • Assess AI quality attributes such as accuracy, precision/recall, relevance, bias/fairness, and robustness/consistency, and verify guardrails/safety/explainability expectations
  • Evaluate data quality signals including completeness, correctness, representativeness, drift, and label quality
  • Partner with engineers to define and automate AI evaluation and regression approaches that fit mission needs
  • Work with AI agents/skills and MCP servers to support test automation workflows and system validation

Platform, Cloud, and Reliability Readiness

  • Contribute to quality practices for cloud and containerized deployments by applying a general understanding of cloud concepts (e.g., AWS/Azure services) and common container tooling (e.g., Docker/Podman and Kubernetes fundamentals)
  • Use observability (logs/metrics/traces) to debug failures, validate monitoring, and improve system testability
  • Support performance testing and reliability validation (latency, scalability, stability) for mission-critical services

Stakeholder Partnership and Continuous Improvement

  • Partner with end users and stakeholders to prototype, configure, refine, verify, and troubleshoot systems to meet intended use
  • Identify and evaluate new testing tools, technologies, and methods to improve quality, reliability, and test efficiency through continuous improvement

Collaboration & Professional Growth

  • Collaborate effectively with software engineers, DevOps/platform teams, data scientists, and stakeholders across the full development and release lifecycle
  • Communicate clearly in writing and verbally by documenting test plans, test results, and defects; articulate technical risks and quality status in team discussions
  • Participate actively in code reviews, test strategy/design discussions, and continuous improvement efforts, with openness to constructive feedback and a willingness to learn best practices
  • Incorporate feedback from defects and incidents to improve test coverage, automation reliability, and overall system quality through peer collaboration, self-study, and hands-on learning

National Interest Project Examples

  • Detect and prevent smuggling of drugs and contraband at ports of entry [Link]
  • Develop large data pipelines to thwart funding for terrorists, nuclear proliferators, drug cartels, and rogue leaders [Link]
  • Apply big data solutions to national security problems [Link]
  • Apply image classification for nuclear forensics analysis [Link]
  • Develop capabilities for scalable geospatial analytics [Link]

This position is based in Richland, WA or Seattle, WA and requires an onsite presence Monday through Thursday, with Friday as required by business needs.
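The AI/ML validation work described above involves asserting on non-deterministic model outputs, where exact-match checks are too brittle. One common pattern is to score the output against a reference and assert it clears a similarity threshold. The sketch below is a hypothetical illustration of that pattern using a simple token-overlap (Jaccard) score; the reference text, threshold value, and function names are all invented for the example, not PNNL code.

```python
def token_jaccard(a: str, b: str) -> float:
    """Jaccard similarity over lowercased word sets (0.0 to 1.0)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)


def assert_semantically_close(output: str, reference: str, threshold: float = 0.6) -> None:
    """Regression check for non-deterministic output: pass if the
    output is 'close enough' to the reference, not identical to it."""
    score = token_jaccard(output, reference)
    assert score >= threshold, f"similarity {score:.2f} below threshold {threshold}"


# A lightly paraphrased answer passes; wording differences are tolerated.
reference = "the cargo manifest lists three flagged containers"
assert_semantically_close("cargo manifest lists three flagged containers", reference)
```

In practice teams often swap the toy Jaccard score for embedding similarity or a rubric-based grader, but the test structure (score plus threshold plus clear failure message) stays the same.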

Requirements

  • PhD -OR- MS/MA -OR- BS/BA and 2 years of relevant experience
  • U.S. Citizenship
  • Ability to obtain and maintain a federal security clearance (Q/SCI)
  • Background Investigation: Applicants selected will be subject to a Federal background investigation and must meet eligibility requirements for access to classified matter in accordance with 10 CFR 710, Appendix B.
  • Drug Testing: All Security Clearance positions are Testing Designated Positions, which means that the applicant selected for hire is subject to pre-employment drug testing, and post-employment random drug testing. In addition, applicants must be able to demonstrate non-use of illegal drugs, including marijuana, for the 12 consecutive months preceding completion of the requisite Questionnaire for National Security Positions (QNSP).
  • Applicants will be considered ineligible for security clearance processing by the U.S. Department of Energy if non-use of illegal drugs, including marijuana, for 12 months cannot be demonstrated.
  • The candidate selected for this position will be subject to pre-employment and random drug testing for illegal drugs, including marijuana, consistent with the Controlled Substances Act and the PNNL Workplace Substance Abuse Program.
  • New employees must successfully complete the applicable tier of federal background investigation post hire and receive a favorable federal adjudication.
  • All tiers of investigation include a declaration of illegal drug activities, including use, supply, possession, or manufacture within the last 1 to 7 years (depending on the applicable tier of investigation). Illegal drug activities include marijuana and cannabis derivatives, which are still considered illegal under federal law, regardless of state laws.
  • If you have not resided in the U.S. for three consecutive years, you are not eligible for the PIV credential and instead will need to obtain a favorable Local Site Specific Only (LSSO) Federal risk determination to maintain employment.
  • If you are offered a position at PNNL and currently have any affiliation with the government of one of these countries, you will be required to disclose this information and recuse yourself of that affiliation or receive approval from DOE and Battelle prior to your first day of employment.

Nice To Haves

  • Degree in Computer Science, Software Engineering, or a related field
  • Experience implementing automated tests using Cypress.io, Playwright, or similar testing frameworks
  • Experience using AI-assisted development tools within an IDE, such as VS Code, to write automated tests and troubleshoot issues
  • Experience in JavaScript and Python programming languages
  • Knowledgeable in using SQL or GraphQL
  • Experience developing software test plans, test cases, and test reports
  • Knowledge of software engineering best practices and software development lifecycles
  • Experience with DevOps and MLOps, including automated tests within CI/CD processes such as GitLab or GitHub
  • 1+ years of experience using AI tools (e.g., Cline, Roo Code) within an IDE to write automated tests and/or troubleshoot issues
  • Familiarity with AI models and assistants such as Claude or GitHub Copilot; knowledgeable in using MCP servers, AI skills, and AI agents
  • Experience in validating models, data, and end-to-end workflows/integrations (APIs, databases, pipelines) using data/model validation plus integration, E2E, and regression testing, including handling non-deterministic outputs and real-world/edge/failure scenarios
  • Experience in assessing AI quality attributes (accuracy, precision/recall, relevance, bias/fairness, robustness/consistency), data quality (completeness, correctness, representativeness, drift, label quality), plus safety/explainability/guardrails and performance (latency, scalability, reliability)
  • Experience with cloud computing (AWS/Azure)
  • Familiar with containerization using Docker/Podman/Kubernetes
  • Strong analytical and troubleshooting skills with attention to detail
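Several items above call for using SQL to validate end-to-end workflows across databases and pipelines. As a rough illustration of what such checks look like, the sketch below runs row-count and referential-integrity queries against an in-memory SQLite stand-in; the schema, table names, and data are invented for the example and would differ in any real system.

```python
import sqlite3

# Stand-in for a real pipeline store: raw input rows and processed output rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_events (id INTEGER PRIMARY KEY, payload TEXT);
    CREATE TABLE processed_events (id INTEGER PRIMARY KEY, raw_id INTEGER, status TEXT);
    INSERT INTO raw_events VALUES (1, 'a'), (2, 'b'), (3, 'c');
    INSERT INTO processed_events VALUES (10, 1, 'ok'), (11, 2, 'ok'), (12, 3, 'ok');
""")

# Check 1: the pipeline did not drop any raw records.
raw_count = conn.execute("SELECT COUNT(*) FROM raw_events").fetchone()[0]
processed_count = conn.execute("SELECT COUNT(*) FROM processed_events").fetchone()[0]
assert raw_count == processed_count, "row-count mismatch between pipeline stages"

# Check 2: referential integrity -- every processed row points at a real raw row.
orphans = conn.execute("""
    SELECT COUNT(*) FROM processed_events p
    LEFT JOIN raw_events r ON p.raw_id = r.id
    WHERE r.id IS NULL
""").fetchone()[0]
assert orphans == 0, f"{orphans} orphaned processed rows"
```

The same count and anti-join queries port directly to production databases; only the connection setup changes.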

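Drift is one of the data-quality signals the role above calls out for evaluation. A minimal, dependency-free sketch of one standard drift measure, the population stability index (PSI), is shown below; the samples, bin count, and the commonly cited 0.25 alert threshold are all assumptions for illustration.

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 4) -> float:
    """Population Stability Index between a baseline sample and a new one.
    Bin edges are derived from the baseline; a small floor avoids
    division by zero for empty bins."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def bin_fractions(sample: list[float]) -> list[float]:
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1  # index of x's bin
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
shifted = [x + 0.5 for x in baseline]
assert psi(baseline, list(baseline)) < 0.1   # identical distribution: no drift
assert psi(baseline, shifted) > 0.25         # material shift: investigate
```

Wired into a CI/CD quality gate, a check like this can fail a pipeline when incoming data diverges from the distribution the model was validated against.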

Benefits

  • medical insurance
  • dental insurance
  • vision insurance
  • robust telehealth care options
  • several mental health benefits
  • free wellness coaching
  • health savings account
  • flexible spending accounts
  • basic life insurance
  • disability insurance
  • employee assistance program
  • business travel insurance
  • tuition assistance
  • relocation
  • backup childcare
  • legal benefits
  • supplemental parental bonding leave
  • surrogacy and adoption assistance
  • fertility support
  • company-funded pension plan
  • 401(k) savings plan with company match
  • 120 vacation hours per year
  • ten paid holidays per year