Software Engineer II

Pacific Northwest National Laboratory
Richland, WA
$109,000 - $163,600
Onsite

About The Position

At PNNL, our core capabilities are organized into major departments that we refer to as Directorates, each focused on a specific area of scientific research or other function and each with its own leadership team and dedicated budget. Our Science & Technology directorates include National Security, Earth and Biological Sciences, Physical and Computational Sciences, and Energy and Environment. In addition, the Environmental Molecular Sciences Laboratory, a Department of Energy Office of Science user facility, is housed on the PNNL campus.

The National Security Directorate (NSD) drives science-based, mission-focused solutions to take on complex, real-world threats to our nation and the world. The AI and Data Analytics Division, part of NSD, combines deep domain expertise with creative integration of advanced hardware and software to deliver computational solutions that address complex data and analytic challenges. Working in multidisciplinary teams, we connect foundational research to engineering to operations, providing the tools to innovate quickly and field results faster. Our strengths span the data analytics lifecycle, from data acquisition and management to analysis and decision support.

Requirements

  • Working proficiency in Python with foundational knowledge of at least one additional programming language (e.g., C#/.NET, Go, or C++) and eagerness to expand language skills
  • Understanding of core software engineering principles including version control with Git (branching, commits, pull requests), basic automated testing (unit tests), and code quality practices (linting, formatting, code review participation)
  • Familiarity with CI/CD concepts and willingness to learn DevOps practices including build automation, deployment pipelines, and continuous integration workflows
  • Foundational knowledge of data structures (arrays, lists, dictionaries, trees) and algorithms (searching, sorting, recursion), and willingness to learn and apply AI-assisted development tools (e.g., GitHub Copilot, Claude, Cursor) to accelerate learning, improve code quality, and build problem-solving skills
  • Foundational knowledge of machine learning concepts including supervised/unsupervised learning, model training, and evaluation metrics with exposure to frameworks such as PyTorch, TensorFlow, or scikit-learn
  • Basic understanding of the machine learning lifecycle including data preparation, model development, evaluation, and awareness of deployment and monitoring practices
  • Exposure to or willingness to learn about large language model (LLM) applications, prompt engineering, and agent-based frameworks (LangChain, LlamaIndex) with ability to support AI/ML feature development
  • Interest in applying ML concepts to real-world problems with eagerness to grow expertise through hands-on project work and mentorship
  • Basic knowledge of cloud computing principles and familiarity with services within AWS, Azure, or GCP environments (compute, storage, networking fundamentals)
  • Exposure to containerization concepts (Docker) with willingness to learn orchestration technologies (Kubernetes) and Infrastructure as Code practices (Terraform, CloudFormation)
  • Understanding of RESTful API principles including HTTP methods, status codes, JSON data exchange, and basic microservice architecture concepts
  • Foundational knowledge of database systems including relational databases (PostgreSQL, MySQL) and/or NoSQL options (MongoDB, DynamoDB) with understanding of when to use each
  • Awareness of cloud-native data pipeline concepts and ETL/ELT principles with exposure to services such as AWS S3, Lambda, Glue or equivalent Azure/GCP services
  • Basic knowledge of cloud-based data storage systems (S3, PostgreSQL, MongoDB) and understanding of different storage paradigms (object storage, relational, document-based)
  • Foundational understanding of distributed computing concepts and exposure to frameworks like Spark, Kafka, or Ray with willingness to learn streaming architectures and parallel processing
  • Knowledge of common data formats (JSON, CSV, Parquet) with basic understanding of schema design principles, data validation, and data quality considerations
  • Ability to collaborate effectively within cross-functional teams including senior engineers, data scientists, and product stakeholders while actively seeking mentorship and learning opportunities
  • Developing communication skills to articulate technical challenges and solutions through clear documentation, team discussions, and willingness to ask clarifying questions
  • Enthusiastic participation in code reviews with openness to constructive feedback, eagerness to learn best practices, and growing ability to provide helpful code review comments
  • Demonstrated ability to incorporate feedback, learn from mistakes, and continuously improve technical skills through peer collaboration, self-study, and hands-on experience
  • U.S. Citizenship
  • Background Investigation: Applicants selected will be subject to a Federal background investigation and must meet eligibility requirements for access to classified matter in accordance with 10 CFR 710, Appendix B.
  • Drug Testing: All Security Clearance positions are Testing Designated Positions, which means that the applicant selected for hire is subject to pre-employment drug testing, and post-employment random drug testing. In addition, applicants must be able to demonstrate non-use of illegal drugs, including marijuana, for the 12 consecutive months preceding completion of the requisite Questionnaire for National Security Positions (QNSP).
  • Note: Applicants will be considered ineligible for security clearance processing by the U.S. Department of Energy if non-use of illegal drugs, including marijuana, for 12 months cannot be demonstrated.
  • This position is a Testing Designated Position (TDP). The candidate selected for this position will be subject to pre-employment and random drug testing for illegal drugs, including marijuana, consistent with the Controlled Substances Act and the PNNL Workplace Substance Abuse Program.
  • Minimum Qualifications: PhD, or MS/MA, or BS/BA and 2 years of relevant experience

Nice To Haves

  • Degree in computer science, software engineering, or related field
  • 2+ years of professional software development experience or relevant internship experience building production-quality software
  • Demonstrated technical contributions through personal projects, open-source contributions, academic projects, or internships showing practical application of software engineering skills
  • Exposure to data processing, ETL pipelines, or analytics systems through coursework, personal projects, or professional experience
  • Experience with any cloud platform (AWS, Azure, GCP) through certifications, coursework, or hands-on projects
  • Programming experience beyond academic settings including hackathons, coding competitions, or personal software projects
  • Strong problem-solving abilities demonstrated through technical challenges, algorithms practice, or project troubleshooting
  • Demonstrated self-directed learning and technical initiative through personal projects, GitHub repositories, open-source contributions, technical blog posts, online course completion, hackathon participation, or active engagement in technical communities showcasing curiosity and motivation beyond academic requirements

Responsibilities

  • Develop components of agentic AI systems and LLM-based applications
  • Implement features using frameworks such as LangChain or LlamaIndex
  • Build and maintain ML pipelines, data preprocessing workflows, and model deployment infrastructure
  • Create utilities and tools that support AI/ML development and operations
  • Work with multi-modal data including text, structured data, and sensor information
  • Build data pipelines for large-scale ETL, transformation, and analytics workflows
  • Implement streaming data processors and event-driven components
  • Develop microservices and APIs within distributed architectures handling high-throughput workloads
  • Deploy containerized applications using Docker and Kubernetes
  • Contribute to CI/CD pipelines and automated testing frameworks
  • Write clean, well-tested code following established best practices
  • Implement monitoring, logging, and observability for applications
  • Build developer tooling and documentation to support team productivity
  • Contribute to system performance optimization and debugging efforts
  • Support deployments in cloud and secure environments
  • Work on small tasks and project elements, progressing to independent ownership
  • Collaborate with cross-functional teams including data scientists, researchers, and senior engineers
  • Participate in code reviews, design discussions, and technical planning
  • Mentor junior staff and students when opportunities arise
  • Contribute technical content to proposals and project documentation
  • Present your work at team meetings and technical forums

Benefits

  • health insurance
  • dental insurance
  • vision insurance
  • robust telehealth care options
  • several mental health benefits
  • free wellness coaching
  • health savings account
  • flexible spending accounts
  • basic life insurance
  • disability insurance
  • employee assistance program
  • business travel insurance
  • tuition assistance
  • relocation assistance
  • backup childcare
  • legal benefits
  • supplemental parental bonding leave
  • surrogacy and adoption assistance
  • fertility support
  • company-funded pension plan
  • 401(k) savings plan with company match
  • 120 vacation hours per year
  • ten paid holidays per year