Oak Ridge National Laboratory · posted 2 months ago
Full-time • Senior
Oak Ridge, TN
5,001-10,000 employees
Professional, Scientific, and Technical Services

We are hiring a Senior Linux HPC Storage Engineer to design, operate, and maintain the storage services supporting the clusters, servers, and workstations where science happens at ORNL! This position resides in the Emerging Technologies & Computing team within the Research Computing group of the Information Technology Services Directorate at Oak Ridge National Laboratory (ORNL). The team advances ORNL's goals through HPC systems engineering, integration, and support for the research community at ORNL. By providing design, deployment, optimization, monitoring, and tooling support across multiple clustered infrastructures, we facilitate Lab-wide R&D projects. Our HPC clusters range in scope from a handful of nodes to over fifty thousand cores. We partner with ORNL research organizations to enable research excellence and delivery, and we work with other clustered computing and HPC groups to help research programs identify the best solutions for their needs. When we build our customers' environments, our team collaborates to design, implement, and maintain the systems from inception to retirement.

Responsibilities:

  • Architect, deploy, and manage large-scale HPC storage systems, including parallel file systems such as Lustre, GPFS/Spectrum Scale, BeeGFS, and WEKA.
  • Design, implement, and operate large-scale Ceph storage clusters for HPC and research workloads, delivering reliable, high-performance object, block, and file storage services.
  • Ensure the availability, performance, scalability, and security of production storage environments.
  • Administer and optimize enterprise storage platforms such as Qumulo and NetApp in support of HPC and research workloads.
  • Design, deploy, and maintain archival storage solutions including Spectra Logic BlackPearl and large-scale tape libraries to ensure long-term data preservation and accessibility.
  • Integrate high-performance, enterprise, and archival storage layers into cohesive tiered storage architectures that balance cost, scalability, and performance for diverse scientific workflows.
  • Leverage automation and monitoring solutions to minimize day-to-day maintenance while identifying opportunities to optimize system performance and management.
  • Collaborate with researchers and technical POCs to support large data workflows and optimize I/O performance for scientific workloads.
  • Automate storage provisioning, monitoring, and maintenance using scripting and configuration management tools.
  • Diagnose and resolve complex storage and I/O-related issues in high-throughput, low-latency HPC environments.
  • Evaluate emerging storage technologies (NVMe, object storage, hierarchical storage management, burst buffers) and contribute to strategic planning for future HPC systems.
  • Work with 24/7 operations staff to streamline monitoring and troubleshooting, significantly reducing the need for off-hours support.
  • Deliver ORNL's mission by aligning behaviors, priorities, and interactions with our core values of Impact, Integrity, Teamwork, Safety, and Service.
Qualifications:

  • A BS degree in computer science, computer engineering, information technology, information systems, science, engineering, business, or a related discipline and eight (8) to twelve (12) years of aligned professional experience are required for consideration.
  • Five (5) or more years of experience managing UNIX/Linux systems.
  • Demonstrated experience managing HPC storage and large-scale enterprise storage systems.
  • Three (3) or more years working with configuration management and automation tools such as Git, Jenkins, Ansible, or Puppet.
  • Proficiency with at least one scripting language (Bash, Python, Perl, etc.).
  • Strong Linux administration and advanced troubleshooting experience.
  • Experience supporting large data systems and/or HPC scientific workloads.
  • Strong desire to innovate and evaluate new technologies for HPC and storage environments.
  • Collaborative approach and ability to become a trusted advisor to research teams.
  • Active DOE Q, DoD Top Secret, or TS/SCI clearance is strongly preferred.
  • Solid understanding of multiple operating systems and HPC cluster technologies.
  • Experience with Rocky/CentOS/RHEL, Ubuntu, VMware.
  • Understanding of HPC job schedulers (SLURM) and user support workflows.
  • Experience with container technologies in HPC environments.
  • Experience with multiple system deployment mechanisms (Warewulf, PXE boot, Cobbler, Bright).
  • Experience with GPU clusters (NVIDIA, AMD) for AI/ML and scientific workloads.
  • Deep expertise with high-performance parallel file systems (Lustre, GPFS/Spectrum Scale, BeeGFS, WEKA).
  • Knowledge of storage networking (InfiniBand, NVMe-oF, SAN/NAS architectures).
  • Familiarity with RAID, ZFS, and object storage technologies.
  • Strong background in performance monitoring, benchmarking, and I/O optimization.
  • Experience with monitoring systems such as Grafana, CheckMK, Nagios, Zabbix, Ganglia.
  • Previous experience working in a government, scientific, or other highly technical environment.
  • Strong documentation skills and ability to prepare web-based documentation.
Benefits:

  • Prescription Drug Plan
  • Dental Plan
  • Vision Plan
  • 401(k) Retirement Plan
  • Contributory Pension Plan
  • Life Insurance
  • Disability Benefits
  • Generous Vacation and Holidays
  • Parental Leave
  • Legal Insurance with Identity Theft Protection
  • Employee Assistance Plan
  • Flexible Spending Accounts
  • Health Savings Accounts
  • Wellness Programs
  • Educational Assistance
  • Relocation Assistance
  • Employee Discounts