Senior/Principal Data Scientist - Artificial Intelligence - Hybrid

Sandia Corporation - Albuquerque, NM (Hybrid)

About The Position

Sandia's artificial intelligence (AI) team is building the U.S. Department of Energy's (DOE) next-generation AI Platform, an integrated scientific AI capability that delivers rapid, high-impact solutions for national security, science, and applied energy missions. The Platform is based on three pillars: Models, Infrastructure, and Data.

You will join the Data Pillar team to design, implement, and operate Sandia's AI-ready, zero-trust data ecosystem. Your work will transform raw simulation outputs, sensor and facility logs, experimental records, and production data into governed, provenance-tracked, and access-controlled datasets that power AI models, autonomous agents, and mission workflows across DOE's HPC, cloud, and edge environments.

We anticipate multiple hires for the Data Pillar that collectively span the responsibilities and skills described below. New hires will work alongside existing Sandia staff and teams from other DOE laboratories to deliver on this ambitious, fast-paced project. While AI Platform development will leverage existing AI and data science tools extensively, success will also require considerable innovation and problem solving to address the unique needs of DOE applications.

You will be part of a multi-disciplinary, mission-focused team delivering foundational data capabilities for transformative AI systems in national security, energy, and critical materials. Occasional travel may be required. The selected applicant may work a combination of onsite and offsite work but must live within a reasonable commuting distance of the assigned work location. If building the data backbone for next-generation AI at scale sounds like an exciting challenge, we look forward to reading your application!

Requirements

  • Bachelor's degree in Computer Science, Data Science, Statistics, or a related STEM field, plus five (5) years of directly relevant experience, or an equivalent combination of education and experience
  • Ability to acquire and maintain a DOE Q clearance

Nice To Haves

  • Graduate degree (M.S. or Ph.D.) in Data Science, Informatics, Statistics, or a related STEM field with a significant data research component, where an independent research project (e.g., thesis or dissertation) was a graduation requirement
  • Experience developing software for enterprise and national security applications
  • Experience acquiring, preparing, and analyzing real-world data
  • Demonstrated software development skills and familiarity with modern software development practices
  • Proven ability to work and communicate effectively in a collaborative and interdisciplinary team environment, guiding technical decisions and mentoring junior staff
  • Background in AI-mediated data curation: automated annotation, feature extraction, and dataset certification
  • Hands-on knowledge of data security and zero-trust principles, including secure enclaves, attribute-based access control, and data masking or differential privacy (see the ABAC sketch after this list)
  • Familiarity with FAIR (Findable, Accessible, Interoperable, Reusable) data practices
  • Curating and managing scientific or engineering datasets
  • Data architecture for HPC and edge-computing environments
  • Advanced data fusion techniques for heterogeneous and streaming data sources
  • Building data pipelines for feature stores, experiment tracking, and model drift monitoring
  • Designing and enforcing data policies for classified, export-controlled, or proprietary data
  • Collaborating on public-private partnerships or multi-lab federated data efforts
  • Demonstrated expertise in building and maintaining production data pipelines (ETL/ELT) and data warehouses or data lakes
  • Proficiency in programming languages such as Python and SQL, and experience with frameworks like Apache Spark or Dask
  • Familiarity with cloud platforms (AWS, Azure, or GCP) and container orchestration (Kubernetes)
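
The ABAC sketch referenced above is a minimal, hypothetical policy check in Python: the clearance lattice, attribute names, and rule are illustrative assumptions, not Sandia's actual policy model.

```python
# Minimal attribute-based access control (ABAC) sketch.
# All attribute names and the toy clearance lattice are hypothetical.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Subject:
    clearance: str          # e.g., "Q", "L", "uncleared"
    need_to_know: frozenset = field(default_factory=frozenset)


@dataclass(frozen=True)
class Resource:
    classification: str     # e.g., "public", "CUI", "RD"
    compartments: frozenset = field(default_factory=frozenset)


# Toy ordering: a higher rank dominates a lower one.
CLEARANCE_RANK = {"uncleared": 0, "L": 1, "Q": 2}
REQUIRED_RANK = {"public": 0, "CUI": 1, "RD": 2}


def is_access_allowed(subject: Subject, resource: Resource) -> bool:
    """Allow access only if the clearance dominates the resource's level
    and the subject holds every compartment tag on the resource."""
    rank_ok = CLEARANCE_RANK[subject.clearance] >= REQUIRED_RANK[resource.classification]
    compartments_ok = resource.compartments <= subject.need_to_know
    return rank_ok and compartments_ok


if __name__ == "__main__":
    analyst = Subject(clearance="Q", need_to_know=frozenset({"grid"}))
    dataset = Resource(classification="CUI", compartments=frozenset({"grid"}))
    print(is_access_allowed(analyst, dataset))  # True
```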

Responsibilities

  • Build and operate an AI-Ready Lakehouse
      • Design and maintain a federated data lakehouse with full provenance/versioning, attribute-based access control, license/consent automation, and agent telemetry services
      • Implement automated, AI-mediated ingestion pipelines for heterogeneous sources (HPC simulation outputs, experimental instruments, robotics, sensor streams, satellite imagery, production logs)
  • Enforce Data Security & Assurance
      • Develop a Data Health & Threat program: dataset fingerprinting, watermarking, poisoning/anomaly detection, red-team sampling, and reproducible training manifests (see the manifest sketch below)
      • Configure secure enclaves and egress processes for CUI, Restricted Data, and other sensitive corpora, with attestation and differential privacy where required
  • Define and Implement Data Governance
      • Establish FAIR-compliant metadata standards, data catalogs, and controlled-vocabulary ontologies
      • Automate lineage tracking, quality checks, schema validation, and leak controls at record-level granularity (see the validation sketch below)
  • Instrument AI Workflows with Standardized Telemetry
      • Deploy Agent Trace Schema (ATS) and Agent Run Record (ARR) frameworks to log tool calls, decision graphs, human hand-offs, and environment observations (see the logging sketch below)
      • Treat agent-generated artifacts (plans, memory, configurations) as first-class data objects
  • Collaborate Across Pillars
      • Work with the Models and Interfaces teams to integrate data services into training, evaluation, and inference pipelines
      • Partner with Infrastructure engineers to optimize data movement, tiered storage, and high-bandwidth networking (ESnet) between HPC, cloud, and edge environments
      • Engage domain scientists and mission leads on agile deterrence, energy grid, and critical minerals use cases to curate problem-specific datasets
  • Support Continuous Acquisition & Benchmarking
      • Design edge-to-scale data acquisition systems with robotics and instrument integration
      • Develop data/AI benchmarks: datasets, tools, and metrics for pipeline performance, model evaluation, and mission KPIs

Representative near-term tasks include:

  • Author an AI-mediated parser for a new experimental instrument that automatically extracts and catalogs metadata
  • Implement an attribute-based policy that blocks unapproved data combinations in a classified enclave
  • Prototype a streaming pipeline that feeds live sensor data from a nuclear facility into an HPC training queue
  • Develop a dashboard that alerts on data drift, pipeline failures, or anomalous records (see the drift sketch below)
  • Collaborate with MLOps engineers to version datasets alongside model artifacts in CI/CD
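
The sketches below exist only to make a few of the responsibilities above concrete; every path, field name, and threshold in them is an illustrative assumption, not a description of Sandia's actual systems. First, a reproducible training manifest of the kind named in the Data Health & Threat bullet might fingerprint each dataset file with a streaming SHA-256 hash:

```python
# Hypothetical sketch: fingerprint every file in a dataset directory and
# emit a reproducible training manifest. Paths and fields are illustrative.
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large files never sit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def build_manifest(dataset_dir: str) -> dict:
    root = Path(dataset_dir)
    files = sorted(p for p in root.rglob("*") if p.is_file())
    return {
        "dataset": root.name,
        "files": [
            {"path": str(p.relative_to(root)), "sha256": sha256_of(p), "bytes": p.stat().st_size}
            for p in files
        ],
    }


if __name__ == "__main__":
    manifest = build_manifest("data/sensor_logs")  # hypothetical path
    print(json.dumps(manifest, indent=2))
```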
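
Record-level schema validation could be wired into an ingestion pipeline with an off-the-shelf validator such as the jsonschema library; the record schema here is purely illustrative:

```python
# Hypothetical sketch of record-level schema validation at ingestion time.
from jsonschema import Draft7Validator

RECORD_SCHEMA = {
    "type": "object",
    "required": ["sensor_id", "timestamp", "value"],
    "properties": {
        "sensor_id": {"type": "string"},
        "timestamp": {"type": "number"},
        "value": {"type": "number"},
    },
    "additionalProperties": False,
}

validator = Draft7Validator(RECORD_SCHEMA)


def validate_record(record: dict) -> list[str]:
    """Return human-readable violations; an empty list means the record passes."""
    return [error.message for error in validator.iter_errors(record)]


if __name__ == "__main__":
    good = {"sensor_id": "A-17", "timestamp": 1.7e9, "value": 3.2}
    bad = {"sensor_id": "A-17", "value": "high"}
    print(validate_record(good))  # []
    print(validate_record(bad))   # missing timestamp; value is not a number
```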
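
The Agent Trace Schema (ATS) and Agent Run Record (ARR) frameworks are named in this posting but not defined here, so the minimal run-record logger below is only a guess at the shape such telemetry might take: tool calls and human hand-offs appended as replayable JSON-lines records.

```python
# Hypothetical agent run-record logger; every field is an assumption, not
# the real ATS/ARR schema.
import json
import time
import uuid
from dataclasses import dataclass, asdict


@dataclass
class AgentEvent:
    run_id: str
    step: int
    kind: str        # assumed kinds: "tool_call", "decision", "human_handoff"
    payload: dict
    timestamp: float


class RunRecorder:
    """Append agent events to a JSON-lines file so each run is replayable."""

    def __init__(self, log_path: str):
        self.run_id = str(uuid.uuid4())
        self.step = 0
        self.log_path = log_path

    def log(self, kind: str, payload: dict) -> None:
        event = AgentEvent(self.run_id, self.step, kind, payload, time.time())
        with open(self.log_path, "a", encoding="utf-8") as handle:
            handle.write(json.dumps(asdict(event)) + "\n")
        self.step += 1


if __name__ == "__main__":
    recorder = RunRecorder("agent_run.jsonl")  # hypothetical log path
    recorder.log("tool_call", {"tool": "sql_query", "args": {"table": "sensor_readings"}})
    recorder.log("human_handoff", {"reason": "policy review required"})
```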
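
Finally, the kind of drift alert a monitoring dashboard might raise can be sketched with a two-sample Kolmogorov-Smirnov test from SciPy; the alert threshold and the synthetic data are assumptions:

```python
# Hypothetical data-drift check: compare a new batch against a reference
# sample with a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

ALERT_P_VALUE = 0.01  # assumed alerting threshold


def drift_alert(reference: np.ndarray, batch: np.ndarray) -> bool:
    """Return True when the batch distribution differs significantly."""
    result = ks_2samp(reference, batch)
    return result.pvalue < ALERT_P_VALUE


if __name__ == "__main__":
    rng = np.random.default_rng(seed=0)
    reference = rng.normal(0.0, 1.0, size=5000)      # stand-in training data
    drifted = rng.normal(0.5, 1.0, size=5000)        # mean-shifted sensor batch
    print(drift_alert(reference, reference[:2500]))  # False: same distribution
    print(drift_alert(reference, drifted))           # True: drift detected
```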

Benefits

  • Generous vacation, strong medical and other benefits, competitive 401(k), learning opportunities, relocation assistance, and amenities aimed at creating a solid work/life balance
  • Flexible work arrangements for many positions include 9/80 (work 80 hours every two weeks, with every other Friday off) and 4/10 (work 4 ten-hour days each week) compressed workweeks, part-time work, and telecommuting (a mix of onsite work and working from home)

What This Job Offers

  • Job Type: Full-time
  • Career Level: Senior
  • Industry: National Security and International Affairs
  • Number of Employees: 5,001-10,000
