About The Position

PsiQuantum’s mission is to build the first useful quantum computers—machines capable of delivering the breakthroughs the field has long promised. Since our founding in 2016, our singular focus has been to build and deploy million-qubit, fault-tolerant quantum systems. Quantum computers harness the laws of quantum mechanics to solve problems that even the most advanced supercomputers or AI systems will never reach. Their impact will span energy, pharmaceuticals, finance, agriculture, transportation, materials, and other foundational industries.

Our architecture and approach are based on silicon photonics. By leveraging the advanced semiconductor manufacturing industry—including partners like GlobalFoundries—we use the same high-volume processes that already produce billions of chips for telecom and consumer electronics. Photonics offers natural advantages for scale: photons don’t feel heat, are immune to electromagnetic interference, and integrate with existing cryogenic cooling and standard fiber-optic infrastructure.

In 2024, PsiQuantum announced government-funded projects to support the build-out of our first utility-scale quantum computers in Brisbane, Australia, and Chicago, Illinois. These initiatives reflect a growing recognition that quantum computing will be strategically and economically defining—and that now is the time to scale.

PsiQuantum also develops the algorithms and software needed to make these systems commercially valuable. Our application, software, and industry teams work directly with leading Fortune 500 companies—including Lockheed Martin, Mercedes-Benz, Boehringer Ingelheim, and Mitsubishi Chemical—to prepare quantum solutions for real-world impact.

Quantum computing is not an extension of classical computing. It represents a fundamental shift—and a path to mastering challenges that cannot be solved any other way. The potential is enormous, and we have a clear path to make it real. Come join us.
Job Summary: The intern will support a team building and operationalizing a scalable scientific computing workflow that connects domain-specific data preparation, algorithmic processing, and downstream analysis. Working closely with researchers and engineers, the intern will help improve reliability, portability across compute environments, and usability through automation, validation, and measurement tooling.

Requirements

  • Enrolled in a BS/MS/PhD in CS, EE, Physics, Chemistry, Applied Math, or related field.
  • Strong Python software engineering: clean APIs, testing, packaging, logging, configuration management.
  • Familiarity with quantum computing applications/algorithms (resource estimation, simulation, or related) sufficient to validate computational outputs.
  • Solid scientific computing foundations (linear algebra, numerical methods, data handling).
  • Experience with quantum chemistry workflows (molecular geometries, integral formats, active space concepts, common solver outputs).

Nice To Haves

  • Experience with HPC environments (batch schedulers, containers/modules, scaling, reproducible runs).
  • Experience designing benchmark harnesses and performance profiling.
  • Experience with cloud object storage and data pipelines (S3-style storage, provenance/metadata, artifact versioning).

Responsibilities

  • Contribute to hardening a chemistry-to-quantum resource estimation pipeline: robustness, scalability, reproducibility, and “push-button” usability.
  • Implement and validate stable interfaces/data contracts between core modules (inputs/outputs/metadata/schema checks).
  • Help automate execution of a predefined benchmark suite across both cloud GPU environments and internal HPC clusters (job submission, retries, artifact collection, deterministic configs).
  • Support expansion of an end-to-end scientific computation workflow by implementing integration layers, validation, and test coverage for new upstream inputs, additional computational backends, and alternative algorithmic paths within a single, consistent execution framework.
  • Implement a representative application workflow that exercises the pipeline across a structured set of related inputs (parameter sweeps), automates repeated executions, aggregates outputs into analysis-ready artifacts, and documents key assumptions, approximations, and major sources of error/uncertainty.
  • Implement secure, automated sharing of HPC outputs to cloud object storage (e.g., S3) for downstream analysis and collaboration.
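To give a flavor of the data-contract work described above, here is a minimal sketch of a schema check between two pipeline stages. The field names, types, and `StageOutput` class are illustrative assumptions, not PsiQuantum's actual interfaces.

```python
from dataclasses import dataclass


# Hypothetical contract for one stage's output; the actual pipeline's
# fields and metadata conventions would differ.
@dataclass(frozen=True)
class StageOutput:
    molecule_id: str
    energy: float
    metadata: dict


REQUIRED_FIELDS = {"molecule_id": str, "energy": float, "metadata": dict}


def validate_record(record: dict) -> StageOutput:
    """Check a raw record against the contract before handing it downstream."""
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in record:
            raise ValueError(f"missing required field: {field}")
        if not isinstance(record[field], expected_type):
            raise TypeError(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(record[field]).__name__}"
            )
    return StageOutput(**record)
```

Validating at stage boundaries like this is what lets errors surface where they originate rather than several modules downstream.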
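The benchmark-automation bullet mentions job submission with retries; the kind of helper involved might look like this sketch of exponential-backoff retry logic. The function name and parameters are illustrative; the same pattern applies to scheduler submission or artifact upload to object storage.

```python
import time


def with_retries(fn, attempts=3, base_delay=1.0):
    """Call fn(), retrying transient failures with exponential backoff.

    Waits base_delay, 2*base_delay, 4*base_delay, ... between attempts
    and re-raises the last exception once attempts are exhausted.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```

In practice one would narrow the caught exception types (e.g. network or scheduler errors) so genuine bugs still fail fast.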