Intern, Software Engineering

PsiQuantum · Palo Alto, CA

About The Position

PsiQuantum’s mission is to build the first useful quantum computers—machines capable of delivering the breakthroughs the field has long promised. Since our founding in 2016, our singular focus has been to build and deploy million-qubit, fault-tolerant quantum systems. Quantum computers harness the laws of quantum mechanics to solve problems that even the most advanced supercomputers or AI systems will never reach. Their impact will span energy, pharmaceuticals, finance, agriculture, transportation, materials, and other foundational industries.

Our architecture and approach are based on silicon photonics. By leveraging the advanced semiconductor manufacturing industry—including partners like GlobalFoundries—we use the same high-volume processes that already produce billions of chips for telecom and consumer electronics. Photonics offers natural advantages for scale: photons don’t feel heat, are immune to electromagnetic interference, and integrate with existing cryogenic cooling and standard fiber-optic infrastructure.

In 2024, PsiQuantum announced government-funded projects to support the build-out of our first utility-scale quantum computers in Brisbane, Australia, and Chicago, Illinois. These initiatives reflect a growing recognition that quantum computing will be strategically and economically defining—and that now is the time to scale.

PsiQuantum also develops the algorithms and software needed to make these systems commercially valuable. Our application, software, and industry teams work directly with leading Fortune 500 companies—including Lockheed Martin, Mercedes-Benz, Boehringer Ingelheim, and Mitsubishi Chemical—to prepare quantum solutions for real-world impact. Quantum computing is not an extension of classical computing. It represents a fundamental shift—and a path to mastering challenges that cannot be solved any other way. The potential is enormous, and we have a clear path to make it real. Come join us.
Job Summary

This internship sits within the Quantum Applications Software Architecture team. The intern will work with software engineers across the company and department to build production-grade software solutions. They will help build and harden an internal automation platform that lets researchers run quantum-application workloads end to end, with reproducible lineage and centralized storage. A major part of the internship is improving the reliability, governance, and usability of the data/compute platform (Databricks plus AWS-backed storage), while also contributing to benchmarking and algorithm-adjacent components of the quantum workflow.

Requirements

  • Currently pursuing BS/MS/PhD in Computer Science, Engineering, Physics, Math, or related field.
  • Strong Python skills, including clean API design, OOP, packaging, and testing.
  • Working knowledge of distributed/remote compute concepts (jobs, clusters, queues) and cloud storage fundamentals.
  • Familiarity with quantum computing applications/algorithms (resource estimation, quantum simulation, or adjacent numerical methods); comfort with linear algebra and scientific computing.

Nice To Haves

  • Databricks experience (Jobs API/SDK, workspace organization, catalog/permissions patterns).
  • AWS experience (S3, IAM/credentials handling, boto3-style tooling, cost monitoring patterns).
  • Experience with scientific Python stacks and GPU compute (JAX/NumPy/SciPy; performance profiling).
  • Exposure to quantum chemistry workflows, tensor factorization, or benchmark-driven research engineering.

Responsibilities

  • Data platform governance and reliability
      • Implement governance patterns for datasets and run outputs (naming, lineage, access boundaries, catalog/volume organization).
      • Improve dataset-upload validation and guardrails to prevent accidental modification of unrelated storage paths and to enforce consistent metadata and file structure.
  • Monitoring, usage, and cost visibility
      • Build monitoring and reporting for compute usage and cost drivers (job frequency, runtime, GPU-utilization proxies, storage growth, auto-termination effectiveness).
      • Deliver dashboards that make platform health and spend understandable to both engineers and researchers.
  • Job orchestration and software quality
      • Refactor existing job setup and submission scripts into a maintainable, testable OOP design (clear interfaces, configuration objects, reusable clients).
      • Improve workflow parameter handling for the two-stage pipeline (tensor-factorization stage and QRE stage) and standardize outputs for downstream analysis.
  • Developer experience simplification
      • Reduce onboarding friction by abstracting authentication and setup into a single, ergonomic path (e.g., a CLI/Python entry point that validates auth, environment, and required dependencies).
      • Replace “follow the guide manually” with automation: preflight checks, actionable errors, and self-serve setup validation.
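To give a concrete flavor of the upload-guardrail work described above, here is a minimal sketch of pre-upload validation. All names (the allowed prefix, the required metadata fields, the `UploadRequest` shape) are hypothetical stand-ins, not PsiQuantum's actual tooling.

```python
from dataclasses import dataclass

ALLOWED_PREFIX = "datasets/"                       # assumed storage namespace for run outputs
REQUIRED_METADATA = {"owner", "run_id", "created_at"}  # assumed metadata contract


@dataclass
class UploadRequest:
    """A candidate dataset upload: destination path plus attached metadata."""
    path: str
    metadata: dict


def validate_upload(req: UploadRequest) -> list[str]:
    """Return human-readable problems; an empty list means 'safe to upload'."""
    problems = []
    # Guardrail 1: never write outside the sanctioned namespace.
    if not req.path.startswith(ALLOWED_PREFIX):
        problems.append(f"path {req.path!r} is outside {ALLOWED_PREFIX!r}")
    # Guardrail 2: reject path traversal that could touch unrelated storage.
    if ".." in req.path.split("/"):
        problems.append(f"path {req.path!r} contains '..'")
    # Guardrail 3: enforce consistent metadata before anything is written.
    missing = REQUIRED_METADATA - req.metadata.keys()
    if missing:
        problems.append(f"missing metadata fields: {sorted(missing)}")
    return problems


ok = UploadRequest("datasets/qre/run42.parquet",
                   {"owner": "alice", "run_id": "42", "created_at": "2024-06-01"})
bad = UploadRequest("../shared/other.parquet", {"owner": "alice"})
print(validate_upload(ok))        # -> []
print(validate_upload(bad))       # three problems: bad prefix, '..', missing metadata
```

The key design point is that validation returns all problems at once rather than failing on the first, which is what makes errors actionable for researchers.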
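The "refactor scripts into a testable OOP design" bullet can be illustrated with a small sketch: a frozen configuration object, a narrow client interface, and an in-memory test double standing in for a real Jobs API. Every class and field name here is hypothetical; the real two-stage pipeline will look different.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field


@dataclass(frozen=True)
class JobConfig:
    """Configuration object replacing ad-hoc script arguments (names illustrative)."""
    name: str
    entry_point: str
    params: dict = field(default_factory=dict)


class JobClient(ABC):
    """Narrow interface so real and fake backends are interchangeable in tests."""
    @abstractmethod
    def submit(self, config: JobConfig) -> str: ...


class InMemoryJobClient(JobClient):
    """Test double: records submissions instead of calling a remote Jobs API."""
    def __init__(self):
        self.submitted: list[JobConfig] = []

    def submit(self, config: JobConfig) -> str:
        self.submitted.append(config)
        return f"run-{len(self.submitted)}"


class Pipeline:
    """Two-stage workflow: the first stage's run id feeds the second for lineage."""
    def __init__(self, client: JobClient):
        self.client = client

    def run(self, factor_cfg: JobConfig, qre_cfg: JobConfig) -> list[str]:
        first = self.client.submit(factor_cfg)
        # Thread the upstream run id into downstream params for reproducible lineage.
        chained = JobConfig(qre_cfg.name, qre_cfg.entry_point,
                            {**qre_cfg.params, "upstream_run": first})
        return [first, self.client.submit(chained)]


client = InMemoryJobClient()
runs = Pipeline(client).run(
    JobConfig("factorize", "factor.main"),
    JobConfig("resource-estimate", "qre.main"),
)
print(runs)  # -> ['run-1', 'run-2']
```

Because `Pipeline` depends only on the `JobClient` interface, the same orchestration code can be unit-tested against `InMemoryJobClient` and deployed against a production client.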
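Finally, the "preflight checks, actionable errors" bullet might look roughly like this: collect every environment and dependency problem in one pass and report them together. The variable names and package list are assumptions for illustration only.

```python
import importlib.util
import os

# Hypothetical requirements -- a real platform's checklist will differ.
REQUIRED_ENV = ["DATABRICKS_HOST", "DATABRICKS_TOKEN"]
REQUIRED_PACKAGES = ["json"]  # stand-in; a real check might list e.g. boto3


def preflight() -> list[str]:
    """Collect actionable error messages instead of failing on the first step."""
    errors = []
    for var in REQUIRED_ENV:
        if not os.environ.get(var):
            errors.append(f"{var} is not set -- export it or run the auth helper")
    for pkg in REQUIRED_PACKAGES:
        if importlib.util.find_spec(pkg) is None:
            errors.append(f"package {pkg!r} is missing -- pip install {pkg}")
    return errors


problems = preflight()
for p in problems:
    print("ERROR:", p)
print("preflight OK" if not problems else f"{len(problems)} problem(s) found")
```

A real CLI entry point would exit nonzero when problems remain; the point of the sketch is that every message tells the user what to do next, replacing "follow the guide manually."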