Who we are and what we do

The Quantum Performance Laboratory (QPL) is a research and development (R&D) group within Sandia National Laboratories.  We develop and deploy cutting-edge techniques for assessing the performance of quantum computing hardware, serving the needs of the U.S. government, industry, and academia.  Our research produces:

  • insight into the failure mechanisms of real-world quantum computing processors,
  • well-motivated metrics of low- and high-level performance,
  • predictive models of multi-qubit quantum processors, and
  • concrete, tested protocols for evaluating as-built experimental processors.

We develop, maintain, and support the open-source pyGSTi software package, which provides an extensive suite of cutting-edge tools and algorithms for evaluating individual qubits and many-qubit processors.

We collaborate with a wide range of partners in industry and academia to develop new performance assessment tools and apply them to newly developed quantum computing platforms, and we publish our research results in scientific journals including Nature Communications, Physical Review X, and Physical Review Letters.  See our events and engagement page for more details.


In addition to our R&D work, the QPL provides quantum hardware assessment capabilities directly to the Department of Energy (DOE) and the broader U.S. government.

 

QPL Announcements

July, 2020: The Quantum Performance Lab is excited to announce that Kevin Young has received a prestigious DOE Early Career award for proposed research in “Quantum Performance Enhancement”.  Dr. Young and the QPL plan to leverage our work on characterizing quantum devices to develop innovative techniques for debugging, tuning, and calibrating quantum hardware in situ.

July, 2020: The Quantum Performance Lab is growing!  We are seeking to hire postdoctoral scholars. If you have (or are about to get) a PhD in a STEM field related to quantum information science, and are excited about inventing new techniques to measure or improve the performance of quantum computing hardware, we’d love to hear from you.

 

QPL research products

Gate set tomography

The QPL is the primary developer of gate set tomography (GST), the only tool for comprehensive, self-calibrating, and high-precision reconstruction of a full set of quantum logic gates. GST is the cornerstone of our pyGSTi software. To read more about GST, see our article on using it to characterize and inform improvements to an ion-trap qubit at Sandia.

GST was originally a tool for characterizing one- or two-qubit gate sets, but we are actively extending and adapting GST to solve many important problems in quantum computer characterization. We are extending GST to more qubits, to physics-based error models, to gate sets containing mid-circuit measurements, and to time-resolved tracking of quantum gates. Many of these cutting-edge techniques are available in pyGSTi.
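As a rough illustration of why GST achieves high precision, here is a minimal numpy sketch (illustrative only, not pyGSTi code): repeating a gate many times amplifies a tiny coherent error into a large, easily measured deviation, and GST's long, structured circuits exploit exactly this amplification.

```python
import numpy as np

def rx(theta):
    """Single-qubit rotation about the X axis, as a 2x2 unitary."""
    return np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                     [-1j * np.sin(theta / 2), np.cos(theta / 2)]])

def survival_prob(gate, reps):
    """Probability of measuring |0> after applying `gate` reps times to |0>."""
    psi = np.array([1.0, 0.0], dtype=complex)
    amp = (np.linalg.matrix_power(gate, reps) @ psi)[0]
    return abs(amp) ** 2

ideal = rx(np.pi / 2)            # ideal pi/2 rotation
noisy = rx(np.pi / 2 + 0.01)     # 10 mrad over-rotation, invisible in one shot

# Repetition turns the tiny over-rotation into a large signal: this is the
# mechanism behind GST's high-precision estimates of gate parameters.
for reps in (4, 64, 256):
    print(reps, survival_prob(ideal, reps), survival_prob(noisy, reps))
```

At four repetitions the noisy gate is nearly indistinguishable from the ideal one, but by 256 repetitions the survival probability has collapsed, making the 10 mrad error easy to estimate from counts.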

Benchmarking quantum computers

There are many aspects of quantum computer performance, from the rate at which errors occur to programmability and connectivity constraints. We research and develop benchmarking methods that measure these diverse aspects of performance.

Randomized benchmarking (RB) is a widely used method for measuring the average performance of a set of quantum gates. At the QPL, we developed direct randomized benchmarking, a technique that scales to more qubits than the industry-standard RB method. Since then, we've developed an even more streamlined RB method that can be used to benchmark hundreds of qubits, as well as time-resolved RB.
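The standard RB analysis fits measured survival probabilities to an exponential decay, P(m) = A·p^m + B, and converts the decay constant p into an average error rate. The sketch below is a toy simulation and fit on synthetic data (it illustrates the decay-fitting step only, not the direct-RB protocol itself):

```python
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, p, B):
    """Standard RB decay model: P(m) = A * p**m + B, for benchmark depth m."""
    return A * p ** m + B

rng = np.random.default_rng(0)
depths = np.array([1, 2, 4, 8, 16, 32, 64, 128])
true_p = 0.98

# Synthetic mean survival probabilities with small statistical noise.
probs = 0.5 * true_p ** depths + 0.5 + rng.normal(0, 0.002, depths.size)

(A, p, B), _ = curve_fit(rb_decay, depths, probs, p0=[0.5, 0.9, 0.5])
r = (1 - p) / 2   # single-qubit average error rate per gate layer
print(f"fitted p = {p:.4f}, error rate r = {r:.5f}")
```

The fit recovers p close to the simulated value of 0.98, giving an error rate near 1%; on real hardware the `probs` array would come from averaging many random circuits at each depth.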

The QPL introduced volumetric benchmarking, which is a framework for diverse and informative benchmarking of quantum computers. Recently, we’ve used this framework to demonstrate scalable benchmarking of real quantum computing hardware.
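As a toy illustration of the volumetric framework (not the QPL's actual protocol), the sketch below builds a grid over circuit shapes (width, depth) and marks which shapes a processor "passes"; the per-circuit success probability (1 - eps)**(w * d) is an assumed stand-in for real benchmark data:

```python
import numpy as np

# Hypothetical per-gate-layer error rate and pass threshold (assumed values).
eps = 0.01
threshold = 2 / 3
widths = [1, 2, 4, 8]
depths = [1, 4, 16, 64]

# A volumetric benchmark records, for each circuit shape, whether the
# processor meets the success criterion; the pass/fail boundary traces out
# the processor's capability region.
grid = {(w, d): (1 - eps) ** (w * d) >= threshold
        for w in widths for d in depths}

for w in widths:
    print(w, ["pass" if grid[(w, d)] else "fail" for d in depths])
```

With these assumed numbers, circuits succeed roughly while width × depth stays below ~40 layers' worth of gates, so the capability region shrinks in depth as width grows.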

Diagnosing complex sources of error

Real quantum computers can suffer from many types of complex errors or noise that do not fit into the framework underlying most quantum computing characterization techniques. At the QPL, we develop theory to understand these subtle and complex errors, and techniques to diagnose them.

Quantum computers often suffer from unwanted interactions between different parts of the quantum computer — which is known as “crosstalk”. We have developed a theory for understanding crosstalk errors, and a suite of techniques for diagnosing these errors — including a low-cost crosstalk detection technique, a method (“idle tomography”) for characterizing an important class of crosstalk, and a simple and fast method for quantifying two-qubit gate crosstalk using direct randomized benchmarking. These methods are all available in pyGSTi.
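One simple way to see how crosstalk can be detected statistically (a toy illustration in the spirit of the techniques above, not the QPL protocol itself): crosstalk shows up as a dependence between what is done to one qubit and what is measured on its neighbor, which a contingency-table independence test can flag.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
n = 5000
drive_b = rng.integers(0, 2, n)   # was a gate applied on qubit B? (0 or 1)

# Simulated crosstalk: qubit A's error probability rises from 1% to 5%
# whenever qubit B is driven (assumed rates for illustration).
p_err = np.where(drive_b == 1, 0.05, 0.01)
error_a = rng.random(n) < p_err   # did qubit A's outcome flip?

# 2x2 contingency table of (B driven?) vs (A errored?).
table = np.array([[np.sum((drive_b == b) & (error_a == e)) for e in (0, 1)]
                  for b in (0, 1)])
chi2, pvalue, _, _ = chi2_contingency(table)
print(f"independence-test p-value = {pvalue:.2e}")
```

A tiny p-value rejects independence between the two regions, which is the statistical signature of crosstalk; with no crosstalk the p-value would be uniformly distributed.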

Quantum computers should be stable over time, but the performance of quantum computers typically drifts or abruptly changes. At the QPL we develop techniques for detecting and characterizing drift. We've developed general-purpose instability detection routines, as well as time-resolved GST, RB, and Ramsey spectroscopy. To read more about these techniques, see our article on characterizing and eliminating drift in an ion-trap qubit at Sandia, and our article demonstrating both crosstalk and drift detection on one of IBM's superconducting qubit quantum computers.
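To illustrate the core idea behind spectral drift detection (a toy numpy sketch, not our instability-detection routines): if a circuit's outcome probability oscillates in time, the power spectrum of the time-ordered outcomes concentrates power at the drift frequency, while stable white noise gives a flat spectrum.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 2048                      # number of repeated shots, in time order
t = np.arange(T)

# Assumed drift: the outcome probability oscillates 5 times over the run.
p_t = 0.5 + 0.2 * np.sin(2 * np.pi * 5 * t / T)
x = (rng.random(T) < p_t).astype(float)   # one measured bit per time step

# Power spectrum of the mean-subtracted data; a peak well above the flat
# shot-noise floor signals instability.
power = np.abs(np.fft.rfft(x - x.mean())) ** 2 / T
peak_freq_index = int(np.argmax(power[1:])) + 1   # skip the DC bin
print("dominant frequency index:", peak_freq_index)
```

Here the spectral peak lands at frequency index 5, matching the simulated oscillation; real instability analysis additionally needs a statistically principled detection threshold on the spectrum.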

PyGSTi software package

We develop and maintain the open-source pyGSTi software package, which contains all of our methods for assessing the performance of quantum computers. It is a mature Python package providing powerful tools for simulation, tomography, benchmarking, data analysis, robust reporting, and data visualization. It has extensive documentation and tutorials, and we have written an article describing pyGSTi. Overall, we have put considerable effort into making pyGSTi user-friendly.

pyGSTi was developed as the reference implementation for gate set tomography (GST), but it now contains a vast array of protocols for characterizing noise and errors in qubits and for quantifying their performance, including techniques for holistic testing of hundreds of qubits.

pyGSTi has been used by groups around the world to test and characterize many types of quantum computing hardware. For some interesting uses of pyGSTi, see:

• M. Geller, Rigorous measurement error correction, arXiv:2002.01471 (2020).
• L. Govia et al., Bootstrapping quantum process tomography via a perturbative ansatz, Nature Comm. 11, 1048 (2020).
• M. Joshi et al., Quantum information scrambling in a trapped-ion quantum simulator with tunable range interactions, arXiv:2001.02176 (2020).
• S. Hong et al., Demonstration of a parametrically activated entangling gate protected from flux noise, Phys. Rev. A 101, 012302 (2020).
• T. Proctor et al., Direct randomized benchmarking for multiqubit devices, Phys. Rev. Lett. 123, 030503 (2019).
• K. Rudinger et al., Probing context-dependent errors in quantum processors, Phys. Rev. X 9, 021045 (2019).
• Y. Chen et al., Detector tomography on IBM quantum computers and mitigation of an imperfect measurement, Phys. Rev. A 100, 052315 (2019).
• T. Proctor et al., Detecting, tracking and eliminating drift in quantum information processors, arXiv:1907.13608 (2019).
• M. Sarovar et al., Detecting crosstalk errors in quantum information processors, arXiv:1908.09855 (2019).
• T. Scholten et al., Classifying single-qubit noise using machine learning, arXiv:1908.11762 (2019).
• G. A. K. White et al., Performance optimization for drift-robust fidelity improvement of two-qubit gates, arXiv:1911.12096 (2019).
• S. Mavadia et al., Experimental quantum verification in the presence of temporally correlated noise, npj Quant. Inf. 4, 7 (2018).
• M. Ware et al., Experimental demonstration of Pauli-frame randomization on a superconducting qubit, arXiv:1803.01818 (2018).
• T. Proctor et al., What randomized benchmarking actually measures, Phys. Rev. Lett. 119, 130502 (2017).
• R. Blume-Kohout et al., Demonstration of qubit operations below a rigorous fault tolerance threshold with gate set tomography, Nature Comm. 8, 14485 (2017).
• M. A. Rol et al., Restless tuneup of high-fidelity qubit gates, Phys. Rev. Applied 7, 041001 (2017).
• K. Rudinger et al., Experimental demonstration of a cheap and accurate phase estimation, Phys. Rev. Lett. 118, 190502 (2017).
• J. P. Dehollain et al., Optimization of a solid-state electron spin qubit using gate set tomography, New J. Phys. 18, 103018 (2016).

QPL people

Robin Blume-Kohout is the founder of the QPL, and he is an expert in quantum computer characterization and benchmarking.

Kevin Young co-leads the QPL. He is an expert in the characterization, benchmarking, and modeling of quantum computers.

Erik Nielsen is the lead developer of pyGSTi, and he is an expert in GST and model-based characterization.

Kenny Rudinger is an expert in quantum computer characterization and in applying cutting-edge performance assessment techniques to real quantum computing hardware.

Timothy Proctor leads the QPL's research into quantum computer benchmarking methods, and he is an expert in time-resolved characterization of quantum computers.


Contact Information

The QPL is distributed across multiple Sandia locations, but our primary home is Sandia's main campus in Albuquerque, NM.  Inquiries can be directed by email to qpl@sandia.gov.

If you're interested in working at the QPL, see our job opportunities.