The Quantum Performance Lab is, first and foremost, a research organization. Our goal is to extend the frontiers of understanding the performance of quantum computers and quantum computing components, e.g. qubits, gates, logical components and subroutines, and fully integrated quantum computing systems.
We pursue this goal through mathematical theory, numerical analysis, creation of new algorithms and software, and experimental tests and demonstrations in real-world quantum computing systems. We publish our research in journals and conferences, but we also implement (and test) our research in the pyGSTi open-source software.
Gate set tomography (GST) is a widely used technique for comprehensive, self-calibrating, and high-precision reconstruction of a full set of quantum logic gates. QPL researchers developed GST around 2012, in parallel with researchers at IBM, and we continue to extend and deploy GST today. It is the cornerstone of our pyGSTi software. Our 2017 article demonstrating the use of GST to characterize a trapped-ion qubit is the canonical reference for high-precision long-circuit GST. A more recent survey paper by QPL experts provides a comprehensive treatment of GST.
GST began — and is still primarily known as — a tool for characterizing one- or two-qubit gate sets. But we are actively extending and adapting GST to solve a variety of problems in quantum computer characterization. We are extending GST to more qubits, to physics-based error models, to gate sets containing mid-circuit measurements, and to time-resolved tracking of quantum gates. Many of these cutting-edge techniques are available in pyGSTi.
Quantum computer performance is not simple! It has many facets, corresponding to the various factors that can (and do) limit a quantum computer’s ability to perform tasks. These include error rates, speed and latency, programmability and connectivity, and availability of qubits. To measure these diverse aspects, we research and develop methods for benchmarking quantum computers.
Randomized benchmarking (RB) is a widely used method for measuring the average performance of a set of quantum gates. QPL scientists developed direct randomized benchmarking, which can measure the performance of more qubits at once than the industry-standard RB method. We subsequently developed an even more streamlined RB method that can benchmark hundreds of qubits, and introduced time-resolved RB.
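The core RB analysis fits an exponential decay to survival probabilities versus circuit depth. The following is a minimal, self-contained sketch of that idea in pure Python (not pyGSTi's implementation), using hypothetical SPAM parameters A = B = 0.5 and noiseless synthetic data:

```python
import math

# Toy randomized-benchmarking analysis (illustrative sketch only).
# Survival probability vs. circuit depth m follows P(m) = A * p^m + B,
# where p encodes the average gate quality. A and B are hypothetical,
# assumed-known SPAM parameters for this example.
A, B = 0.5, 0.5
true_p = 0.98
depths = [1, 2, 4, 8, 16, 32, 64]
survival = [A * true_p**m + B for m in depths]  # noiseless synthetic data

# Log-linear least-squares fit for p, assuming A and B are known:
# log((P - B) / A) = m * log(p), so the slope of a line fit gives log(p).
xs = depths
ys = [math.log((P - B) / A) for P in survival]
n = len(xs)
slope = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
        (n * sum(x * x for x in xs) - sum(xs) ** 2)
p_est = math.exp(slope)
r_avg = (1 - p_est) / 2  # standard single-qubit (d = 2) RB error rate
print(f"estimated p = {p_est:.4f}, average error rate = {r_avg:.4f}")
```

With noisy experimental counts, the same fit is done by weighted least squares over many random circuits per depth; this sketch only shows the decay model itself.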
The QPL introduced volumetric benchmarking, which is a framework for diverse and informative benchmarking of quantum computers. We used this framework to demonstrate scalable benchmarking of real quantum computing hardware, and to help construct the first commercially-focused benchmarking suite.
Real quantum computers can suffer from many types of complex errors or noise. Usually, these are either condensed into “error rates” or modeled using quantum process matrices. But real-world faults often don’t fit into these constrained frameworks. A prominent theme of QPL research is the development of theories to understand these subtle and complex errors, and techniques to diagnose them.
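To illustrate the “error rate” condensation, here is a minimal sketch (not pyGSTi code) of how a one-qubit depolarizing process matrix, written in Pauli-transfer-matrix form, reduces to a single average error rate. The depolarizing parameter p = 0.99 is a hypothetical value chosen for the example:

```python
# Toy reduction of a process matrix to an error rate (illustrative only).
# A one-qubit channel's Pauli transfer matrix (PTM) is 4x4; for depolarizing
# noise it is diagonal: diag(1, p, p, p). Standard formulas relate its trace
# to entanglement fidelity and average gate fidelity.
p = 0.99                  # hypothetical depolarizing parameter
ptm_diag = [1.0, p, p, p]  # PTM diagonal of a depolarized 1-qubit idle
d = 2                     # Hilbert-space dimension of one qubit

F_ent = sum(ptm_diag) / d**2        # entanglement fidelity: Tr(R) / d^2
F_avg = (d * F_ent + 1) / (d + 1)   # average gate fidelity
error_rate = 1 - F_avg              # for depolarizing noise this is (1 - p)/2
print(f"average gate fidelity = {F_avg:.4f}, error rate = {error_rate:.4f}")
```

The point of the surrounding paragraph is that this one-number summary, and even the full process matrix, can miss structure in real-world noise (e.g. drift or crosstalk), which is why richer diagnostics are needed.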
Quantum computers often suffer from “crosstalk” — unwanted interactions between different qubits. QPL researchers developed a theory for understanding crosstalk errors, and a suite of techniques for diagnosing them. These include a low-cost crosstalk detection technique, a technique called idle tomography for detailed characterization of many crosstalk errors, a fast and simple method for quantifying two-qubit gate crosstalk using direct randomized benchmarking, and a more in-depth technique called simultaneous GST. These methods are all available in pyGSTi.
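As a flavor of the low-cost detection idea, the sketch below (illustrative only, not the published protocol) flags possible crosstalk when one qubit's outcome distribution shifts depending on what is applied to a neighboring qubit. The distributions and threshold here are hypothetical:

```python
# Toy crosstalk detection sketch (not the QPL's actual protocol).
# Idea: if qubit A's outcome distribution changes depending on what is
# applied to qubit B, the two qubits are not behaving independently.

def tvd(p, q):
    """Total variation distance between two outcome distributions."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

# Hypothetical measured distributions of qubit A's outcomes (0 and 1):
p_A_idle   = [0.95, 0.05]   # while qubit B idles
p_A_driven = [0.90, 0.10]   # while qubit B is driven

threshold = 0.02  # hypothetical detection threshold
print(tvd(p_A_idle, p_A_driven) > threshold)  # True -> possible crosstalk
```

A real protocol would account for finite-sample fluctuations with a statistical hypothesis test rather than a fixed threshold; this sketch only conveys the comparison being made.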
Ideally, quantum computers should be stable. But in practice, their performance drifts or changes over time. QPL researchers developed methods for detecting and characterizing drift. These include broad-spectrum tests for detecting instability; time-resolved versions of GST, RB and Ramsey spectroscopy; and techniques for quantifying deviation from time-stationary Markovian behavior.
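The broad-spectrum instability tests are based on Fourier power spectra of time-series data. The toy sketch below (pure Python, not pyGSTi's implementation) shows the core idea: a drifting outcome probability produces a large peak at a nonzero frequency, while a stable one does not. The signal amplitudes and frequency are hypothetical:

```python
import math

# Toy spectral instability test (illustrative sketch only).

def power_spectrum(xs):
    """Unnormalized Fourier power at each nonzero frequency of a mean-subtracted series."""
    n = len(xs)
    mean = sum(xs) / n
    centered = [x - mean for x in xs]
    powers = []
    for k in range(1, n // 2 + 1):
        re = sum(x * math.cos(2 * math.pi * k * t / n) for t, x in enumerate(centered))
        im = sum(x * math.sin(2 * math.pi * k * t / n) for t, x in enumerate(centered))
        powers.append(re * re + im * im)
    return powers

n = 64
stable   = [0.5] * n                                                # time-stationary probability
drifting = [0.5 + 0.1 * math.sin(2 * math.pi * 3 * t / n) for t in range(n)]  # oscillating

print(max(power_spectrum(stable)))    # essentially zero: no evidence of drift
print(max(power_spectrum(drifting)))  # large peak at the drift frequency
```

Real data are finite-sample counts rather than exact probabilities, so the actual tests compare each spectral power against a statistical threshold (sampling noise produces small but nonzero powers everywhere).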
Even “standard” Markovian errors, which can be described by process matrices, are often obscure and hard to analyze in practice — especially those afflicting multiqubit systems. The QPL pioneered a new framework for describing Markovian errors that makes interpreting these errors easy. QPL researchers used this “error generator” framework in collaboration with experimental physicists to achieve precision tomography of a 3-qubit silicon processor. This framework opens the door to adaptive model-flexible characterization techniques for more qubits.
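In one common convention, the error-generator picture writes a noisy gate's process matrix G as exp(L) applied to its ideal target G0, so L = log(G G0^{-1}). The sketch below (illustrative, diagonal case only, not pyGSTi code) computes L for a hypothetical depolarized idle gate, where the matrix logarithm reduces to elementwise logs:

```python
import math

# Toy "error generator" computation (illustrative; diagonal case only).
# Target: a 1-qubit idle gate, whose Pauli transfer matrix (PTM) is the
# 4x4 identity. Noise: depolarizing, whose PTM is diag(1, p, p, p).
# Both matrices are diagonal, so L = log(G * G0^{-1}) is elementwise.

p = 0.99                                # hypothetical depolarizing parameter
G_noisy_diag  = [1.0, p, p, p]          # PTM diagonal of the noisy idle
G_target_diag = [1.0, 1.0, 1.0, 1.0]    # PTM diagonal of the ideal idle

L_diag = [math.log(g / g0) for g, g0 in zip(G_noisy_diag, G_target_diag)]
print(L_diag)  # [0.0, log(p), log(p), log(p)]: a purely stochastic generator
```

For non-diagonal gates the full matrix logarithm is needed, and the payoff of the framework is that L decomposes into interpretable sectors (e.g. Hamiltonian vs. stochastic errors) even when G itself is hard to read.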
We develop and maintain the open-source pyGSTi software package. It provides researchers and engineers around the world with optimized, reliable implementations of the QPL’s methods for assessing performance of quantum computers. PyGSTi is a mature Python package providing powerful tools for simulation, tomography, benchmarking, data analysis, robust reporting and data visualization. In addition to extensive documentation and tutorials, a survey article describes the capabilities of pyGSTi.
Although pyGSTi originated as a reference implementation for gate set tomography (GST), it grew to contain many protocols for characterizing quantum computing components and quantifying their performance. These include focused characterization protocols for one or two qubits, and holistic benchmarks for hundreds of qubits.
PyGSTi has been used by groups around the world to test and characterize many types of quantum computing hardware.