# Will Quantum Computers without Error Correction be Useful for Optimization?

Authors: Daniel Stilck Franca, Raul Garcia-Patron

First Author’s Institution: University of Copenhagen

Status: Pre-print on arXiv

## The Big Idea

The authors develop a theoretical technique to identify situations where a noisy quantum computer without error correction loses to current classical optimization methods. They use the technique to provide estimates for many popular quantum algorithms running on near-term devices, including variational quantum eigensolvers and quantum approximate optimization (both gate-based), as well as quantum annealing (think D-Wave). They find that for quantum computers without error correction, “substantial quantum advantages are unlikely for optimization unless current noise rates are decreased by orders of magnitude or the topology of the problem matches that of the device. This is the case even if the number of qubits increases.”

The authors’ conclusion allows other researchers to identify and focus on the slim regime of experiments where quantum advantage without error correction is still possible, and to shift more effort toward the development of error-corrected quantum computers. Even seemingly “negative” results advance the field in meaningful ways.

## Currently in Quantum Computing

Current quantum computers are noisy (errors frequently occur) and of intermediate size (~50-100 qubits): large enough to compete with the best classical computers on (almost) useless problems, but not large enough to be fault-tolerant. A fault-tolerant quantum computer has enough error correction that errors mid-calculation do not affect the final result. Fault tolerance requires more and better qubits than are available today. The community is fairly confident fault-tolerant quantum computers will outperform classical computers on many useful problems, but it is unclear whether noisy, intermediate-scale quantum (NISQ) computers can do the same.

Optimization problems are a very useful, profitable, and ubiquitous class of problems where the goal is to minimize or maximize something: cost, energy, path length, etc. Optimization problems occur everywhere, from financial portfolios to self-driving cars, and are often NP-hard, meaning they are widely believed to be intractable for classical computers to solve exactly. Comparing computers on their ability to solve optimization problems has two benefits. First, it is easy to see which computer’s solution is better. Second, if quantum computers have an advantage, there are immediate applications.

## How to Tell if a NISQ Computer is a Poor Optimizer

Get familiar with the state diagram: I’ll be using it to explain the entire technique!

The optimization task is to minimize a “cost function,” which takes an input and assigns it a “cost” (the function’s output). Here, the input is the quantum state of the device $\rho$, and the “cost” is the energy of the state. The function is characterized by a linear operator $H$ (usually a Hamiltonian). Every $H$ has a family of thermal equilibrium states (inputs) that do not change with time: for every temperature, there is a unique thermal equilibrium state (Gibbs state for short), labelled $\sigma_\beta$. Here $\beta$ represents the inverse of temperature, so $\beta = 0$ is “burning hot,” though for such a tiny device “hot” just means randomly scrambled (i.e. noisy). Likewise, $\beta = \infty$ is absolute zero temperature, meaning everything is perfectly in order and functioning as intended (i.e. noiseless).
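To make this concrete, here is a minimal Python sketch (my own illustration with a toy two-qubit Hamiltonian, not code or numbers from the paper) that builds the Gibbs state $e^{-\beta H}/\mathcal{Z}$ and shows the energy sliding from the fully scrambled value at $\beta = 0$ down to the true minimum as $\beta$ grows:

```python
import numpy as np

def gibbs_state(H, beta):
    """Thermal equilibrium (Gibbs) state e^{-beta*H} / Z for a Hermitian H."""
    evals, evecs = np.linalg.eigh(H)
    w = np.exp(-beta * (evals - evals.min()))  # shift spectrum for numerical stability
    w /= w.sum()                               # normalizing divides out Z
    return (evecs * w) @ evecs.conj().T        # sum_i w_i |v_i><v_i|

def energy(H, beta):
    """Cost-function value Tr[H * sigma_beta] at inverse temperature beta."""
    return float(np.real(np.trace(H @ gibbs_state(H, beta))))

# Toy cost function: a two-qubit Ising term H = -Z (x) Z (my choice for illustration).
Z = np.diag([1.0, -1.0])
H = -np.kron(Z, Z)  # minimum energy -1, attained by |00> and |11>

# beta = 0 ("burning hot" / fully scrambled) gives the average energy 0;
# large beta (near absolute zero) approaches the true minimum -1.
print(energy(H, 0.0), energy(H, 1.0), energy(H, 5.0))
```

For this toy $H$ the Gibbs energy works out to exactly $-\tanh\beta$: zero when fully scrambled, approaching the ground-state energy $-1$ as the state cools.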

Figure 1 from the paper shows device states (labelled black dots) at various points in a quantum computation. The noisy quantum computer with $n$ qubits, initialized in the state $\rho=|0\rangle^{\otimes n}$, attempts to follow the absolute zero temperature path (orange arrow) through the space of “noiseless” quantum states (black Bloch sphere) and arrive at the true answer $\sigma_{\beta=\infty}$. However, the real computation takes a noisy path (black arrow) that, after enough time, leads to the steady state of the noise at $\sigma_{\beta=0}$.

The quantum device state at an intermediate point in the real computation, $\Phi(\rho)$, is too difficult to simulate classically (NP complexity), but for each intermediate state $\Phi(\rho)$, there is a set of states (located on the blue line) with the same cost function value (i.e. energy) to within some error $\epsilon$. One of those equal-energy states is the Gibbs state at inverse temperature $\beta$, denoted in the diagram by $e^{-\beta H} / \mathcal{Z}$. In fact, we can “mirror” the real path of the quantum device (black arrow) as it progresses in time by matching every intermediate state with a Gibbs state, producing a “mirror descent” (green line) from the correct answer to the steady state of the noise $\sigma_{\beta=0}$.
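The matching step amounts to solving $\mathrm{Tr}[H\sigma_\beta] = E$ for $\beta$, where $E$ is the energy of the device state. Since the Gibbs energy decreases monotonically in $\beta$, a simple bisection suffices. The sketch below is my own illustration on a toy Hamiltonian, not the authors’ code:

```python
import numpy as np

def gibbs_energy(H, beta):
    """Energy Tr[H * sigma_beta] of the Gibbs state, via the spectrum of H."""
    evals = np.linalg.eigvalsh(H)
    w = np.exp(-beta * (evals - evals.min()))  # unnormalized Boltzmann weights
    return float((evals * w).sum() / w.sum())

def matching_beta(H, device_energy, beta_max=50.0, tol=1e-10):
    """Bisect for the inverse temperature whose Gibbs state has the same
    energy as the device state (gibbs_energy is monotone decreasing in beta)."""
    lo, hi = 0.0, beta_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if gibbs_energy(H, mid) > device_energy:
            lo = mid   # need a colder (larger beta) state to lower the energy
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy example: for H = -Z (x) Z the Gibbs energy is exactly -tanh(beta),
# so a device state with energy -tanh(2) should map back to beta = 2.
Z = np.diag([1.0, -1.0])
H = -np.kron(Z, Z)
beta = matching_beta(H, -np.tanh(2.0))
```

Bisection is a reasonable choice here precisely because the Gibbs energy is a strictly decreasing function of $\beta$, so the equation has a unique solution on the interval.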

For any class of optimization problems, there is a critical threshold $\beta_C$ below which the Gibbs state is guaranteed to be efficiently computable classically, providing a cost function estimate better than the current quantum state $\Phi(\rho)$: this threshold is depicted in red and labelled $\mathcal{C}$. Once a quantum computation passes the threshold (i.e. has an equivalent Gibbs state with $\beta<\beta_C$), one has certified classical superiority. All that remains to use this technique is a method for quantifying how far along in $\beta$ the associated quantum state $\Phi(\rho)$ is.

The authors quantify the distance between the quantum state $\Phi(\rho)$ and the steady state of the noise $\sigma_{\beta=0}$ by the relative entropy (for our purposes, relative entropy is simply a way to measure how far one quantum state is from another). The relative entropy only decreases throughout the computation (meaning $\Phi(\rho)$ is always moving towards $\sigma_{\beta=0}$, never away). Sparing the proof (see the paper), it is possible to identify an upper bound on the relative entropy (the distance between $\Phi(\rho)$ and $\sigma_{\beta=0}$) at any time during the computation. If the Gibbs state corresponding to the upper bound has $\beta<\beta_C$, the noisy quantum calculation provides a certifiably worse answer than a classical optimizer would.
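As a toy illustration of both ingredients (again my own sketch, using a simple global depolarizing channel rather than the paper's local noise model), the snippet below computes the relative entropy to the noise's fixed point and checks that it only shrinks as noise layers are applied:

```python
import numpy as np

def _logm(A, eps=1e-12):
    """Matrix logarithm of a positive semidefinite matrix (eigenvalues clipped)."""
    evals, evecs = np.linalg.eigh(A)
    return (evecs * np.log(np.clip(evals, eps, None))) @ evecs.conj().T

def relative_entropy(rho, sigma):
    """Quantum relative entropy D(rho || sigma) = Tr[rho (log rho - log sigma)]."""
    return float(np.real(np.trace(rho @ (_logm(rho) - _logm(sigma)))))

def depolarize(rho, p):
    """One layer of global depolarizing noise: mix toward the maximally mixed state."""
    d = rho.shape[0]
    return (1 - p) * rho + p * np.eye(d) / d

d = 4
rho = np.zeros((d, d)); rho[0, 0] = 1.0   # pure starting state
sigma0 = np.eye(d) / d                     # fixed point of the noise (the beta = 0 state)

# The distance to the noise's steady state only shrinks as noise is applied.
distances = []
for _ in range(6):
    distances.append(relative_entropy(rho, sigma0))
    rho = depolarize(rho, 0.1)
```

The monotone shrinking seen here is an instance of the data processing inequality: applying any noise channel whose fixed point is $\sigma_{\beta=0}$ can never increase the relative entropy to it.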

Any noiseless quantum optimization computation produces better answers the longer it is allowed to run, but in real processes, the longer the computation, the more the noise corrupts the answer. This technique gives a hard limit on how long a computation can be run for some quantum noise/device/algorithm combination before the noise makes the process worse than classical optimization.
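As a back-of-envelope illustration of such a limit, suppose (these are assumptions for illustration, not the paper's actual bounds) that the relative entropy to the noise's fixed point contracts by a fixed factor $q$ per circuit layer, and that classical methods take over once it falls below some threshold; the maximal useful depth then follows from simple algebra:

```python
import numpy as np

def max_useful_depth(D0, D_threshold, q):
    """Largest circuit depth L with q**L * D0 >= D_threshold, where q < 1 is
    an assumed per-layer contraction factor for the relative entropy."""
    if D_threshold >= D0:
        return 0
    return int(np.floor(np.log(D_threshold / D0) / np.log(q)))

# Made-up but plausible numbers: n qubits start a relative entropy of n*log(2)
# from the maximally mixed state; assume each noisy layer contracts that
# distance by q = (1 - p)**2 for per-gate error p, and that classical methods
# take over once the bound falls to 10% of its starting value.
n, p = 50, 0.01
D0 = n * np.log(2)
depth_limit = max_useful_depth(D0, 0.1 * D0, (1 - p) ** 2)
```

With these hypothetical numbers the limit lands at around a hundred layers, which conveys the flavor of the paper's conclusion: exponential contraction under noise caps the useful depth regardless of qubit count.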

## Takeaways from the Authors

This technique requires one to choose the noise model, device parameters, optimization technique, and target problem before “ruling out” the quantum advantage of any such combination. This freedom is very powerful. One can use this technique to find regions of classical superiority for all near-term quantum optimization devices, as well as to quantify the reduction in noise level required for a chance at quantum advantage.

Let us return to fault-tolerance for a moment. If the reduction in noise required for a NISQ device/algorithm to reach quantum advantage is below the noise threshold where fault-tolerance becomes possible, it will likely be better to perform fault tolerant operation instead. Any NISQ device/algorithm revealed by this technique to have such a noise reduction requirement will likely be a bad optimizer for its entire existence, rendered obsolete by current classical superiority followed by the rise of fault-tolerant quantum computing.

The authors consider variational quantum eigensolvers, quantum annealing, and quantum approximate optimization with currently available superconducting qubit devices and simple noise models, finding a noise reduction requirement below the fault-tolerance threshold. However, accurate noise models are often far more complex and device-specific than the local depolarizing noise the authors used for quantum circuits, so some hope of quantum advantage in optimization remains for existing devices. Further, the authors found current NISQ devices have a chance at quantum advantage in the arena of Hamiltonian simulation, a related and important problem (though perhaps not as profitable as optimization).

The most interesting applications of this technique may be yet to come, as more scientists begin applying it to a wider variety of quantum computations. At its best, this technique can guide experimental NISQ devices into parameter regions with a real chance at quantum advantage and reveal which emerging quantum devices have the best shot at achieving it.

Thanks for reading – I’ll answer questions in the comments, and don’t be afraid to look at the paper (open access on arXiv) if you are curious about the details.

Matthew Kowalsky is a graduate student at the University of Southern California whose research focuses on quantum annealing, open quantum system modeling, and benchmarking dedicated optimization hardware.