# Quadratic speedups aren’t enough in the near term

Authors: Ryan Babbush, Jarrod McClean, Craig Gidney, Sergio Boixo, Hartmut Neven

Status: Pre-print on arXiv

Now is an exciting time for the world of quantum computing. If estimates from researchers at IBM and Google are to be believed, we may have access to a modest fault-tolerant quantum computer with thousands of physical qubits within the next five years. One critical question is whether these devices will offer any runtime advantage over classical computers for practical problems. In this paper by researchers at Google Quantum AI, the authors argue that constant-factor overheads from error correction and device operation may ruin the quantum advantage for anything less than a quartic speedup in the near term. The cubic case is somewhat ambiguous, but the main message is that quadratic speedups will not be enough unless we significantly improve the way we perform error correction.

Note: In this paper, an 'order-$d$ quantum speedup' means that there is a classical algorithm solving some problem using $M^d$ applications of a classical primitive circuit (also called an 'oracle'), and a quantum algorithm solving the same problem making only $M$ calls to a quantum oracle.

The results of the paper are a little disheartening, because many of the most relevant quantum algorithms for industrial applications offer only quadratic speedups! Examples include database searching with Grover's algorithm, some machine learning subroutines, and combinatorial optimization – things any self-respecting tech company would love to improve. We may be waiting a while for quantum computers that revolutionize the tech industry, but quantum runtime advantage for algorithms with exponential speedups, like Hamiltonian simulation, could be just around the corner.

## Methods: Estimating a quantum speedup

The paper focuses on a specific error-correcting protocol, the surface code, which is widely considered the most practical choice for Google’s architecture. Suppose we want to run an algorithm with an order-$d$ polynomial quantum speedup. If a call to the quantum oracle takes time $t_Q$ and a classical oracle call takes time $t_C$, then the total quantum runtime $M t_Q$ beats the classical runtime $M^d t_C$ only when

$M > \left(\frac{t_Q}{t_C} \right)^{1/(d-1)}$.

This condition is a bit naive: it assumes the classical algorithm cannot be parallelized.
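As a quick sanity check on this condition, the naive break-even point is easy to compute directly. The sketch below uses made-up oracle times (a quantum call a million times slower than a classical one), not figures from the paper:

```python
def min_oracle_calls(t_q, t_c, d):
    """Smallest number of oracle calls M at which a quantum algorithm with
    per-call time t_q overtakes a classical algorithm with per-call time t_c
    and total cost M**d calls: M > (t_q / t_c) ** (1 / (d - 1))."""
    return (t_q / t_c) ** (1.0 / (d - 1))

# Illustrative only: t_q = 1 ms, t_c = 1 ns, so t_q / t_c = 1e6.
for d in (2, 3, 4):
    m = min_oracle_calls(t_q=1e-3, t_c=1e-9, d=d)
    print(f"d={d}: need M > {m:.2e} calls, quantum runtime > {m * 1e-3:.2e} s")
```

Even in this toy setting, the pattern is clear: the break-even problem size (and hence runtime) shrinks dramatically as $d$ grows.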

We can get closer to the real world by incorporating Amdahl’s law for the speedup from parallelizing a classical algorithm. This law states that if $\alpha$ is the fraction of the algorithm’s runtime that cannot be parallelized and $P$ is the number of processors used, then the runtime can be reduced by at most a factor

$S = \left(\alpha + \frac{1-\alpha}{P} \right)^{-1}$.
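Amdahl’s law is a one-liner to evaluate; a minimal sketch (the 1% serial fraction below is an arbitrary example, not from the paper):

```python
def amdahl_speedup(alpha, p):
    """Amdahl's law: maximum speedup from p processors when a fraction
    alpha of the runtime is inherently serial."""
    return 1.0 / (alpha + (1.0 - alpha) / p)

# Even with 1000 processors, a 1% serial fraction caps the speedup near 91x.
print(amdahl_speedup(0.01, 1000))
```

Note how quickly the serial fraction dominates: past a certain point, adding processors barely moves $S$.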

Dividing the classical runtime by $S$ and repeating the comparison above, the condition for a quantum speedup tightens to

$M > \left( \frac{St_Q}{t_C}\right)^{1/(d-1)}$.

The rest of the paper deals with estimating the parameters $S$, $t_Q$, and $t_C$ for realistic scenarios.
In the paper, the authors consider two scenarios:

1) a hypothetical ‘lower bound’ based on hardware limitations for the surface code, where $t_Q \geq 17 \ \text{ms}$ and $t_C \leq 33 \ \text{ns}$; and
2) a specific algorithm for solving the Sherrington-Kirkpatrick model via simulated annealing, where $t_Q = 440 \ \text{ms}$ and $t_C = 7 \ \text{ns}$.

The ‘lower bound’ gives an estimate for the best speedup that can reasonably be achieved using Google’s quantum computing hardware, while the simulated annealing example is meant to ground their estimates with a specific, practical algorithm which has already been studied in depth. This results in the following table (Table 1 in the paper), where $M$ is the number of quantum oracle calls and $\mathcal{T}^*$ is the total runtime required to realize a quantum speedup, given for several values of $d$ and $S$.
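Combining the two formulas, the break-even point for each scenario can be sketched in a few lines. This assumes $S = 1$ (no classical parallelism) and uses only the oracle times quoted above, so treat the output as a rough back-of-envelope check rather than a reproduction of the paper’s table:

```python
def break_even(t_q, t_c, d, s=1.0):
    """Return (M, T): the minimum number of quantum oracle calls M needed
    for advantage, M > (s * t_q / t_c) ** (1 / (d - 1)), and the
    corresponding quantum runtime T = M * t_q in seconds."""
    m = (s * t_q / t_c) ** (1.0 / (d - 1))
    return m, m * t_q

# Oracle times quoted in the text, with S = 1 (no classical parallelism).
scenarios = {
    "surface-code lower bound": (17e-3, 33e-9),
    "SK model via annealing": (440e-3, 7e-9),
}
for name, (t_q, t_c) in scenarios.items():
    for d in (2, 3, 4):
        m, t = break_even(t_q, t_c, d)
        print(f"{name}, d={d}: M > {m:.2e}, runtime > {t:.2e} s")
```

For $d = 2$ the break-even runtimes land in the hours-to-months range, while for $d = 4$ they drop to seconds, which is the crux of the paper’s argument.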