Quadratic speedups aren’t enough in the near term

Title: Focus beyond quadratic speedups for error-corrected quantum advantage

Authors: Ryan Babbush, Jarrod McClean, Michael Newman, Craig Gidney, Sergio Boixo, Hartmut Neven

First Author’s Institution: Google

Status: Pre-print on arXiv

Now is an exciting time for the world of quantum computing. If estimates from researchers at IBM and Google are to be believed, we may have access to a modest fault-tolerant quantum computer with thousands of physical qubits within the next five years. One critical question is whether these devices will offer any runtime advantage over classical computers for practical problems. In this paper by researchers at Google Quantum AI, the authors argue that constant-factor overheads from error correction and device operation may ruin the quantum advantage for anything less than a quartic speedup in the near term. The cubic case is borderline, but the main message is that quadratic speedups will not be enough unless we significantly improve the way we perform error correction.

Note: In this paper, an ‘order-d quantum speedup’ means that some problem can be solved by a classical algorithm using M^d applications of a classical primitive circuit (also called an ‘oracle’), while a quantum algorithm solves the same problem with only M calls to a quantum oracle. Grover search is the canonical d = 2 example: searching an unstructured database of N items takes N = M^2 classical queries but only M = \sqrt{N} quantum ones.

The results of the paper are a little disheartening, because many of the most relevant quantum algorithms for industrial applications offer only quadratic speedups! Examples include unstructured search with Grover’s algorithm, some machine learning subroutines, and combinatorial optimization – things any self-respecting tech company would love to accelerate. We may have to wait a while for quantum computers to revolutionize the tech industry, but quantum runtime advantage for algorithms with exponential speedups, like Hamiltonian simulation, could be just around the corner.

Graphical illustration of constant factors that may ruin quantum runtime advantage (Figure 1 of the paper). On the left, large amounts of space and time are required to ‘distill’ a Toffoli gate out of quantum primitives, whereas the corresponding classical circuit (right) is built out of just a few transistors. Such basic logic operations are much more expensive to perform in the quantum case.

Methods: Estimating a quantum speedup

The paper focuses on a specific error-correcting protocol, the surface code, which is widely considered the most practical choice for Google’s architecture. Suppose we want to run an algorithm with an order-d polynomial quantum speedup. If a call to the quantum oracle takes time t_Q and a classical oracle call takes time t_C, then the total runtimes are M t_Q for the quantum algorithm and M^d t_C for the classical one, so realizing a quantum speedup requires that

M > \left(\frac{t_Q}{t_C}\right)^{\frac{1}{d-1}}.

This condition is a bit naive: it assumes the classical algorithm cannot be parallelized.

We can get closer to the real world by incorporating Amdahl’s law for the speedup from parallelizing a classical algorithm. This law states that if \alpha is the fraction of the algorithm’s runtime that cannot be parallelized and P is the number of processors used, then the runtime can be reduced by a factor

S = \left(\alpha + \frac{1-\alpha}{P} \right)^{-1}.
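To make this concrete, here is a minimal Python sketch of Amdahl’s law; the serial fraction \alpha = 0.05 and processor count P = 1000 below are illustrative values of my choosing, not numbers from the paper.

```python
def amdahl_speedup(alpha: float, p: int) -> float:
    """Speedup S from p processors when a fraction alpha of the
    runtime is inherently serial (Amdahl's law)."""
    return 1.0 / (alpha + (1.0 - alpha) / p)

# Illustrative values (not from the paper): even with 1000 processors,
# a 5% serial fraction caps the achievable speedup at about 20x.
print(amdahl_speedup(alpha=0.05, p=1000))  # ~19.6
```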

Adding this parallelization speedup, the condition for quantum advantage becomes

M > \left(\frac{S t_Q}{t_C}\right)^{\frac{1}{d-1}}.

The rest of the paper deals with estimating the parameters S, t_Q, and t_C for realistic scenarios.
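Given those parameters, the breakeven point itself is a one-liner. Below is a minimal sketch of the condition above; the function and variable names are my own, not code from the paper.

```python
def breakeven(t_q: float, t_c: float, d: float, s: float = 1.0):
    """Return (M*, T*): the minimum number of quantum oracle calls for
    advantage, M* = (s * t_q / t_c)**(1 / (d - 1)), and the corresponding
    total quantum runtime T* = M* * t_q (in the same time units as t_q)."""
    m_star = (s * t_q / t_c) ** (1.0 / (d - 1.0))
    return m_star, m_star * t_q
```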

Conclusion: Quadratic Speedups Could Take Hundreds of Years

In the paper, the authors consider two scenarios: 1) a hypothetical ‘lower bound’ based on hardware limitations for the surface code, where t_Q \geq 17 \ \text{ms} and t_C \leq 33 \ \text{ns}, and 2) a specific algorithm for solving the Sherrington-Kirkpatrick model via simulated annealing, where t_Q = 440 \ \text{ms} and t_C = 7 \ \text{ns}. The ‘lower bound’ gives an estimate for the best speedup that can reasonably be achieved using Google’s quantum computing hardware, while the simulated annealing example is meant to ground their estimates in a specific, practical algorithm that has already been studied in depth. These numbers feed into Table 1 of the paper, which lists, for several values of d and S, the number of quantum oracle calls M and the total runtime \mathcal{T}^* required to realize a quantum speedup.
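The headline numbers are easy to sanity-check from the breakeven formula above. The sketch below uses only the scenario values quoted in this post, with S = 1 (no classical parallelism); the paper’s Table 1 also considers larger S, which only makes things harder for the quantum side, and its exact entries may differ in rounding.

```python
# Scenario values quoted above (t_Q, t_C) in seconds; this inlines the
# breakeven formula M* = (t_Q / t_C)**(1 / (d - 1)) with S = 1.
scenarios = {"surface-code lower bound": (17e-3, 33e-9),
             "simulated annealing (SK model)": (440e-3, 7e-9)}

for name, (t_q, t_c) in scenarios.items():
    for d in (2, 3, 4):
        m_star = (t_q / t_c) ** (1.0 / (d - 1))  # breakeven oracle calls
        t_star = m_star * t_q                    # total quantum runtime (s)
        print(f"{name}, d={d}: M* ~ {m_star:.2e}, T* ~ {t_star:.3g} s")

# For d=2 this gives ~8.8e3 s (a few hours) in the lower-bound case and
# ~2.8e7 s (roughly 320 days) for simulated annealing, as described below.
```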

Even in the most generous ‘lower bound’ case, realizing quantum advantage with a quadratic speedup would take several hours of compute time! And, for a more realistic model using simulated annealing, you would need almost a year to see any runtime advantage at all. The situation is much more promising for quartic speedups, where the runtime needed is on the order of minutes and seconds rather than days or years. The moral of the story is that we either need to speed up our surface code protocols significantly or start looking beyond quadratic speedups for practical runtime advantage in the near term.
