Thank you for the explanation. It's still an upwards update on the qubit timelines of https://arxiv.org/pdf/2009.05045 (see Fig. 7), but not by an insane amount. We've hit their 95%-expectation milestone for qubit progress (1 logical qubit), projected for 2026, in late 2024 (2024.92) instead.
Which to be clear is quite a bit faster than expected in 2020, but still within the realm of plausible stuff.
Nice, thanks for linking that paper. I also did below.
The authors argue (e.g. in the first comment here https://scottaaronson.blog/?p=8310#comments) that by their definition, Google still only has a fraction of one logical qubit. Their logical error rate is of order 1e-3, whereas this paper considers a logical qubit to have an error rate of order 1e-18. Google's breakthrough here is to show that the logical error rate can be reduced exponentially as they make the system larger, but there is still a lot of scaling work to do to reach 1e-18.
So according to this paper, we are still on roughly the same track that they laid out, and therefore might expect to break RSA between 2040 and 2060. Note that there are likely a lot of interesting things one can do before breaking RSA, which is among the hardest quantum algorithms to run.
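To get a feel for how far 1e-3 is from 1e-18, here's a quick back-of-envelope script. The factor-of-2 suppression per step is purely illustrative (it's not a number from this paper or from Google's announcement):

```python
# Back-of-envelope, with purely illustrative numbers: how many factor-of-2
# reductions in logical error rate separate ~1e-3 from the ~1e-18 target?
import math

current, target = 1e-3, 1e-18
halvings = math.log2(current / target)
print(f"need a factor of {current / target:.0e} reduction, i.e. about {halvings:.0f} halvings")
# If each step up in code size roughly halved the logical error rate, that would
# be ~50 such steps; a larger per-step suppression factor would need far fewer.
```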
How should I understand the claim that the logical error rate can be reduced exponentially as the system gets larger? Does it mean that each physical qubit generates more logical qubits as the total number of physical qubits increases?
A simple classical example where this is true is a repetition code. If you represent 1 as 11...1 and 0 as 00...0, randomly flip some bits, and then recover the logical state with majority voting, the probability of a logical error occurring shrinks exponentially as you add more bits to it. This is because a logical error requires flipping at least half the bits, and the probability of that happening decreases exponentially.
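If you want to see the suppression concretely, here's a small Monte Carlo sketch of exactly this (the flip probability and code sizes are just illustrative values I picked):

```python
# Minimal sketch: logical error rate of a classical repetition code under
# independent bit flips. Majority voting fails only when more than half of the
# n bits flip, so the logical error rate drops roughly exponentially with n
# (for flip probability p < 0.5).
import random

def logical_error_rate(n_bits, p_flip, n_trials=100_000):
    """Fraction of trials where majority voting decodes the wrong logical value."""
    errors = 0
    for _ in range(n_trials):
        flips = sum(random.random() < p_flip for _ in range(n_bits))
        if flips > n_bits / 2:  # majority of bits flipped -> logical error
            errors += 1
    return errors / n_trials

if __name__ == "__main__":
    p = 0.1  # illustrative physical bit-flip probability
    for n in (1, 3, 5, 7, 9, 11):  # odd sizes so majority voting has no ties
        print(f"n = {n:2d}  logical error rate ~ {logical_error_rate(n, p):.2e}")
```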
The error correcting code used in this work also has a nice intuitive explanation. Imagine a 2D grid of bits, visualized as pixels that can be black or white. Now imagine drawing a bunch of lines of black pixels, and enforcing that lines can only end on the top or bottom boundary (loops are allowed). If there is an even number of lines connecting the top to the bottom, we call that logical 0, and if there is an odd number of lines, we call that logical 1. This again has the property that as you add more bits, the probability of changing between logical 1 and 0 gets exponentially smaller, because the lines connecting top to bottom get longer (just like in the repetition code).
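Here's a rough classical caricature of that picture (not the real surface code or its decoder, and the probabilities and helper name are my own): flip pixels at random on an L x L grid and check whether the flipped pixels happen to form a connected chain all the way from the top row to the bottom row, which is roughly what it would take to create a new "line" and change the logical value.

```python
# Rough classical caricature: estimate the chance that random pixel flips on an
# L x L grid form a connected path (4-neighbour) from the top row to the bottom
# row. For small flip probability p this becomes exponentially unlikely as L grows.
import random
from collections import deque

def spanning_error_probability(L, p, trials=20_000):
    """Fraction of trials where flipped pixels connect top to bottom."""
    hits = 0
    for _ in range(trials):
        grid = [[random.random() < p for _ in range(L)] for _ in range(L)]
        # breadth-first search starting from every flipped pixel in the top row
        queue = deque((0, c) for c in range(L) if grid[0][c])
        seen = set(queue)
        while queue:
            r, c = queue.popleft()
            if r == L - 1:  # reached the bottom row: a spanning chain exists
                hits += 1
                break
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < L and 0 <= nc < L and grid[nr][nc] and (nr, nc) not in seen:
                    seen.add((nr, nc))
                    queue.append((nr, nc))
    return hits / trials

if __name__ == "__main__":
    p = 0.2  # illustrative pixel-flip probability
    for L in (3, 5, 7, 9):
        print(f"L = {L}  P(spanning chain) ~ {spanning_error_probability(L, p):.4f}")
```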
This code also has the nice property that if you measure the value of a small patch of (qu)bits, there's no way to tell what the logical state is. This is important for quantum error correction, because measurement destroys quantum information. So the fact that local measurements don't reveal the logical state means that the logical state is protected. This isn't true for the repetition code, where measuring a single bit tells you the logical state.
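A classical stand-in for that "local measurements reveal nothing" property (my own illustration, not anything from Google's paper): if the logical bit is the parity of all physical bits, then any strict subset of the bits is equally consistent with logical 0 and logical 1, unlike the repetition code where a single bit gives it away.

```python
# Classical stand-in: for a code where the logical bit is the parity of all n
# physical bits, the distribution of any small "patch" of bits is identical for
# logical 0 and logical 1, so the patch carries no logical information.
from collections import Counter
from itertools import product

n = 4  # small enough to enumerate every codeword

def patch_distribution(logical, patch_size):
    """Distribution of the first `patch_size` bits over all words with the given parity."""
    words = [bits for bits in product((0, 1), repeat=n) if sum(bits) % 2 == logical]
    return Counter(bits[:patch_size] for bits in words)

print("parity code, first 2 bits, logical 0:", dict(patch_distribution(0, 2)))
print("parity code, first 2 bits, logical 1:", dict(patch_distribution(1, 2)))
# Both marginals come out identical, so a 2-bit patch reveals nothing.

# Repetition code for contrast: the only codewords are all-zeros and all-ones,
# so even the first bit leaks the logical value.
rep0, rep1 = (0,) * n, (1,) * n
print("repetition code, first bit:", rep0[0], "for logical 0 vs", rep1[0], "for logical 1")
```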