A note for the savvy: A quantum computer is not a magic bit-string that mysteriously flips to the correct answer. An n-qubit quantum computer is not like 2^n phantom computers running at the same time in some quantum superposition phantom-zone. That's the popular misconception, but it's effectively ignorant techno-woo.
Here's what really happens. If you have a string of n qubits, when you measure them, they might end up randomly in any one of the 2^n possible configurations. However, if you apply some operations to your string of n qubits using quantum gates, you can usefully bias their wavefunction, such that certain configurations become much more likely to appear when you measure. (You can't have too many of these operations, however, as that runs the risk of decoherence.) Hopefully, you can do this in such a way that the configurations you've biased toward are the answer to a problem you want to solve.
So then, if you have a quantum computer set up this way, you can run it a bunch of times, and if everything goes well, after enough iterations you will be able to notice a bias towards certain configurations of the bit string. If you can do this often enough to get statistical significance, then you can be pretty confident you've found your answers.

https://www.youtube.com/watch?v=IrbJYsep45E

https://www.youtube.com/watch?v=wUwZZaI5u0c
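If it helps to make that concrete, here's a tiny statevector sketch in plain NumPy. It's a toy of my own construction (a 2-qubit register and essentially one Grover iteration, with the "answer" hard-coded as configuration 11), just to show the "bias the amplitudes, then sample" loop:

```python
# Toy illustration: bias the amplitudes of a 2-qubit register toward one
# configuration, then sample it repeatedly. Hypothetical example, not any
# particular real device or library.
import numpy as np

n = 2                 # number of qubits
N = 2 ** n            # number of basis configurations
target = 3            # configuration we want to bias toward (binary 11)

state = np.zeros(N)
state[0] = 1.0        # start in |00>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
state = np.kron(H, H) @ state                  # uniform superposition

oracle = np.eye(N)
oracle[target, target] = -1                    # flip the sign of the target amplitude
diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)   # inversion about the mean

state = diffusion @ (oracle @ state)           # one Grover iteration biases |11>

probs = state ** 2                             # measurement probabilities
samples = np.random.choice(N, size=1000, p=probs)
print(np.bincount(samples, minlength=N))       # essentially all samples land on 3 (|11>)
```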
EDIT: I rather like Isaac Arthur, but unfortunately, his Quantum Computing episode is an example of exactly this kind of popular misconception. I've called him out on it in comments.

https://www.youtube.com/watch?v=wgCuKTN8sX0
EDIT: I can't find my comment anymore, and I've also discovered that I'm now banned from the public Facebook group! Hmmm.
EDIT: It seems that Isaac did correct his video, kind of. He still seems to advocate the 2^n parallelism, but then explains why that can't work around 18 minutes in.
I'm not sure it's useful anymore to point to quantum computers and say: 'Hey! You're not getting all your function evaluations back... so they're weak.' I feel this leads to misconceptions about quantum computing as well. You have to remember, the tensor product of N qubits is an exponentially expanding space... and the functions DO get evaluated in that space, the trick being you have to extract a JOINT property of the function evaluations, rather than the result of each one individually. Let's be clear, even with this caveat, the potential of quantum computing is enormous. I've personally read Kalai's papers and, in my opinion, his objections boil down to 'it's just too good to be true'... but hey, we'll see :)
the trick being you have to extract a JOINT property of the function evaluations, rather than the result of each one individually.
This is an even more succinct way of saying it. This is the key point people need to know; it's what separates understanding the real potential of quantum computers from "quantum woo."
Let's be clear, even with this caveat, the potential of quantum computing is enormous.
No question. But to think clearly about it, one has to know this caveat. Likewise, to understand the potential of conventional computers, one needs to actually be able to think about computation within its realistic limits, not treat them as magic boxes.
Your answer doesn’t actually explain what we can do on a quantum computer that we can’t on a traditional computer.
The answer: we don’t know. We don’t even know if BQP is actually any bigger than P.
However, we’ve figured out how to do some things quickly on quantum computers that we haven’t ever figured out how to do quickly on classical computers, like certain mathematically useful operations on abelian groups.
One specific example is that you can do a Fourier transform on a length-n vector in O(log(n)log(log(n))) gate operations rather than the O(n log(n)) operations a classical FFT needs, which is pretty cool. If your vector space is too big to even represent in classical memory, you might still be able to work with it on a quantum computer.
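To get a rough feel for those numbers, here's a back-of-the-envelope comparison; the exact-QFT count (n Hadamards plus n(n-1)/2 controlled rotations) and the ~n log n approximate-QFT count are the usual textbook figures, and everything else is just arithmetic:

```python
# Rough operation counts, purely illustrative: classical FFT on a length-N
# vector versus the quantum Fourier transform on n = log2(N) qubits.
import math

for n_qubits in (8, 16, 32, 64):
    N = 2 ** n_qubits
    fft_ops = float(N) * n_qubits                            # ~ N log2(N) classical ops
    qft_exact = n_qubits + n_qubits * (n_qubits - 1) // 2    # n Hadamards + n(n-1)/2 rotations
    qft_approx = n_qubits * math.ceil(math.log2(n_qubits))   # ~ n log n gates (approximate QFT)
    print(f"n={n_qubits:3d}  FFT ~{fft_ops:.2e} ops   "
          f"exact QFT {qft_exact} gates   approx QFT ~{qft_approx} gates")
```

The caveat from elsewhere in this thread still applies: the QFT acts on amplitudes you can't read out directly, so this is not a drop-in replacement for a classical FFT.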
> Your answer doesn’t actually explain what we can do on a quantum computer that we can’t on a traditional computer.
Why would there be anything you could compute on a quantum computer that you can't compute on a classical one, provided the classical one is powerful enough or given enough time? (Quantum computers are merely more efficient at certain algorithms.)
Is there speculation that a quantum computer could be used for hypercomputing?
They meant due to practical limitations on the speed of the computer and/or the amount of time you can afford to wait for the answer.
They were not saying that there may be computations that can be done with finite space and finite time on a quantum computer that cannot be done in any finite space and finite time on a classical computer.
They were talking about computational complexity, not computability.
The fundamental observation behind quantum computing is that an irreversible operation (such as erasing a bit) releases a minimum amount of heat, while a reversible one need not. See https://en.wikipedia.org/wiki/Landauer%27s_principle for the theoretical limits to classical computing based on this. But there are, to the best of our current knowledge, no theoretical limits to how much computation can happen reversibly.
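For a sense of scale, a quick check of that bound (assuming room temperature, 300 K):

```python
# Landauer's bound: the minimum heat released by erasing one bit is k_B * T * ln(2).
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # assumed room temperature, K
E_bit = k_B * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {E_bit:.3e} J per erased bit")
# ~2.9e-21 J, many orders of magnitude below what today's transistors dissipate per switch.
```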
However, the requirement that the operations all be completely reversible is very strict. For example, logic operations like "and" and "or" are off the table. BUT when physicists like Richard Feynman looked into it, what DOES happen is that your computation can progress in a quantum superposition of states. And there is no upper limit to how many states are in the quantum superposition.
So in essence what you get should be a very hard to program but unbelievably parallel computer with no upper limit on computational speed.
The first demonstration that this could be useful for problems that people care about was https://en.wikipedia.org/wiki/Shor%27s_algorithm for factoring integers. The fact that we do not know how to factor integers quickly is an underpinning of public key cryptography algorithms like RSA. However, we would know how to factor quickly if we had a quantum computer. And this is just engineering, right?
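Worth noting: the quantum part of Shor's algorithm only finds the period of a^x mod N; turning that period into factors is ordinary number theory. A toy sketch with the quantum step replaced by classical brute force (N = 15 picked purely for illustration):

```python
# Sketch of the classical post-processing in Shor's algorithm: given the period
# r of f(x) = a^x mod N, the factors of N usually fall out of a gcd. A quantum
# computer's only job is finding r quickly; here we brute-force it for a toy N.
from math import gcd

def period(a, N):
    # classical stand-in for the quantum period-finding step
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

N, a = 15, 7                # toy example: factor 15 using base 7
r = period(a, N)            # r = 4
assert r % 2 == 0 and pow(a, r // 2, N) != N - 1   # the "lucky" case Shor relies on
p = gcd(pow(a, r // 2) - 1, N)
q = gcd(pow(a, r // 2) + 1, N)
print(r, p, q)              # 4, 3, 5  ->  15 = 3 * 5
```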
Everyone recognizes that it is a very hard engineering problem. But the minority opinion laid out in this article is that the engineering problem is not just a practical problem, but the physics makes it intrinsically hard in a way that cannot ever be worked around. No matter how well a quantum computer works in theory, it is physically impossible for us to build and operate one that does better than the classical computers that we already know how to build.
Okay, first of all, we are nowhere near Landauer's limit right now, and will likely never reach that point because we'll run into thermal noise "danger zones" far before then.
Second of all, there are plenty of non-quantum forms of reversible logic (and they are probably the primary ones).
In the slightly less exotic category there is also adiabatic computing which can actually be done in standard CMOS in most flavors.
"Solve problem x in polynomial time". AIUI, BQP is thought to be a superset of P and to include some stuff from outside NP. Specific problems that are part of BQP but that we expect are not in P is the interesting response here.
An example of such a problem is integer factorisation (see: Shor's Algorithm), which means that RSA and similar schemes might become vulnerable once suitably large quantum computers are available.
Your question is too simple to have a correct answer.
You can represent any unit vector in a vector space with 2^n basis elements using only n qubits.
Constructing an arbitrary vector might not be more efficient than doing it on a classical computer, but constructing certain vectors that would probably be very large and/or unwieldy on a classical computer is possible. For example, it’s easy to represent any single standard basis vector in a vector space of size 2^64 on a classical computer; just use a 64-bit word to describe which basis vector you’re talking about, plus a weight. Now go ahead and try to use that same representation for a Fourier transform, and you’re out of luck. But this is no problem on a quantum computer. You just end up with a 64-qubit system that has a non-zero amplitude on all of those 2^64 basis states, using only 64 qubits. You can also construct the circuit required to do the Fourier transform with ~64log(64) gates.
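Here's a scaled-down illustration of that point in NumPy, with n = 6 qubits instead of 64 so the vectors fit in memory; the particular basis vector is an arbitrary choice:

```python
# A single standard basis vector is trivial to store classically (just an index),
# but its Fourier transform has non-zero amplitude on every one of the 2^n basis
# states, which an n-qubit register would represent with only n qubits.
import numpy as np

n = 6
N = 2 ** n
basis_vec = np.zeros(N, dtype=complex)
basis_vec[37] = 1.0                        # |100101>, stored classically as the index 37

# Unitary DFT matrix: the linear map the QFT implements (up to conventions).
F = np.fft.fft(np.eye(N)) / np.sqrt(N)
transformed = F @ basis_vec

print(np.count_nonzero(np.abs(transformed) > 1e-12), "of", N, "amplitudes non-zero")
print(np.allclose(np.abs(transformed), 1 / np.sqrt(N)))   # True: all equal magnitude
```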
So for this problem, there’s definitely a space advantage. The trouble is that most of these advantages don’t generalize in any obvious way. Maybe there’s even a representation on classical computers that’s equally efficient and we just haven’t found it yet.
And as you mention below, this isn’t useful for arbitrary linear operations (many of them seem to be just as expensive on a quantum computer as on a classical computer) but there are certain known-to-be-useful linear operations that are extremely cheap to represent compared to what we know how to do on a classical computer.
The essential problem with a quantum computer, as I understand it, is that you can't actually read out the state you've created. Sure, I can create a state which is an equal admixture of all 2^64 basis vectors, but if I try to look at it, I'm going to randomly end up with only one of the basis vectors.
Sure, but you might be able to perform some computation that involves those 2^64 possible states and ends up in some determined state that you can reliably measure. So while you can't store 2^64 classical values in a 64-qubit register, neither can you necessarily simulate that register exactly with even 2^64 classical values.
Imagine you have a classical bit string 0101. There are 2^4 possible different configurations of this string, and the classical computer will always exist in ONE of them.
A quantum computer, with a qubit register 0101, can be put into a superposition state between all 4 qubits, where the state of the qubit register is ALL 2^4 different configurations simultaneously....now scale that up to 50-100-1000 qubits and you get the idea. The quantum computer has a ridiculous advantage in terms of holding state spaces.
That argument is patently wrong: it's the quantum bullshit fallacy.
The point of the space argument I was making was that, if you need N bits of space to represent the input/output in classical computers, you should still need (I'm not sure if it's the case, but I don't see why it's not) Ω(N) qubits to do the same representation in a quantum computer.
The quantum bullshit fallacy is that, since an entangled N-qubit is represented as a 2^N-dimension vector with complex arguments, it's really a computation on 2^N elements in O(1) time. The first sign that this is fallacious is that the basis here isn't of size 2^N but of size 2^N - 1. We're still only capable of reading N bits of information out of an N-qubit register; the fact that classical simulation requires a much larger state space to compute the probability doesn't mean that there's an inherent ability to freely vary through all of those states to represent the entire state space.
Honestly, I don't understand the point you are making. If you can explain to me how the Deutsch-Jozsa algorithm works without making reference to the fact that a function evaluation happens on a qubit in superposition (and thus in an expanded state space, as the simplest example), sure. I sincerely don't understand your points about classical simulation, or input/output in classical computers.
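For what it's worth, here's a minimal NumPy sketch of the simplest case, Deutsch's algorithm for a one-bit function, written in the phase-oracle formulation as a simplification; the lambdas are just example functions:

```python
# Deutsch's algorithm (phase-oracle form): one oracle call, acting on a qubit in
# superposition, decides whether f:{0,1}->{0,1} is constant or balanced.
# Classically you need two evaluations.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def deutsch(f):
    oracle = np.diag([(-1) ** f(0), (-1) ** f(1)])   # phase oracle for f
    state = H @ oracle @ H @ np.array([1.0, 0.0])    # H, one oracle call, H
    return "constant" if abs(state[0]) > 0.5 else "balanced"

print(deutsch(lambda x: 0))      # constant -> "constant"
print(deutsch(lambda x: x))      # balanced -> "balanced"
print(deutsch(lambda x: 1 - x))  # balanced -> "balanced"
```

The oracle is applied once, to a superposition of both inputs, and the answer comes from a joint property (constant vs. balanced), not from reading out f(0) and f(1) individually.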
As you say, N qubits can be thought of as a distribution over its possible 2^N values, but you can't ignore the starting distribution, which may be very far from the set of classical bits that you want to operate over (and therefore may take an exponential number of quantum operations to realize).
So one very simplified way to think of quantum computation is that you can do parallel operations on a distribution over the 2^N possible values of your classical bits, but you don't get to specify an arbitrary starting distribution; you can only start with relatively 'simple' distributions.
This restriction is what makes the claim 'you can compute over 2^N values simultaneously' at best very incomplete — yes, you can do that, as long as you're fine not being able to start from arbitrary starting values. But this is a big restriction!
What do you mean exactly by starting distribution? I'm definitely not saying you get the answer to every problem in a single evaluation step, no one ever said that.
A superposition (a bit of an oversimplification, since there are also complex numbers involved and a normalization constraint) is in some sense a probability distribution over the possible (classical) values that you get when you measure those N qubits.

E.g. if you measured this N-qubit state many times, which of the 2^N possible classical values would you get? Would it be the uniform distribution? Biased towards particular classical bit patterns?

Note that there are 2^N possible classical bit patterns, so an arbitrary collection of these patterns would take O(2^N) operations to define.
> I'm definitely not saying you get the answer to every problem in a single evaluation step
Yes, but my point is that if your input is an arbitrary collection of classical N-bits, then in general you will require an exponential number of quantum operations to set it into an N-qubit initial state, which means you get zero speedup.
Quantum computers only see speedup on inputs that require a relatively small number of quantum operations to set up. 'Small' could mean polynomial. But it's true that the space of 'easy to set up' initial N-qubit states is much smaller than the space of all possible N-qubit states, which is why a quantum computer cannot simply be considered 'a magical computer that computes on 2^N bits at once' without considering how you get those bits in or out of the damn thing.
> Yes, but my point is that if your input is an arbitrary collection of classical N-bits, then in general you will require an exponential number of quantum operations to set it into an N-qubit initial state, which means you get zero speedup
Can you provide a reference for this? My understanding is that you can efficiently set up qubits using a Hadamard transform.
Yes, but the Hadamard transform is essentially a linear operation.
The span of qubit states reachable in a polynomial number of Hadamard transform operations from the starting state is much less than all possible qubit states.
This follows immediately from information-theoretic principles. There are 2^(2^N - 1) possible 'collections' of classical bit patterns representable by N qubits, so it takes 2^N - 1 bits of information to specify an arbitrary one; but since the Hadamard transform is basically linear, we can see (in a very handwavy way) that only a polynomial number of possible states is reachable from a polynomial number of quantum gate operations.
This is verbatim from 'Explorations in Quantum Computing' by Colin P. Williams:
"The utility of the Hadamard gate derives from that fact that by applying, in parallel, a separate Hadamard gate to each of n qubits, each initially in the state |0⟩,
1 H|0⟩⊗H|0⟩⊗···⊗H|0⟩= √
|j⟩ (2.20)
we can create an n-qubit superposition containing 2n component eigenstates. These eigenstates represent all the possible bit strings one can write using n bits. The importance of this capability is often overlooked. But, in reality, it is one of the most important tricks of quantum computing as it gives is the ability to load exponentially many indices into a quantum computer using only polynomially many operations. Had Nature been unkind, and had we had to enter the different bit-strings individually, as we do in classical computing, then quantum computing would have had far less potential for breakthroughs in computational complexity."
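A quick numerical check of equation (2.20) for a small n (plain NumPy; n = 5 is an arbitrary choice):

```python
# Applying a Hadamard to each of n qubits starting in |0> yields equal amplitude
# 1/sqrt(2^n) on every one of the 2^n basis states: n gates to "load" 2^n indices.
import numpy as np
from functools import reduce

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

n = 5
zero = np.array([1.0, 0.0])
state = reduce(np.kron, [H @ zero] * n)          # H|0> ⊗ H|0> ⊗ ... ⊗ H|0>

print(state.shape)                                # (32,)
print(np.allclose(state, 1 / np.sqrt(2 ** n)))    # True: uniform superposition
```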
I'm just talking about loading a quantum register, there are other issues with gate times and decoherence.
If you have an N-bit classical computer in a black box, you get at most 2^N possible outputs. If you have an N-qubit quantum computer in a black box, you can also get at most 2^N possible outputs, not 2^(2^N). We write the N-qubit register's state as a vector with 2^N complex amplitudes to make the math work, but measurement means we still can only distinguish between 2^N possible values.
Well, only in the same sense that the vector (1, 1) is "both (1, 0) and (0, 1) simultaneously." It's still a single vector in a 2^N dimensional space. It just happens to be expressed in terms of a basis that makes it look complicated.
This illustrates the disconnect of mathematicians trying to do physics. Not only is his error model unrealistic (you essentially need a frequency-dependent power spectral density function to represent fluctuations in control, one that decays according to a power law at high frequencies, sometimes called 1/f noise even though the power doesn't have to be 1; you also don't get that kind of strong spatial correlation in real systems, where cross-talk among spins is more of a realistic problem, but that is not what he's modelling either), his argument is based on something that is just wrong:
> that the effort required to obtain a low enough error level for any implementation of universal quantum circuits increases exponentially with the number of qubits, and thus, quantum computers are not possible.
That's what dynamical decoupling (DD) or pulse sequences are for: you can get arbitrarily high-quality quantum gates (typically, but not always, at the cost of increasing the gate times; removing the most significant first-order error typically increases the gate time by less than an order of magnitude, think 2x-4x) without increasing the number of physical qubits at all. People don't just rely on surface codes; anyone serious about implementing a quantum computer uses surface codes after DD to reduce the infidelity to the threshold they require. Which is why you don't need hundreds of physical qubits to have a single robust logical qubit.

For those who are not familiar, DD is one of the oldest tricks in the bag; it's nothing like a new, cutting-edge type of quantum error correction code. In fact, the oldest form of DD, spin echo, precedes any real discussion about quantum computers by a decade.
DD is possible essentially because quantum operations don't commute so errors don't simply add up as they do with classical errors; this makes it possible to obtain a better gate by carefully combining noisy gates such that (significant) errors cancel.
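For anyone curious, the spin-echo idea fits in a few lines of 2x2 matrix algebra. This is a toy sketch: the "free evolution" is idealized as a pure Z rotation by an unknown angle, which an X pulse flips in sign so the two periods cancel:

```python
# Spin echo in miniature: an unknown static dephasing Rz(theta) acting twice is
# undone by inserting X pulses, because X Rz(theta) X = Rz(-theta).
import numpy as np

def Rz(theta):
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

X = np.array([[0, 1], [1, 0]], dtype=complex)

theta = 0.731                            # unknown-to-us dephasing angle per period
naive = Rz(theta) @ Rz(theta)            # no pulses: error accumulates to Rz(2*theta)
echo = X @ Rz(theta) @ X @ Rz(theta)     # free evolve, X pulse, free evolve, X pulse

print(np.allclose(echo, np.eye(2)))      # True: the error is refocused
print(np.allclose(naive, np.eye(2)))     # False: the error has doubled
```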
Geezus. I have a degree in CSC from an engineering college. I did well in both math and physics; hell, I even enjoyed them. I've been working professionally for 25 years (with considerable success!).
I'm not sure if I could distinguish what you just said from a markov-chain-based paper generator.
I wouldn't expect you to be familiar with it unless you've actually worked in academia doing research in quantum information. This isn't stuff we teach in class at all, even to PhD students; it's a part of research.
While it's about flux qubits, the 1/f behavior is almost universal in all current promising candidates, and DD works in virtually all quantum computers.
I love that the journalist asked “What do your critics say to that?”, and that the researcher gave a meaningful, non-disparaging reply. This should be expected from everyone doing research work, but is sadly far from the norm.
It’s along the same lines as, in a debate, asking each participant to sum up their opponent’s point of view in a way that they agree with. A fundamental tool for productive discourse.
Scott Aaronson still has his $100,000 wager available for an actual refutation of scalable QC. The impetus for the bet actually came from arguing with Gil Kalai in the comments on a blog.
Whether or not you can make a scalable quantum computer is beside the point, though. The question is, can you make a quantum computer that is competitive with a classical computer, speed and/or cost wise.
I suspect that any speedup imparted by clever uses of entanglement will be counteracted by the fundamental need for error correction and averaging over several runs. I look forward to being proven either right or wrong, and I suspect I will be in my lifetime.
'I suspect that any speedup imparted by clever uses of entanglement will be counteracted by the fundamental need for error correction and averaging over several runs.'
Perhaps you don't understand what people mean when they say scalable quantum computing? They mean that it scales with the error correction included. Needing to average over several runs is already factored into the complexity of quantum algorithms.
Also, whether you can make a scalable quantum computer isn't beside the point. It's the entire point Kalai's trying to make. If you can make a scalable quantum computer, we already know there are applications where it will outperform a classical computer on performance and cost.
Perhaps I don't, but your description doesn't seem to contradict my point that scalability is distinct from competitive with classical computing. Perhaps people also implicitly mean "scales faster than a classical computer", in which case I did indeed misunderstand.
On the topic of whether or not the need for averaging is already accounted for in algorithmic complexity, my understanding is that it is not. This [1] preprint from 2006 seems to support my understanding. If you have evidence to the contrary, I would greatly appreciate a link.
The paper you linked to is just saying that the need for several runs and 2006 readout error rates together led to awful scaling (though still better than classical factorisation). That isn't surprising. Error rates in all parts of even modern quantum computers are still too high to scale well. Especially when repetition is called for.
Ultimately, complexity is calculated using logical qubits. Kalai doesn't think that even with error correction codes we can get close enough. Most other people think we can. The paper you linked to is just making the point that we're definitely not there yet. When we are, algorithms in BQP will still need several runs on many problems (the exact same way algorithms in BPP do), but that should already be incorporated into those algorithms' complexity.
The flip side of this argument is that if a quantum computer can never show "quantum supremacy" over a classical computer, then perhaps there are much better ways of simulating quantum phenomena with classical computers than we've yet discovered.

It's important to remember that the motivation for quantum computing came from the realization that it's intractable to simulate quantum phenomena on classical computers. So any claim that classical computers can do just as well as quantum ones needs to butt up against this realization at some point (IMHO).
I think he is referencing several theorems about fault-tolerant computing. I think the reasoning goes that if we assume some level of correlated noise, then these theorems tell us we need an error correcting code that runs in O(e^n), where n is the number of qubits in the program you want to check for errors. I could be wrong though; see my comment above where I link to the paper.
I don't buy this argument, for one reason and one reason alone. When we started building integrated circuits on silicon, we had similar problems of noise and distortion due to circuits packed so close together. There was always scope for corruption of signals, and the problem got amplified when these circuits started working at higher frequencies. A high-end processor these days can run at 4 GHz; data transactions at such a high frequency already create distortion, so we employed many schemes to circumvent these things, most notably different wire interconnects, going smaller, and various chemical shielding. His key argument revolves around noise/distortion, which is not a good argument. If the math doesn't work, then it's a real problem. The massiveness argument is also not correct, because the same was said of computers, that we would never be able to build smaller ones, and yet we did.
"There is a Hebrew proverb that says that trouble comes in clusters." In France, we have a famous quote from Jacques Chirac "La merde, ça vole toujours en escadrille !" (Shit always flies in squadrons! ).
Is there a distinction being made between the quantum computer being impossible, and it being impossible without a future breakthrough in engineering or physics?
If it really is exponentially more difficult to correct for noise as the number of qubits increases, then it will be impossible to build a useful quantum computer. We may not be able to say whether it's impossible after 24 qubits, or after 29, or after 32, etc., but we may be able to know that it's impossible to build anything useful, if someone can rigorously mathematically prove this.
On the other hand, if nobody can provide a mathematical proof that it's impossible, it could just be engineering. Then again, "just engineering" doesn't rule out that a useful quantum computer may require practically impossible configurations (i.e., having to be in the middle of a sphere of lead dozens of miles in radius cooled to near absolute zero is not necessarily impossible, but isn't something we're going to build anytime soon), or that we simply won't be able to figure out the requisite configurations.
Alternatively, this guy will prove to be wrong and someone will just build a quantum computer with a couple hundred qubits, at which point we may have enough data to observationally draw the curve rather than via math and the simplifications required to make it tractable, and maybe it'll be fine.
Gil Kalai can have his contrarian opinion on quantum computers. But we can have a much simpler argument against quantum computers. That they will remain infeasible for a long enough time for traditional computers to catch up to it.
What I mean by traditional computers catching up with it is that we will see innovations in traditional algorithms that either make quantum computers extremely pricey for a very long time in respect to traditional computers or that we find innovations that will bypass the things that they are good at without violating any of the current theories we have.
Currently, we have an enormous effort invested in our silicon architecture. We seem to be hitting the limits of silicon, and the industry seems to be making a large bet on quantum computers to eventually allow it to proceed. But if quantum algorithms take a long time to have useful implementations, then we have a local maximum where quantum computers aren't close enough to implement without investing a massive amount of money, and silicon just becomes cheaper and cheaper. When Intel finally reaches the limit of traditional computing, that doesn't mean silicon will stop getting cheaper.
Intel, IBM, Microsoft, DWave, Google and other companies have small bets on quantum computer engineering strategies that may or may not pay off. There also seem to be a bunch of researchers publishing more and more papers on the theory of quantum algorithms. There are cross-collaborations with academia on quantum computation (and they are basically how the research institutions work with the quantum engineering departments in those companies).
If we find that quantum computers can only speed up computations for a limited set of algorithms, then they will probably end up as accelerator cards attached to traditional computers. I expect an order of magnitude of inefficiency to translate problems to, or do work on, quantum computers, either due to the interconnect or the way we figure out how to operate them. So we will see these systems relegated to supercomputer workloads.
On the algorithms side, funding the research of quantum algorithms will probably dry up in the way that string theory research has.
Hopefully we will see a large breakthrough that will make quantum computers feasible. From my reading of the research it seems like Topological Quantum Computers have the best chance.
> That they will remain infeasible for a long enough time for traditional computers to catch up to it.
Doesn't work for things we know are exponentially faster on quantum computers. Eventually, with enough (but not an absurd number of) qubits, a decent quantum computer can solve problems that wouldn't be possible on a classical computer the size of the universe operating for billions of years.
Improvements in algorithms cut both ways, too. And for some classes of problems, it's possible to prove (given P != NP, etc.) that any classical algorithm is much worse than the quantum one.
We don't know of any such thing. There is nothing proven to be NP-hard that can be done in polynomial time with a quantum algorithm. Factorization isn't proven to be NP-hard.
In short, it's hard to classically model pretty simple quantum mechanical systems and sample the probability distribution of outcomes. Using a quantum computer to simulate the system and sample the probability distribution is easy.
> Gil Kalai can have his contrarian opinion on quantum computers. But we can have a much simpler argument against quantum computers. That they will remain infeasible for a long enough time for traditional computers to catch up to it.
IIUC and IIRC (both somewhat doubtful admittedly), Feynman argued -
https://people.eecs.berkeley.edu/~christos/classics/Feynman.... - that only a quantum computer can perform a scalable simulation of quantum physics. Assuming that is correct, there is no option for "traditional computers" to catch up for all problems at least.
Yes, they do handle decoherence better. Here is a video explaining the theory behind the particles that are used to perform topological quantum computation [0]. I emphasize that unlike superconducting qubits, quantum dots, ion traps, photonics, etc., topological quantum computers have not been physically realized, because non-abelian anyons have not been detected.
...they have, if you believe this (fairly convincing!) set of evidence showing perfect Andreev Reflection (robust to changes in gate potential), which should only be possible (in a way robust to changes in gate potential) in the presence of Majorana states (i.e. a type of non-abelian anyon):
https://quantumfrontiers.com/2017/11/21/majorana-update/
Yes, they are theoretically the most elegant way to deal with decoherence. They rely on an exotic state of matter, non-abelian anyons (many are still working on verifying their existence), and on winding discrete non-abelian anyons into a topological braid, which disperses the quantum state spatially, making it much more resilient against decoherence (obvious simplifications made). This is the specific type of quantum computation that Microsoft is pursuing, à la Michael Freedman and co.
What's interesting about topological quantum computing is that these Majorana pairs are resistant to disturbances for the same reason that Cooper pairs are in superconductors.