Hacker News
Learn Quantum Computation (qiskit.org)
177 points by headalgorithm on Dec 30, 2020 | hide | past | favorite | 61 comments



Just last week I finished a course on Quantum Computing on Coursera[1].

I found the ideas of true randomness and the multiverse really fascinating. However, I could not understand the following and would greatly appreciate it if someone could help me out:

-- The need for complex numbers in describing a quantum bit. Why do we have to use complex numbers when the system is very much real? What is the real world interpretation?

-- The fact that an n-qubit quantum computing system is significantly more powerful than a classical system because of its 2^n states. But a classical system with n bits will also have 2^n states, won't it?

[1] https://www.coursera.org/learn/quantum-computing-lfmu


> The need for complex numbers in describing a quantum bit. Why do we have to use complex numbers when the system is very much real? What is the real world interpretation?

The world isn't "real" (people rightfully find imaginary numbers a bad name, but real is equally bad). Complex numbers show up all over in physics. As to why, who knows? We just model what nature does, and it turns out that complex numbers form a very simple (relatively speaking) accurate model.

> The fact that an n-qubit quantum computing system is significantly more powerful than a classical system because of its 2^n states. But a classical system with n bits will also have 2^n states, won't it?

An n-qubit pure quantum state can be fully described using 2^n complex numbers, and thus has 2^(n+1) real degrees of freedom, which is a vastly bigger space than n bits. While quantum amplitudes are distinctly not probabilities (and are more powerful), you can think of the difference in state space roughly by comparing storing one n-bit number vs. storing one specific probability distribution over all n-bit numbers.
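A quick sketch of this bookkeeping (my own illustration in Python/numpy, not from the thread): an n-qubit pure state needs one complex amplitude per basis state, i.e. a length-2^n vector, while n classical bits are just n values.

```python
import numpy as np

n = 10  # number of qubits / bits

# An n-qubit pure state: one complex amplitude per basis state.
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0  # the |00...0> basis state

# n classical bits: just n binary values.
bits = np.zeros(n, dtype=np.uint8)

amplitude_count = state.size            # 2^n = 1024 complex numbers
real_parameter_count = 2 * state.size   # 2^(n+1) = 2048 real numbers
```

Already at 10 qubits the description is over a thousand complex numbers, versus 10 bits classically.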


It's not a mystery, but after generations of lecturers teaching something they never learned themselves, physicists have forgotten the history of it.

Complex numbers turn up a lot because exp(2πiθ) is a convenient way to model any kind of rotation or cyclic process.
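A tiny numerical check of this picture (my own sketch): multiplying by exp(2πiθ) rotates a point in the complex plane by the fraction θ of a full turn.

```python
import cmath

theta = 0.25  # a quarter of a full turn
rotation = cmath.exp(2j * cmath.pi * theta)

z = 1 + 0j              # start on the positive real axis
rotated = z * rotation  # a quarter turn lands on the imaginary axis
```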

It turns out that there are lots of cyclic processes in physics, such as atomic orbitals or waves in fields.

Similarly, vector algebra is just terrible at modelling physical spaces, especially 2D or 4D ones. It just happens to work okay in 3D, and even there it has all sorts of sharp edges, such as gimbal lock.

Geometric algebra is a lot better at modelling physical spaces of arbitrary dimension, and one of its strengths is the ability to take various interesting sub-spaces and use them without having to make changes to formulas.

Both the complex numbers and the quaternions are such subspaces of matching geometric algebras, so it's no surprise that they "turn up" a lot in the algebras of geometric spaces.

Interestingly, some of the other subspaces of a GA are isomorphic to the various complex-valued matrices used in physics, such as the Pauli matrices and the Gell-Mann matrices.

These matrices are introduced to students as essentially scalar-valued black boxes. "Don't worry about it, they just have the 'right' properties." is commonly heard in lectures.

This seems like mathematical magic, but no magic is necessary. A much more consistent and logical theory based around GA could have been used, but wasn't for merely historic reasons.

Unfortunately, lots of people will say that these alternative formulations are all isomorphic to each other, so who cares - just shut up and calculate. However, one method yields an intuitive understanding, while the other yields complex numbers all over the place with no clear picture of their origins...


Although they don't affect models due to the isomorphic quality you describe, these historical representational accidents seem to have a powerful effect on our intuition.

I am reminded of the conceptual differences between the Copenhagen and Bohmian interpretations. Using one or the other basically does not change our models or results, but what effect do these different perspectives have on the field?


Bohmian mechanics (pilot-wave theory) is not the same as traditional QM; it is a stronger theory from which traditional QM can be derived (of course, we don't yet know whether it's correct).

Examples of interpretations of QM that are mathematically equivalent are Copenhagen and Many Worlds. Examples of theories that are stronger than (make more predictions than) QM are objective-collapse theories, de Broglie-Bohm pilot-wave theory, and superdeterminism.


> A n-bit pure quantum state can be fully described using 2^n complex numbers

Why 2^n? Isn't it 2 numbers per state (real and imaginary parts)? So it would be 2+2+2...+2 (n times) = 2n total numbers


Consider how the number of states is calculated: if you have n qubits, at the time of measurement those can be in 2^n distinct (basis) states. This is similar to how, in the classical case, you can use n bits to express at most 2^n different states.

Which one of the basis states we end up measuring depends on the squared magnitude of that state's complex amplitude (the Born rule). Therefore, if you were to simulate a quantum computer, in the general case you would need to keep track of all 2^n of these complex amplitudes. Because of the real and imaginary parts, this amounts to 2*2^n = 2^(n+1) numbers.
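That bookkeeping is exactly what a toy simulator does. A sketch in numpy (my own illustration): keep all 2^n complex amplitudes and sample a basis state with probabilities given by their squared magnitudes.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

n = 3
# Uniform superposition over all 2^n basis states.
amplitudes = np.full(2**n, 1 / np.sqrt(2**n), dtype=complex)

# Born rule: measurement probabilities are squared magnitudes.
probs = np.abs(amplitudes) ** 2

# Simulated measurement: pick one basis state index.
outcome = rng.choice(2**n, p=probs)
```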


Because the state space of an n-qubit system is the tensor product of the single-qubit state spaces.


> Why do we have to use complex numbers when the system is very much real

If you don't use complex numbers you end up with a bunch of maths that looks suspiciously like complex numbers. The algebraic properties of complex numbers make it a very natural way to express quantum mechanics.

I believe 3Blue1Brown has some good videos on the topic - you really need to learn about complex numbers separately to see the magic of them (it's largely visual, so it's hard to convey via text).

Edit: The book "Visual Complex Analysis" is very good if you resonate with its approach. I personally simply cannot click with almost any geometry, so it wasn't really for me, although I appreciate that it's very nice.


> The need for complex numbers in describing a quantum bit. Why do we have to use complex numbers when the system is very much real? What is the real world interpretation?

This isn't an answer to your question but an analogy. Negative numbers probably seemed strange when you learned about them as a child. After all they don't seem to exist in the "real" world. But they're still a useful mathematical tool for turning subtraction into addition, among other things. (And negative numbers do have real-world connotations, like floor numbers below ground level.)

Complex numbers are exactly like that: They are a useful tool for extending the idea of multiplication to "stuff that rotates or behaves like a wave", among other things.

Don't read too much into the names; "complex" and "imaginary" are meaningless labels that simply let us keep track of how multiplication and addition differ from those operations on so-called "real" numbers. Just like "negative" numbers should not be considered "less desirable" or "more pessimistic" than positive numbers, the everyday meaning of the labels doesn't apply in a math context.


To your first question, two things. First, it could be all real, sure, and that's called an analog computer (not to be confused with an analog circuit). That subject has been studied too, but hasn't been terribly fruitful in terms of realizable improvements over classical. Second, it's complex because that's how physics works (we think). Scott Aaronson wrote up an article about what the alternatives might be: https://www.scottaaronson.com/blog/?p=4021

To your second question, the quantum system can represent all N states simultaneously (until you observe it) because of entanglement. The classical system can only represent one at a time.


A quantum computer with real amplitudes is not an analog computer; the two are quite different. Note that a quantum computer with real amplitudes is just as powerful as one with complex amplitudes - the complex amplitudes are actually inessential.

Also, your answer to the second question fails to distinguish a quantum computer from a classical probabilistic computer. The key here is really cancellation of amplitudes, not the larger state space.


Representation of all N (where N = 2^n) states simultaneously is due to superposition, not entanglement.


Complex numbers are just a mathematical convention. You can't have -1 apples, so you could say negative numbers are equally weird. Complex numbers are just weird in a different way.

Complex numbers are weird because we start with the square root of a negative number, and then it turns out the same rules we invented for that also represent rotations in a plane.

That sqrt(-1) corresponds to a 90-degree rotation of a unit vector in the plane never ceases to amaze me. It's not a trick - all the math works, and it's fairly obvious once one does all the math. But it still feels like a magician pulled a rabbit out of a hat each time I think about it.
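You can watch the rabbit come out of the hat in a few lines (my own sketch): multiplying by i is a quarter turn, and two quarter turns give multiplication by i*i = -1.

```python
z = 3 + 4j           # an arbitrary point in the plane

quarter = z * 1j     # rotate 90 degrees: (3+4j)*1j = -4+3j
half = quarter * 1j  # another 90 degrees: -3-4j, i.e. z * (-1)
```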

This rotation aspect is one of the main reasons why imaginary numbers are used so much in physics.


3Blue1Brown's group theory videos explain complex numbers better than I've ever understood them before.

I wonder if every quantum programmer will need to be a theoretical physicist to some extent. I'm currently reading "Physics from Symmetry", which uses group theory intensively. It has a lot of math, but I'm ready to dive in this time. Excited for the special relativity sections.

I've also been watching Institute of Advanced Study videos lately which are way over my head, but from what I gather, the black hole information paradox has a solution using a method called entanglement wedge reconstruction. The model is holographic, and the goal is understanding the mapping of the physics of a quantum holographic surface onto the "bulk"(spacetime), and that mapping turns out to be very similar to a quantum error correcting code.

https://www.quantamagazine.org/how-space-and-time-could-be-a...


Complex numbers just allow you to add a dimension to the real numbers (so think vectors) while allowing handy operations (rotations). So they end up being useful in all sorts of places, although you don't have to use them.

An n-qubit quantum computer allows you to "visit" 2^n states but to store only a few. An n-bit classical computer allows you to store 2^n states but to visit only a few (for large n).


> The fact that an n-qubit quantum computing system is significantly more powerful than a classical system because of its 2^n states. But a classical system with n bits will also have 2^n states, won't it?

Let's consider the execution of an algorithm. In the classical case you start with _one_ of 2^n states and keep evolving that single state. In the case of a quantum computer you can start with a superposition of all of the 2^n states and evolve them simultaneously. This is called quantum parallelism.

This sounds very powerful but comes with a caveat: at the end of the algorithm you will end up with some other superposition of the 2^n states, but you can only read one of them. Which one it is going to be is decided by the squared magnitudes of the complex amplitudes associated with those states. The greater the squared amplitude, the more likely you are to measure the state associated with it. A key requirement when designing a quantum algorithm is that it should yield a useful state at the time of measurement with high probability. This is in general difficult to do.
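A minimal numpy illustration of steering amplitudes (my own sketch, not from the course): applying a Hadamard gate twice makes the amplitudes interfere so that measurement returns |0> with probability 1.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

state = np.array([1, 0], dtype=complex)  # start in |0>
state = H @ state   # equal superposition of |0> and |1>
state = H @ state   # amplitudes interfere: back to |0>

probs = np.abs(state) ** 2  # Born rule probabilities
```

Real algorithms do the same thing at scale: arrange interference so the useful answer carries almost all of the probability.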


> In the case of a quantum computer you can start with a superposition of all of the 2^n states and evolve them simultaneously. This is called quantum parallelism.

This is a common misrepresentation of how quantum computers work. We can start in one state (e.g. |0^n>, the all-zero n-qubit state) and apply the Hadamard gate to each qubit. Now we are in the universal superposition state (measuring has an equal probability for each n-bit bitstring). Oh no! What happened?

If you are under the impression that quantum computation is 'like classical computation, but in parallel', going from one single state to a superposition is impossible. In other words, that picture doesn't accurately capture what quantum computation does.
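For concreteness, the construction described here (Hadamard on every qubit of the all-zero state) can be written out in numpy; this is my own sketch, not code from the thread:

```python
import numpy as np

n = 3
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # single-qubit Hadamard

# H applied to each qubit: the Kronecker product H (x) H (x) H.
Hn = H
for _ in range(n - 1):
    Hn = np.kron(Hn, H)

zero = np.zeros(2**n, dtype=complex)
zero[0] = 1.0       # the all-zero state |000>

state = Hn @ zero   # uniform superposition: 2^n equal amplitudes
probs = np.abs(state) ** 2
```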


Granted, it's an oversimplification, but I wouldn't call it a misrepresentation. It's one of the (arguably key) things that make quantum algorithms work.

To quote Nielsen & Chuang (section 1.4.2 on page 30):

> Quantum parallelism is a fundamental feature of many quantum algorithms. Heuristically, and at the risk of over-simplifying, quantum parallelism allows quantum computers to evaluate a function f(x) for many different values of x simultaneously. In this section we explain how quantum parallelism works, and some of its limitations.

Then they go on to explain how e.g. the Deutsch–Jozsa algorithm is a good example of this. But I think it's a key component of Shor's algorithm as well, in the period-finding step.


Complex numbers have a very nice property: they let you express addition of angles as multiplication. So anything periodic can take advantage of this property. Even if something isn't periodic, if you can take advantage of the Fourier transform, you can use complex numbers.
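The property in question is e^(ia) · e^(ib) = e^(i(a+b)), easy to verify numerically (my own sketch):

```python
import cmath

alpha, beta = 0.7, 1.9

# Multiplying the exponentials adds the angles.
product = cmath.exp(1j * alpha) * cmath.exp(1j * beta)
summed = cmath.exp(1j * (alpha + beta))
```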


Here are my two cents on the subject:

-- The need for complex numbers (...)?

Complex numbers are just a mathematical tool. They are used for calculations that involve magnitudes and phases, such as modeling a qubit's internal state.

-- The fact that an n-qubit quantum computing system is significantly more powerful than a classical system (...)?

Due to the physical phenomenon of entanglement, quantum computers allow a coherent superposition of multiple states. Quantum algorithms sort of operate on all superposed states at the same time. Classical computing can only handle one state at a time.

A while ago I wrote a blog post explaining how quantum computing works, from a programming perspective, in case you're interested:

- https://thomasvilhena.com/2019/11/quantum-computing-for-prog...


> Quantum algorithms sort of operate on all superposed states at the same time.

This is not entirely correct.

--

Also, from your linked article:

> I will now introduce the Deutsch Oracle algorithm, one of the first examples of a quantum algorithm that is exponentially faster than its classical counterpart on the size of the input.

This is not entirely correct either.

--

See [1] for an authoritative source.

[1] https://www.nytimes.com/2011/12/06/science/scott-aaronson-qu...


Hi, thanks for going through my blog post!

Could you elaborate on why you believe my writing is imprecise? I read the link you provided but couldn't find any divergence from its text.

Here's the line of thought behind my statements if it's not clear to you:

1) Until a measurement is made, leading to the collapse of the quantum state, qubits remain (possibly) in a superposed state. When a quantum gate is applied to these qubits, it affects all superposed states (from a linear algebra point of view).

2) I took this statement from Wikipedia [1] but forgot to link the source in the article: Simon, Daniel (November 1994). "On the Power of Quantum Computation". Proceedings of the 35th Annual Symposium on Foundations of Computer Science: 116–123.

[1] https://en.wikipedia.org/wiki/Deutsch%E2%80%93Jozsa_algorith...


It's not entanglement that allows for coherent superpositions of states but, well, superposition. (Note, however, that entanglement is responsible for decoherence.)


You are correct in that superposition phenomenon is independent from entanglement!

Nevertheless, in addition to supporting coherent superpositions, what makes qubits powerful for information processing is the ability to couple multiple qubits to create entangled, i.e. non-separable, states. This property confers exponential complexity on an N-qubit system, which can then encode and ultimately process 2^N bits of information simultaneously.


1. Complex numbers are especially useful for describing systems involving rotation, or for describing a 2-dimensional space with a single variable: instead of x/y, you can just give x a real and an imaginary component. The entire qubit state space (the Bloch sphere is a great visual) is a sphere of radius 1, and because measurement produces a binary output that splits the sphere in half along some plane, two 2-D vectors can describe the entire space, one 2-D vector per measurement outcome. That said, a single 3-D vector could describe the whole system instead (though it could not capture global phase), but I believe the common two-vector system is favored primarily because of the binary outputs: each 2-D vector has a norm (amplitude) related to the probability of that outcome, and a single 3-D vector system would require more math to extract those amplitudes. Alternatively, the real and imaginary components of each 2-D vector could be written as 4 separate real values (overkill, since we only need 3 real values to describe the whole system, unless we are considering global phase), but does that improve anything? You would still need to respect the norm along each vector, and the values would correspond to mutually orthogonal directions. That effectively describes a complex number.

2. A classical system of n bits also has 2^n _possible_ states, and it "occupies" one of those states at any time, based on the concrete values of the classical bits. A quantum system of n qubits has 2^n _possible_ states, and it occupies X states (where 1 <= X <= 2^n). All n qubits could be set to basis states (say |000...0>), and this would mean that the entire system occupies one single state within the 2^n space. Alternatively, every single qubit could be set to a superposition (like |+++...+>), which means the system is "occupying" all 2^n states via superposition _simultaneously_. Of course, measuring such a system in the 0/1 basis will simply give you a random state from the list of 2^n possibilities. But _before_ you measure, there are amplitudes corresponding to 2^n states. Manipulating those amplitudes toward some computational goal is quantum computing.

A metaphor I've used to help myself understand superposition is flipping a coin. Before flipping, there are two possible outcomes, and I could say that an unflipped coin is "both" Heads and Tails. Once the coin is flipped (even if you haven't looked yet), the coin is either H or T (and _not_ both). Similarly, two unflipped coins have 4 potential outcomes, while 2 flipped coins are again in a single state (one of HH, HT, TH, TT). The space of possible outcomes stays the same, but the flipped coins occupy only a single state in that space. The key is that before measuring (flipping the coin), the only indication a qubit has of its future value is stored in its amplitudes. This is why quantum speedups rely on manipulating interactions between amplitudes, like the amplitude amplification used in Grover's algorithm, or the quantum Fourier transform. If this metaphor isn't terribly useful, I'd encourage you to read others' descriptions of superposition; unfortunately, I haven't found many good ways of describing it in English.

Alternatively: the dimension of a space is the number of vectors in that space that are all simultaneously orthogonal to each other. One great way to prove orthogonality is the Pythagorean theorem. For a single-qubit system, we know that the amplitudes (a and b for simplicity) are related to the probabilities via |a|^2 = P(a) and |b|^2 = P(b). Because the measurement outcome must be either a or b, we can also say that |a|^2 + |b|^2 = 1. This generalizes to multi-qubit systems, where the squared magnitudes of all the measurement-outcome amplitudes sum to 1. This implies that a superposed 2-qubit system (|++>) is described by a 4-dimensional vector in a 4-dimensional space, not just a 1-D vector within that same space. A classical system has no equivalent multidimensional vector, just concrete values (corresponding to a 1-D vector pointing along some axis in this visualization).
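As a footnote to the single-qubit discussion above, here is a sketch (my own illustration in Python, with an arbitrary example state) of recovering the three real Bloch-sphere parameters from the two complex amplitudes a and b, after discarding the unobservable global phase:

```python
import cmath
import math

# An arbitrary normalized single-qubit state a|0> + b|1>.
a = cmath.exp(0.3j) * math.cos(0.5)
b = cmath.exp(1.1j) * math.sin(0.5)

# Normalization: |a|^2 + |b|^2 = 1.
norm = abs(a) ** 2 + abs(b) ** 2

# Strip the global phase, then read off the Bloch angles.
rel = b * cmath.exp(-1j * cmath.phase(a))  # relative phase on |1>
theta = 2 * math.acos(abs(a))              # polar angle
phi = cmath.phase(rel)                     # azimuthal angle
```

Four real numbers go in (two complex amplitudes); only three meaningful real numbers come out once global phase is dropped, and normalization removes one more, leaving the two angles on the sphere.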


I have a question for the folks who are interested in quantum computation: do you feel you would benefit from direct access to a physical quantum computer? In other words, what hands-on experience do you hope to obtain that you cannot get using a quantum simulator or a cloud-based quantum programming environment?


No, nothing. The only thing a real machine is useful for now is testing that the machine works. Other than Google's recent contrived test, any quantum algorithm ever executed will run just as well (or better) on a simulator as on a real machine.

It's going to be like that for a while, even after we're well into the quantum supremacy realm. First, there's nothing you can do with a real machine until error correction is working, which requires 10k(?) qubits. And frankly, there isn't a whole lot you could do with even a real million-qubit machine now if one were to exist. Prime factorization is the big one, but NP-complete problems have no known quantum algorithm to speed them up (and note prime factorization is not NP-complete, so it's possible that no quantum speedup for any NP-complete problem exists, or it's possible that P == NP, in which case....). The real work is in the math and algorithm theory to find solutions for these problems. Coding and running them is actually kind of incidental and "cute".


> And frankly, there isn't a whole lot you could do with even a real million qbit machine now if one were to exist.

Aside from being able to break the majority of asymmetric encryption schemes, in particular RSA, as you mention, via prime factoring.


Mining bitcoin would be a good use, right? Probably would be able to break even on the cost.


In addition to that, Stephen Wolfram hypothesized that QC won't be possible because, as the QC state grows, the energy required for error correction will grow just as fast, so the theoretical exponential speedup will require exponentially more energy; this is why today's QC devices have to split qubits into small groups.


Many people hypothesize similar things, but there is as yet no real reason behind these hypotheses.

Still, it would be an extremely exciting discovery for physics if it turned out that a QC is not physically realizable. It would prove that quantum mechanics is not the final description of the world, and it would likely be a huge step forward for understanding the measurement problem.


I work in a QC lab, and here's my perspective. There is one pretty good reason for executing programs on real hardware: because we're still pre-error-correction, errors are rampant, but they are hard to model correctly. After all, if you could model them, you could simulate quantum mechanics, and then... why were you building the thing? So simulators have to make some strong assumptions. Running programs on real devices can teach us more about the physics of those devices and about how to compile programs for them.

There's one other really good reason: it's fun!


Very insightful! Could you share how many qubits you can use in your lab? I guess one needs at least 3 qubits to run some interesting quantum algorithms?


As a matter of fact, we published just this year on a novel algorithm using just two qubits! The trick is that this algorithm is in some sense "generic" over arbitrary data structures, so we were able to claim victory by tackling the simplest possible case. It was still a challenge in practice, since it required a lot of sneaky tricks and record (to our knowledge) two-qubit gate fidelities.

We're also working with larger devices, but I'm not sure if I'm supposed to comment on those right now :)


Super interesting! Could you share a link to your 2-qubit algorithm paper? I'm also doing quantum research, but on the hardware side, and I'm really interested in algorithm development.


I was interested in quantum computation / programming about a year ago, but failed to really get into it (I'm too dum, I think :)). Mostly, my interest was in understanding the mental model.

I learned Golang about 1.5 years ago, and it totally changed how I think about concurrency. I think about interacting processes in a very different way now as a result - and I was kinda hoping the same thing would happen with ~quantum~. Blarg.


Quantum mechanics is certainly mind-changing, but it can't explain yet what our world really is. The most popular many-worlds interpretation is very far from complete.


Many-worlds is definitely not the most popular interpretation among practitioners, Copenhagen is by far.


Just quantum supremacy experiments. Running things on quantum hardware instead of using a simulator will just be slower and more expensive for the foreseeable future.


My 2c: from a programming perspective, QC isn't much different from GPUs or ASICs: a slightly different model, a slightly different language. For this reason, learning QC now, when there are no QC chips available, is like learning an obscure VHDL dialect to program a not-yet-released chip. If QC chips become available, I should be able to pick up QC programming in a matter of weeks, if necessary.


Do you have any QC experience? Because I strongly disagree with your statement, depending on the level of abstraction that you mean.

At some point in the future certain quantum algorithms will become 'callable' from traditional programming languages, and will be usable by any competent programmer without much hassle. But making the quantum algorithms themselves is quite a different ballgame altogether, very much unlike traditional programming.


I'm a pragmatic person. If QC mental model is so complex that I can't learn it in a few weeks, then very few in the world can and QC will remain a subject of scientific papers understood by a few.


haha i am waiting for a smart nerd to answer this. you elegantly said what i thought "but what can i do with it?"


Haskell's purity, and now linear types (since GHC 9.0), make it well suited to the consequences of quantum effects for computation - for example, the no-cloning principle. Thus, languages like Quipper[1] are better positioned for such modelling.

[1] https://www.mathstat.dal.ca/~selinger/quipper/


I would think that linear algebra and matrix operations as a first class concept would be more useful than modelling the linearity of types.


I suggest an alternative first step to quantum computing: hands-on, by playing.

One is Quantum Odyssey (https://www.quarksinteractive.com/) - a game purely about quantum circuits and, to the best of my knowledge, the only easy introduction to quantum computing that passes Scott Aaronson's Minus Sign Test.

The other (full disclosure: I am a co-founder) is the Quantum Lab (https://quantumflytrap.com/) - a real-time, install-free, LEGO-like simulation of an optical table with a few entangled photons. Written in Rust + WASM, and it goes well beyond "just qubits".


Are these the same people as https://quantum.country? The styling on the site looks similar.


The list of authors doesn't include Michael Nielsen or Andy Matuschak so I guess not.

(Aside: I learned so much from Quantum Country!)


Man! I just got this book for Christmas, but I guess I didn't need to...

https://www.amazon.com/Quantum-Computation-Information-10th-...


Nielsen and Chuang, while getting quite dated now [1], is still the best introduction to the conceptual framework of quantum computation and information. Its algorithms section, though, is not very detailed, and I found it difficult to teach my undergrad students from it.

The qiskit book gives a nice introduction to the algorithms, and contains all the latest algorithms, for someone wanting to learn the field.

[1] It doesn't include a lot of the advances made in the past twenty years.


I can vouch for that book too. About 3 years ago I used it as the main textbook for a QC course and came back to it multiple times during my master's thesis. For me it was just concise enough to stay focused without getting lost every other sentence (unlike 90% of the math books I've worked through). However, the parent is right that it is a little outdated when it comes to physical realizations and perhaps some algorithmic findings.

Anyway it was an invaluable resource for me and I would love a new revision from both authors.



Is there a follow-up book to it? Like one that you'd recommend, not that they wrote.


Keep it. You will need it in a parallel universe.


I don't really need it in this universe - I've realized that I'm a loser and wouldn't benefit from it.


you should be some sort of survivalist oracle AI. seems useful.


Shor just used this book last semester in MIT’s quantum computing class.



Maybe it is just me - the material looks really good, but:

Minutes into it, the notebooks on the page stop working, mainly due to trivial errors like missing imports. It is not much use having a 'run' button if you can't run anything.

The setup instructions are incomplete, so you are not going to be running it locally.

The introduction to linear algebra assumes you already know abstract algebra.


What's the ROI on learning this?



