Guide to the Many Meanings of Quantum Mechanics (nautil.us)
77 points by dnetesn on Sept 4, 2020 | 54 comments



"The Many-Worlds Interpretation has it that each time we make a measurement, reality splits into several alternative versions, identical except for the measurement outcome."

That's not right. The Many-Worlds Interpretation holds just that the wavefunction in the Schrödinger equation (or its generalizations) is real, and it rejects the idea that there's a separate process that somehow collapses the wavefunction to the single "branch" we perceive ourselves to be in. At the metaphysical level, there's no splitting involved, and no ontological measurements either (much less a "we" to do the measuring).


A first-rate presentation on this topic is 'The Emergent Multiverse' by David Wallace. There's a detailed review at http://philsci-archive.pitt.edu/10940/1/Wallace_review_final...

In the book, Wallace quotes an exchange between Paul Davies and David Deutsch. <Davies> "So the parallel universes are cheap on assumptions but expensive on universes?" <Deutsch> "Exactly right. In physics we always try to make things cheap on assumptions." Wallace also re-quotes David Lewis's 'I do not know how to refute an incredulous stare'.


Multiverse and many-worlds are two different things.


No offense intended, but the "lost contact with empirical reality" critique really lands for the above description of MWI. A mathematical construct (the wavefunction) is taken as more real than basic empirical experiences like "measurements" and "we."


One of the major aims of physics is to try to describe reality with mathematical constructs.

If you start with what can be directly observed or experienced, then that's phenomenology.

Those are both valid and interesting ways of trying to understand reality, and I believe they are ultimately fully compatible with each other. But I think you are trying to hold physics to phenomenology standards in a way that doesn't make sense. No one thinks that Newton's theory of gravity "lost contact with empirical reality" because it doesn't have measurements or people as ontological components.

Anyway, that's all beside the point I was trying to make, which was just that the description of the MWI in the article was plain wrong.


But Newtonian physics is compatible with our experience of reality. The Many Worlds Interpretation as described above isn’t.


Not necessarily. When you live inside the wavefunction, the outcomes of certain experiments very much look like wavefunction collapse. MWI is a way to explain why that happens without needing a special case in which the universe behaves completely differently for just a moment in time.

You can formalise the mathematics of it using the concept of quantum decoherence.
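
A toy sketch of what that looks like (a deliberately minimal model, not the full decoherence formalism): a qubit in superposition entangles with environment qubits that record its branch, and tracing out the environment kills the interference terms between branches.

    import numpy as np

    # Toy decoherence: system qubit (|0>+|1>)/sqrt(2) entangles with n_env
    # environment qubits that each record the branch, giving the GHZ-like
    # state (|0>|00..0> + |1>|11..1>)/sqrt(2). Tracing out the environment
    # leaves rho = 0.5*(|0><0| + |1><1| + <e1|e0>|0><1| + <e0|e1>|1><0|).
    def reduced_density_matrix(n_env):
        dim = 2 ** n_env
        e0 = np.zeros(dim); e0[0] = 1.0    # environment state |00..0>
        e1 = np.zeros(dim); e1[-1] = 1.0   # environment state |11..1>
        overlap = e0 @ e1                  # <e0|e1>: 1 if n_env = 0, else 0
        return 0.5 * np.array([[1.0, overlap], [overlap, 1.0]])

    print(reduced_density_matrix(0))  # off-diagonals 0.5: coherent superposition
    print(reduced_density_matrix(3))  # off-diagonals 0: looks like a classical mixture

The off-diagonal terms are exactly the interference between branches; once they're effectively zero, the branches evolve independently, which is the apparent "collapse".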


How is MWI incompatible with our experience of reality?


We experience one reality, not many realities.

MWI handwaves that problem away without really explaining it. "It's random but subject to the Born Rule" is a description of what's already observed, not an explanation with predictive power for new and distinct observations.

There are much more complex criticisms that use words like "ontic" and "epistemic", but that's the fundamental problem that MWI claims to solve but doesn't.


> We experience one reality, not many realities.

How does MWI imply that we should simultaneously experience multiple realities? If two different states of you experience two different realities, each state of you is only aware of one reality.

MWI largely reduces to something like the Copenhagen interpretation for large systems. It's just that MWI explains the transition between the quantum and classical regime, doesn't require any ad hoc rules about observers, and doesn't need Schrödinger's equation to be violated.


The claim wasn't that we should experience inaccessible worlds for MWI to be compatible with our experience of reality. That's obviously absurd.

The claim was that inaccessible worlds are inherently incompatible with our ability to empirically investigate or falsify them (via experience).

MWI has a metaphysical parsimony to it. But to believe it's physics is religious faith, not physical science; and that's fine, but it's still not science.

I don't think collapse is real either. I think it's a phenomenological byproduct of consciousness requiring particularity to model the world it perceives. You've already alluded to that being the case ('we can only experience our worldline').

If you knew the metaphysics behind that particularity well enough, you'd also know that it leaves no ground for and has no need of the existence of a physical material reality to begin with. As such, materialist physics has already made an epistemic leap which inevitably leads to mistaken intuitions about the most reasonable ways to interpret empirical phenomena.


> The claim was that inaccessible worlds are inherently incompatible with our ability to empirically investigate or falsify them

That's not what the claim was. The original claim was that MWI is "incompatible with our experience of reality." TheOtherHobbes then said that the incompatibility is that we experience only one reality, instead of many realities.

> MWI has a metaphysical parsimony to it. But to believe it's physics is religious faith not physical science- and that's fine, but it's still not science.

It's not religion to point out that QM explains the phenomenon of wavefunction collapse without any additional postulates (through decoherence). That's all that MWI says.


> We experience one reality, not many realities.

Isn't that just a claim about the sensitivity of the detector?

I'm sure folks were dubious about the theory of the electromagnetic spectrum given their bias toward visible light, but as experiments improved, and detectors with them, it doesn't seem so farcical.


It's not, though. Instantaneous gravity, the third law, etc., are not true in GR.


From a physics point of view this is a very strange thing to say. Why are 'we' or 'measurements' things of special status? Because we have a soul or something? Now, that would be a theory where an abstract concept is taken to be more real than observable things. The measurement apparatus is also a physical thing that should be described by the physical theory in use, as would the humans be.

The very painful point about the Copenhagen interpretation is this distinction between 'normal' time evolution and the special procedure when a 'measurement' is carried out. Actually, there exist mathematical proofs that the normal time evolution, when applied in cases where information is transferred from microscopic states to macroscopic ones, leads to something that looks like a collapse of the wave function but is actually a split of it. As a physical theory this is much more attractive than giving a 'measurement' special status.

The question of what this all means is a bit more mind-boggling, though, including multiple worlds and that kind of stuff. Taking 'a mathematical construct as more real than basic empirical experiences' is basically the history of physics, and it has in the past been highly successful.


> Why are 'we' or 'measurements' things of special status?

Because we experience the world through our senses. Everything else is one mathematical model or another that we’ve created. And our models aren’t even consistent!

> The very painful point about the Copenhagen interpretation is this distinction between 'normal' time evolution and the special procedure when a 'measurement' is carried out

This is not a special procedure. Measurement occurs whenever physical interactions take place. To measure a particle, we bounce another particle off of it and then try to detect the result. The measurement is the particle collision, not the detection. It’s like playing billiards in the dark. We don’t know where the balls are.

> Taking 'a mathematical construct as more real than basic empirical experiences' is basically the history of physics and it has in the past been highly successful.

Except for all of the times when it broke down, when one model was found to contradict our experiments and we had to replace it with another, which later turned out to be wrong as well. Perhaps the most embarrassing example of this in human history is all of our attempts to make geocentric models work [1].

The most well-known critique of science's institutional habit of inventing new models whenever old ones broke down is probably Kuhn's paradigms [2]. If you're interested, you're better off reading Kuhn than anything I have to write here. I think the best evidence for Kuhn's thesis is the abject disappointment we witness every time particle physicists fail to overturn the standard model. If that's not supremacy of measurement, then I don't know what is.

[1] https://en.wikipedia.org/wiki/Geocentric_model

[2] https://en.wikipedia.org/wiki/The_Structure_of_Scientific_Re...


You say "The measurement is the particle collision, not the detection.", but this is not right.

When two particles bounce off each other, there is no collapse according to traditional Copenhagen; instead the wave function just evolves according to the Schrödinger equation (SE). Even worse, when that particle (let's say a photon) then travels to the measurement device to interact with the particles that make up that machine, the evolution is similarly governed by the SE. Somehow, though, at some point nature decides that a measurement has taken place and collapses the wave function. What dictates where that happens? Honestly, this way of thinking about it makes no sense to me. The MWI is, to my mind, the simplest explanation for all of this mayhem.


I do not believe they are arguing for a soul but rather just pragmatism. It's not that “measurement” has some special status as coming from a soul or something: indeed your unwillingness to see it as a normal thing that happens, such that you immediately jump to this question, indicates that you have already “drunk the Kool-Aid.”

This is more clear in Everett's original formulation. Everett didn't speak of “many worlds” but of a difference between relative and absolute truth. You measure a spin-½ particle along some axis, it is only “relatively” true that you saw what you saw, say ↑, and there is also a relative truth that you saw ↓. But because these relative truths are exhaustive there is also an absolute truth that you have the deluded belief that you saw either one or the other; that truth holds in both relative truths.

The “lost contact” objection is precisely that in Everett's theory this belief is ultimately delusional (there is nothing like collapse to which they correspond, it is just formally incorrect to confuse your relative truths with the absolute truths) and we are led by the theory to delusions like this “with high probability” (scare quotes because Everett realized more so than his successors that eliminating collapse also untethers the theory from probability in a deep way).

It is just accepted that these deeply practical things like “my existence as an observer” and “my tool’s measurements” are ultimately based on a sort of illusion which has no correspondence to the true reality; this is buried in a mathematical technicality in his thesis (he notes that he is not looking for an “isomorphism” between experience and the external world, but a “homomorphism”), but it is kind of the crux of the whole enterprise. “It’s fine if I predict things which are not observed, so long as I also predict the things that are observed and I predict that you will be very opinionated about not observing the things predicted but not observed,” if you will, is the homomorphic approach to QM that Everett advocates.

In that respect there is something very different from “the history of physics.” Physics does in some cases say that certain things are illusions. And those statements track very closely with MWI. For example a rainbow appears to be a thing out in the world, but modern physics is very happy to say that the phenomenon cannot be correctly located as an object inside the cloud in which it is perceived, because it turns out the cloud is “rainbowing” in different ways in many different directions and you are only getting part of the story. You see part of the light suggesting a reconstruction of a physical object, but if we look at all of the light we realize that the different reconstructions are not reconcilable because they all have to occur at a 41° angle from the rays of the Sun and real objects have varying angles.

You can see a lot of similar ground trod here. Both claims of illusion appeal to the act of trying to reconstruct the external world from observed experience. Both then appeal to looking at all observations taken together. But there is a difference in scope. In the rainbow case there is a counterfactual, a “what you would have seen,” that can be confirmed with a camera at some other location. We can do a parallax measurement to reconstruct that the rainbow is actually around as far away from you as the Earth is from the Sun or so. But in MWI these sorts of experiments which may be possible are ultimately forever infeasible, I need to have quantum control of every atom of my measurement apparatus if I want to make some similar observation, and even then any particular observation will be consistent with a classical ontology, it's just the pattern of many observations which will suggest non-local correlations which I can interpret as evidence for maybe everything being illusory. One rocks your boat gently, the other is sailing in a hurricane.


I think there is some misunderstanding here based on the wording used. The OP's point is simply that there is no separate, explicit branching process. Instead, there is only the normal, continuous evolution of the wavefunction.

Our experience is recovered from this by positing that subjective/phenomenological experience is somehow tied to the individual components of the wavefunction. Since the individual components don't interact with each other, it gives the appearance of branching. This is compatible with our observations.


Furthermore, it firmly places ourselves and our own experiences within the very same fabric of reality it's describing. This is a rather inevitable problem for a fundamental theory of reality, since we ourselves are part of it and our inner workings (whatever they are) must be layered on top of that fundamental machinery.


So we get from "the wave function somehow collapses" to "the phenomenological experience is somehow tied to the individual components of the wave function", which are somehow connected to the measurements.


You're right regarding the first part: we don't have a clear explanation of how the wave function might be related to subjective experience, but that's mostly because we don't have a good account of subjective experience at all.

For the second part, there's nothing mysterious going on at all: measurement is simply physical interaction.


What are "the individual components of the wave function"? They are a mathematical construction. The wave function can be decomposed in many (and infinity of) ways. Why is one decomposition chosen and not another? Or are all the decompositions equally "(un)real"? The issue is not so simple and it's related to the "mysteriousness" of the first part.


How do we know that opponents of MWI aren't succumbing to antibrunoist prejudice?

> [Giordano Bruno] is known for his cosmological theories, which conceptually extended the then-novel Copernican model. He proposed that the stars were distant suns surrounded by their own planets, and he raised the possibility that these planets might foster life of their own, a philosophical position known as cosmic pluralism.

I need to point out that he was burned at the stake for advocating cosmic pluralism.


Later observations proved Bruno's speculations to be correct.

But MWI does not make different predictions from other quantum interpretations. No observations can confirm or falsify it.


Modern cosmology by definition doesn’t make different predictions than saying “cosmological models mention things like stars and galaxies, and those models fit our observations, but those stars and galaxies aren’t technically real.”


Maybe in the greater context of our universe cosmic pluralism is a design pattern, which would make MWI a more preferred interpretation.


> I need to point out that he was burned at the stake for advocating cosmic pluralism

I need to point out that he wasn't. His heresy trial had nothing to do with his cosmic beliefs. He was convicted for teaching religious beliefs, as a Catholic, that were contrary to Catholic dogma. The church punishment for this was excommunication but the secular punishment was execution, so he got burned.


> A mathematical construct (the wavefunction) is taken as more real than basic empirical experiences like "measurements" and "we."

In chemistry, atoms are also taken as more real than "measurements" and "we". Some chemists would probably insist that "we" are actually composed of atoms.


The mathematical construct was developed/discovered to explain observations. Normally when we do that we consider the mathematical explanation to be “real” in that it is describing reality.


I mean, it seems strange to reject implications of a successful model just because those implications aren't observable. E.g., I believe that stuff exists outside our light cone, because it would be unparsimonious for there to be an extra rule saying that stuff I can't access doesn't exist.


The MWI is literally obtained by taking traditional Copenhagen QM and removing the measurement postulate. In other words, by the most genuine way of measuring the complexity of a theory, it is demonstrably simpler than Copenhagen! We just don't like it because it doesn't jibe with our experience... it feels weird.

When you have two theories that make the same predictions, but one is strictly simpler than the other, Occam's razor tells us to prefer the simpler one.

In my mind, MWI, or something akin to it, is the way to go, and is generally the way I conceptually think about QM.


I agree MWI is the minimal "version" (don't get hung up on that word) of quantum mechanics. I take it as a given and consider the way we experience/interpret the universe as an insight into how the brain works.


Totally agree


I don't think you get to claim simplicity for the theory that results in an exponential explosion of entities. Simplicity isn't just about descriptive simplicity.


That's not the right way to frame MWI: there's no explosion of entities at all, just a single wave function that evolves. The wave function starts out as a function of n variables, evolves according to a differential equation, and remains a function of n variables.
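
To see that concretely, here's a minimal numerical sketch (the random Hamiltonian is picked purely for illustration): the state stays a vector of fixed dimension and unit norm however long you evolve it.

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(0)
    n = 8                                    # Hilbert-space dimension, fixed forever
    H = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    H = (H + H.conj().T) / 2                 # random Hermitian "Hamiltonian"
    U = expm(-1j * H * 0.1)                  # one Schrödinger time step

    psi = np.zeros(n, dtype=complex); psi[0] = 1.0
    for _ in range(1000):
        psi = U @ psi                        # still n complex numbers each step

    print(psi.shape, round(np.linalg.norm(psi), 6))  # (8,) 1.0 -- no explosion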


It is strange given my physics training to read this discussion.

Physicists do not directly apply Occam's razor in most circumstances, and we certainly don't do bookkeeping on how many “entities” there are, and your comment illustrates precisely why: how you count is not a given.

Here is something that did happen in classical mechanics: we transitioned from F_i = m a_i to Lagrangians even though they have roughly similar explanatory power. Here is an argument that was not made: “Lagrangians are truer because you don't have to postulate three equations of conservation of momentum and one of conservation of energy, you just have one law of least action.” Nobody even declared a confident end to the tyranny of Newton's third law as Lagrangians no longer need it.

Furthermore nobody said that classical field theory was “better” per Occam's razor merely because you were no longer bound by the tyranny of the least action principle and could now consider essentially a world in which F_i = m a_i was not universally true, to be replaced with a philosophical interpretation by some bloke Neverett who declares the fields on-shell “typical” and derives the least-action principle as a statement that “if you find yourself in a typical universe then almost surely your retroactive reconstruction of events satisfies the least action principle.”

No, the many worlds interpretation is thriving precisely because it calls physicists' attention to the importance of decoherence calculations in the understanding of various physical phenomena. It gives you an idea for how to model measurements that are somehow partial, or being continuously performed. Occam doesn't enter into the discussion in the first place.


You're right that you can't just count the number of postulates naively because there is generally not a well defined way to do so. A great example is the one you give: three laws of motion vs one law of least action. However, if I told you that it was possible to reproduce all of mechanics with only the first two of Newton's laws, then surely you'd agree that there wouldn't be a need for the third law and in that sense the new system of postulates would be simpler.

In other words, because MWI is obtained by removing a postulate from the usual formulation of QM, I think it's fair to say it's simpler. If, instead, MWI had been obtained by formulating all of QM in some other distinct framework with no mention of wave functions, measurements, the Schrödinger equation, etc., and it had one fewer postulate, then yes, I would agree that you can't arbitrarily say that it's simpler.


Put another way, suppose you have some linear dynamical differential equation in n variables that you solve somehow. Then, take that solution and expand it in some set of basis functions (e.g. a Fourier series). You wouldn't throw your hands up in the air and say "wow that's so complex, look at all those infinite terms in the solution!". The complexity isn't really there; it just appears to be there because you've chosen to expand your solution in a basis that makes it appear really complex. Similarly, in the MWI we see something that looks complex simply because we've chosen to expand the solution in a set of states that makes sense to us (state1 = particle at location 1, state2 = particle at location 2, ...).
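
A quick illustration of the basis-choice point (a toy example, nothing quantum-specific): one localized lump is a "simple" object in position space, yet it decomposes into a large number of non-negligible plane-wave components.

    import numpy as np

    # One simple localized lump...
    x = np.linspace(-np.pi, np.pi, 1024)
    packet = np.exp(-500 * x**2)

    # ...expanded in a plane-wave (Fourier) basis looks like "many" terms.
    coeffs = np.fft.fft(packet)
    significant = np.sum(np.abs(coeffs) > 1e-3 * np.abs(coeffs).max())
    print(significant)  # hundreds of terms, but no extra information anywhere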


With the Fourier example, there is a constant amount of information in the system, and so the apparent complexity in the Fourier basis representation is an illusion.

Is that the case with MWI? Is there a constant amount of information at time t and t+1? Note that I see a fundamental equivalence between information and entropy (of the computational sort), and so an exponential growth of computation required to get from t to t+1 is an inescapable theoretical burden.

To put it a different way, MWI seems to reify possibility. But the state of possibility grows exponentially in time, and so the theoretical entities grow exponentially.


Yes, there is a constant amount of information in the system. In fact, that's part of the beauty of MWI in contrast with Copenhagen. In MWI, the state at any point in time can be used to reconstruct the state at any other time. However, because of the collapse, that's not the case for Copenhagen. In other words, measurement in Copenhagen actually destroys information. As far as computational complexity goes, the same happens in classical mechanics. Start with 10^23 particles far apart but moving toward each other. Then simulating the first second is simple, but once they get close together, it gets hard with the computational complexity growing as time progresses (or alternatively the error growing for fixed computational resources).
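
A minimal sketch of that contrast, with collapse modeled as a textbook projector: unitary evolution can always be run backwards, while a projector is singular and can't be inverted.

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(1)
    H = rng.normal(size=(4, 4)); H = (H + H.T) / 2   # Hermitian Hamiltonian
    U = expm(-1j * H)                                 # unitary time evolution

    psi = np.array([0.6, 0.8, 0.0, 0.0], dtype=complex)

    # MWI-style evolution is invertible: apply U, then its adjoint.
    print(np.allclose(U.conj().T @ (U @ psi), psi))   # True -- state recovered

    # A projective "collapse" is not invertible: the projector is singular,
    # so the pre-measurement state can't be reconstructed from the outcome.
    P = np.diag([1.0, 0.0, 0.0, 0.0]).astype(complex)
    print(np.linalg.matrix_rank(P))                   # 1 -- information destroyed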


I still don't follow. There is a constant amount of information as input into the system, but (from my understanding) the "bookkeeping" costs grow exponentially with time. This is different from the classical case, where the complexity is linear with respect to time. A quick Google search says that simulating quantum mechanics is NP-hard, which backs up this take. This bookkeeping is an implicit theoretical posit of a QM formalism. We can think of different ways to cash out this bookkeeping as different flavors of MWI, but we shouldn't hide this cost behind the nice formalism.

Comparing MWI to collapse interpretations, collapse fares better on this bookkeeping, since it represents an upper limit to the amount of quantum bookkeeping required. MWI has an exponentially growing, unbounded bookkeeping cost.


Yes, that's right, but that has to do with entanglement in QM and is not specific to MWI. In classical mechanics, a system of n particles is specified by 3n different functions of time - the three coordinates for each of the n particles. The complexity in terms of e.g. memory then scales linearly with the number of particles.

In QM, by contrast, we have entanglement, which essentially means that we can't describe one particle separately from all the other particles (if we could, then QM would be just as "easy" to solve as classical mechanics). Instead of 3n functions of time, we have a single function of 3n variables (plus time). The complexity of these functions does not scale linearly with n (imagine, e.g., a Fourier series in one variable vs. one for two variables).

So, you're right that QM is an exponentially harder problem to solve compared to classical mechanics, but this is because of entanglement and has nothing to do with Copenhagen vs MWI.
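
Rough numbers for the scaling difference (back-of-the-envelope, assuming three float64 coordinates per classical particle and one complex128 amplitude per basis state):

    # Classical n-body state: 3n coordinates, linear growth.
    # Entangled n-qubit state: 2**n amplitudes, exponential growth.
    for n in [10, 20, 30, 40]:
        classical_bytes = 3 * n * 8           # three float64 per particle
        quantum_bytes = (2 ** n) * 16         # one complex128 per amplitude
        print(f"n={n}: classical {classical_bytes} B, quantum {quantum_bytes / 1e9:.3g} GB")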


We don't like it because it can make no actual predictions about the physical world if it doesn't include measurements. And when you include them it's no longer that simple.


You can make predictions if you assume that you are in a (weighted) randomly selected world.

Well, rather than a single world, I think that the perceived identity exists in multiple highly similar and interacting worlds, seen as one entity. Just as we have a size in physical space, we also have a non-zero "size" in probability "space".
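
To make the weighted-selection idea concrete (a minimal sketch; the branch amplitudes here are arbitrary):

    import numpy as np

    # Branch amplitudes of some normalized superposition...
    amplitudes = np.array([0.6, 0.8j])
    weights = np.abs(amplitudes) ** 2          # Born weights: [0.36, 0.64]

    # ...and "which world you find yourself in", sampled by those weights.
    rng = np.random.default_rng()
    worlds = rng.choice(len(amplitudes), size=100_000, p=weights)
    print(np.bincount(worlds) / len(worlds))   # ~[0.36, 0.64]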


Uh... didn't early proponents of the many-worlds interpretation actually say that it did? Say, Hugh Everett? If it is reinterpreted to say otherwise, is it really the same QM model anymore?


This recent video has a very watchable overview of several interpretations: https://www.youtube.com/watch?v=XQ25E9gu4qI


Sabine Hossenfelder's YouTube channel has nice videos about quantum mechanics and its interpretations:

https://www.youtube.com/channel/UC1yNl2E66ZzKApQdRuTQ4tw

https://en.wikipedia.org/wiki/Sabine_Hossenfelder

Some quantum material also on Science Asylum:

https://www.youtube.com/c/Scienceasylum/videos


> I am also personally offended that Baggott gives short shrift to superdeterminism. In this approach, quantum mechanics is emergent from a deterministic hidden-variables model which acknowledges that everything in the universe is connected with everything else. He either mistakenly or accidentally leaves the reader with the impression that these have been ruled out for good, which is most definitely not the case. I cannot really blame Baggott for this, though, because this omission is widespread in the scientific literature.

Please correct me if I'm wrong, but I thought that hidden-variables theory was considered in scientific literature, and was experimentally disproved.


Hidden variables theory was not disproved. Local hidden variables theory was disproved, but, actually, all local theories of any kind whatsoever that say that experiments have definite results have been disproved.

EPR is the argument that showed any local theory agreeing with the results of quantum experiments must have hidden variables. Bell showed that any hidden variable theory must be non-local. Conclusion: theory is non-local.
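
For concreteness, here's a minimal sketch of the quantum side of that argument: the CHSH combination of singlet-state correlations, at the standard textbook angles, reaches 2√2, beyond the bound of 2 that any local hidden-variable account must obey.

    import numpy as np

    X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli matrices
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    def spin(theta):
        # Spin measurement along an axis at angle theta in the x-z plane
        return np.cos(theta) * Z + np.sin(theta) * X

    psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)  # singlet state

    def E(a, b):
        # Correlation <psi| A(a) (x) B(b) |psi> between the two wings
        return np.real(psi.conj() @ np.kron(spin(a), spin(b)) @ psi)

    a, ap = 0.0, np.pi / 2                       # Alice's two settings
    b, bp = np.pi / 4, 3 * np.pi / 4             # Bob's two settings
    S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
    print(abs(S))  # 2.828... = 2*sqrt(2) > 2, the local-hidden-variable bound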

Many worlds escapes this by not having definite results of experiments or anything else. Since anything that could happen does happen, there is no need for non-local communication to achieve the (non)results.

Bohmian mechanics is a mathematically consistent theory (uniqueness and existence have been proven for a wide range of Bohmian systems), and it gives rise to an explanation of the quantum formalism. One can choose to dismiss it because of prejudice, but there is no mathematical or physical reason to do so. Its biggest flaw, being nonlocal and thus philosophically (not actually) incompatible with relativity, is what Bell established as having to be the case for any theory with definite results.

The story with quantum field theory is more questionable, but there is a perfectly fine setup with creation and annihilation of particles, thereby being compatible in spirit with much of QFT. The biggest problem is that, technically, QFT has problems with having a well-defined wave function. But there have been Bohmian-inspired methods of solving that problem, basically using boundary conditions that respect the preservation of probabilities under annihilation and creation.


> Please correct me if I'm wrong, but I thought that hidden-variables theory was considered in scientific literature, and was experimentally disproved.

'Disproved' is a bit strong. Bell's theorem is the most-used tool for attempting to rule out hidden-variable solutions. While it's true that experiments thus far support Bell's theorem, they all contain loopholes [0]. It's not even clear whether an entirely loophole-free test can exist; so far no one has been able to come up with one.

I have personally come across two camps of people on this: those that ignore or discount the loopholes and thus claim that hidden variables have been disproven, and those that consider the loopholes important, or think that the loopholes seemingly being unavoidable is a clue in itself.

[0] https://en.wikipedia.org/wiki/Loopholes_in_Bell_test_experim...


I suspect that trying to explain quantum computers from a global-hidden-variables perspective will be very difficult and ugly.


She goes into detail about this in the article that is linked to later in the paragraph you quote.


For anyone interested in QM foundations, last year the IQOQI Vienna had a very interesting lecture series on Scientific Realism that brought together many living philosophers and scientists to discuss the matter. Very interesting exposition + lots of different ideas presented: https://www.youtube.com/playlist?list=PLtIs3eEC6pzL1v_haWfzn...

Warning: each lecture is ~2.5h long and informationally dense, so it can't be played at 2x speed.





