It seems like some of the interpretations of quantum mechanics can be falsified with this method. Very cool! In fact, it looks like it would be possible to probe my favorite solution to the measurement problem:
https://en.wikipedia.org/wiki/Objective-collapse_theory
>The third says that outcomes of measurements are absolute, objective facts for all observers.
The key part of the argument, which is (I feel) a bit obscured to make it sound more shocking, is the fact that "observers" here includes "observers who are themselves in a superposition state".
Edit: the closing paragraph of the article is worse:
>In which case, taking the position that an observation is subjective and valid only for a given observer — and that there’s no “view from nowhere” of the type provided by classical physics — may be a necessary and radical first step.
I think that if you remember that they only proved this for "a given observer" that is itself in a superposition, it is a lot less "radical".
Indeed, it's remarkable they want to make a fuss out of something that I can undo entirely via a unitary transformation. There have been a number of papers with these kinds of "theorems" that seem to miss that proponents of the many-worlds interpretation would have explained Wigner's friend this way in the first place, since they eliminate all non-unitary evolution a priori.
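To make that concrete, here's a minimal numpy sketch (my own toy model, not anything from the article): the friend's "measurement" is modeled as a CNOT entangling them with the system, and since a CNOT is its own inverse, the measurement can be undone completely.

```python
import numpy as np

# Toy model (my own): a "measurement" modeled as a unitary interaction
# can be undone, because every unitary has an inverse.

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)          # system in superposition

# Friend starts in |0>; "measuring" = CNOT entangling friend with system.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

before = np.kron(plus, ket0)               # system (x) friend
entangled = CNOT @ before                  # friend now carries a record: the "measurement"
undone = CNOT @ entangled                  # CNOT is its own inverse: measurement erased

print(np.round(entangled, 3))              # [0.707 0. 0. 0.707]: a superposed observer
print(np.allclose(undone, before))         # True: the record is gone
```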
> Outcomes of measurements are absolute, objective facts for all observers.
Seems easy to reject; it's just an extension of what Einstein's Relativity already showed (time is not objective), which is itself an extension of what Newton already formalized (velocity is not objective).
Why are physicists reluctant to give up the idea of experimenter freedom, as the article says? Is there anything more to it than not wanting to deny free will?
It's extremely unlikely that the universe compels scientists to make certain measurements at certain times in a conspiracy to make quantum physics look real.
Why? All physics theories we have are deterministic.* So if we take that to the limit, then we are also deterministic. To me, there is nothing conspiratorial about that.
*Yes, quantum mechanics is kind of debatable. But if I understand it correctly, the actual theory is deterministic. It's just the interpretation of the measurements that can lead to some debate. However, they can also be viewed in a deterministic framework, as the link you cite shows.
No, it isn't. Quantum randomness is of a fundamentally different character than chaos. Bell's theorem (and concomitant experiments) shows that it is not possible for the results of quantum experiments to depend on information that exists in our universe before the experiment was actually conducted. So if the results of quantum experiments are deterministic, the information that determines their outcomes can only exist outside of our universe.
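For anyone who wants to see this concretely, here's a small numpy sketch (my own, assuming the standard CHSH setup with a singlet state and the usual optimal angles): the quantum correlations reach 2*sqrt(2), past the bound of 2 that any locally pre-existing information would allow.

```python
import numpy as np

# Compute the CHSH correlator S for the singlet state. Local hidden
# variables force |S| <= 2; quantum mechanics predicts 2*sqrt(2).

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(theta):
    # Spin measurement along an axis at angle theta in the x-z plane.
    return np.cos(theta) * Z + np.sin(theta) * X

singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)  # (|01> - |10>)/sqrt(2)

def E(a, b):
    # Correlation <A(a) (x) B(b)> in the singlet state.
    op = np.kron(spin(a), spin(b))
    return np.real(singlet.conj() @ op @ singlet)

a, ap, b, bp = 0, np.pi / 2, np.pi / 4, -np.pi / 4
S = E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)
print(abs(S))  # ~2.828 > 2: no locally pre-existing values can reproduce this
```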
Please read about superdeterminism. Bell's theorem makes three assumptions, one of which has no basis other than "I like it".
EDIT: Also, as Gerard 't Hooft points out[1], if you have a deterministic theory and you don't know the initial state, that is, you feed it probabilities, then you get probabilities on the other end. In this light there is nothing special about the quantum randomness. It's just that you don't know the initial state of the system.
It's not just that you don't know, it's that you cannot know. The initial state includes the state of the wave function, and you can't know that because of the no-cloning theorem.
My understanding was that Bell's theorem rules out hidden LOCAL variables, but doesn't address or rule out hidden GLOBAL variables. Bohmian mechanics haven't been disproven, right?
That's right, except that the opposite of "local" in this case is not "global". (This is physics, not software engineering.) The opposite of "local" in this case is "non-local", which refers to state that propagates faster than the speed of light. Bell's theorem shows that describing reality requires some kind of non-local state. In the case of both Copenhagen and Bohm, that non-local state resides in the wave function. But here's the thing: you cannot know the state of the wave function because of the no-cloning theorem. The best you can do is prepare sub-systems in known states. So QM really is different. You cannot predict the outcome of a quantum experiment even in principle and even with arbitrarily advanced technology.
What matters is the reason that it's not knowable. Some things are knowable in principle but not in practice because of technological or economic constraints, like whether or not there are life-supporting planets in the Andromeda galaxy. But in principle, if you could build a big enough telescope, you could know. Quantum experiments are not predictable even in principle. Even with arbitrarily advanced technology and unlimited resources, you can never know the outcome of a quantum experiment (assuming QM is correct, of course). That is an operational definition of the idea that the information required to predict a quantum experiment does not exist in our universe.
This is the reason I hedged with "assuming QM is correct, of course". You are recapitulating the EPR argument. The reason the Bell inequalities are a thing is that they refute the EPR argument. It is not possible for QM to be completed as a local hidden-variable theory. If it turns out that the information required to predict the result of a QM experiment actually exists, then QM is not merely incomplete, but actually wrong. That is possible, of course. But I'll give long odds against.
Heisenberg says that we cannot know both position and velocity with arbitrary precision.
This is inherent to any wave-based system. BUT, this only means we cannot know (as in, we are theoretically prevented from knowing) all of the variables to enough precision to accurately predict the outcome.
It doesn’t mean, however, that there aren’t any initial conditions even prior to measurement.
Nor does Bell's inequality negate this. Note also that non-local does not imply that causality is broken (you cannot transmit information FTL via decoherence).
In fact, one of the more interesting (and unexplored) possibilities is that the boolean-logic law of excluded middle is wrong.
This is because Bell’s derivation is pure arithmetic and logic. It’s the one bit of QM that any student can follow.
Lest this be handwaved away, know that there are entire branches of constructivist mathematics that do just this.
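To back up the "any student can follow" claim, here is a tiny brute-force check (my own sketch): enumerate every deterministic local assignment of outcomes and confirm the CHSH combination never exceeds 2. This really is just arithmetic over +/-1 values.

```python
from itertools import product

# If each side's outcomes are fixed +/-1 values independent of the other
# side's setting (local hidden variables), |S| can never exceed 2.

best = 0
for A0, A1, B0, B1 in product([+1, -1], repeat=4):
    S = A0 * B0 + A0 * B1 + A1 * B0 - A1 * B1
    best = max(best, abs(S))
print(best)  # 2, the classical CHSH bound; QM reaches 2*sqrt(2)
```

The algebra behind the loop: S = A0*(B0 + B1) + A1*(B0 - B1), and one of the two parentheses is always zero while the other is +/-2.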
> It doesn’t mean, however, that there aren’t any initial conditions even prior to measurement.
This is the EPR argument.
> Bell’s inequality doesn’t negate this.
Bell shows that there is no measurement you can make, even in principle, that will give you the information you need to predict the outcome of a quantum experiment.
You can, if you wish, insist that those initial conditions exist notwithstanding our inability to measure them even in principle. But you could equally well insist that the outcomes of quantum experiments are determined by an invisible pink unicorn. Both hypotheses are equally unfalsifiable (if QM is correct).
I have actually coined the term IPU (Invisible Pink Unicorn) as an intentionally derisive description of hypothetical constructs that cannot be measured even in principle. Many QM interpretations contain IPUs. Bohmian particle positions, for example, are an IPU.
One thing to look at is how the theory is formulated. Standard QM seems to build its Hilbert space of wave functions from functions over R^(3N). It has a Hamiltonian built out of the notions of that space. So configuration space seems pretty crucial. But configuration of what? If you say particles with positions do not exist, then what exactly is the relevance of this space? What is the primitive stuff whose behavior can be right or wrong from our perspective?
It is also odd to say that position cannot be measured. We can tell in an experiment whether something ended up over there or over here. It would be reasonable to then try to have a theory that correlates the position measurements with something that has a position. Now it is not necessarily the case that there has to be such a thing, but it seems like a reasonable first step.
We can even see trails of particles in cloud chambers and the like. Why is that an IPU?
I will grant that it does not have to be the case that the only possible explanation is that of particles with position. But it certainly seems like if there is such a theory (and, of course, there is), then it would seem reasonable to consider it as quite plausible.
It also helps to ask you what is real in your theory. Are wave functions real? They certainly can't be measured in their entirety. Are operators the real thing? We don't measure them, but rather get something close to their eigenvectors/eigenvalues. Are those real?
Many worlds is the closest version with nothing added, but even that requires some kind of mass density function to make explicit connection with our lived experience. While it doesn't add too much in the way of extra mathematical structure in the theory (integrate over the wave function in a certain way: https://arxiv.org/abs/0903.2211 ), the implication in terms of what it says reality is actually like certainly involves a heck of a lot of IPUs.
> If you say particles with positions do not exist, then what exactly is the relevance of this space?
That's a good question. The real answer is that no one actually knows. I think this is actually the biggest mystery in QM. But let me start with this, because I didn't make myself clear:
> It is also odd to say that position cannot be measured.
When I said that Bohmian positions are an IPU I did not intend that to mean that particle positions can't be measured. Obviously they can. The IPU-ness of Bohmian positions has to do with their ontological status, not their epistemic status. On Bohm's theory, a particle position considered along some axis is a real (in the mathematical sense) value, which is to say, it contains an infinite amount of information. But this information cannot be accessed in the same way that information stored in (say) a book can. I can open a book, even a book with an infinite number of pages, to any page and start reading it, and having read any page, I can go back and read that same page again. The information stored in Bohmian positions doesn't work that way. The laws of physics somehow conspire to hide all that information so that it can only be accessed serially and non-repeatably. The first time you measure a particle's position you get the most significant bits of its position. Those are then lost forever. You can never measure them again. The next time you measure a particle's position you get the next most significant bits of what that particle's position originally was, and so on. But you can never go back and do a second experiment to verify that the result you got for any of your measurements was actually correct and not a result of experimental error.
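Here's a toy model of that access pattern in a few lines of Python (entirely my own illustration, obviously not Bohm's actual dynamics): a hidden value whose bits can only be read serially, once each, with no way to go back and re-check an earlier readout.

```python
import random

# Toy model (mine): a hidden "position" whose binary expansion can only
# be read forward, one chunk at a time, destructively.

class HiddenPosition:
    def __init__(self, seed):
        # The PRNG stream stands in for the "infinite" binary expansion.
        self._rng = random.Random(seed)

    def measure(self, n_bits=8):
        # Hand out the next n_bits of the expansion; they are then gone forever.
        return ''.join(str(self._rng.getrandbits(1)) for _ in range(n_bits))

pos = HiddenPosition(seed=42)
print(pos.measure())   # the most significant bits, now destroyed
print(pos.measure())   # the NEXT bits; the first readout can never be verified
```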
So the much-vaunted determinacy of Bohmian mechanics is not a reflection of the determinacy of the underlying metaphysical reality. It is really nothing more than a rhetorical trick. All the randomness is still there, it's just "pre-computed" and stored in particle positions in a way that it can only be accessed so that the world behaves exactly as if it were "really random" (whatever that means).
This same kind of trick is made manifest in a thought experiment [https://www.mathpages.com/rr/s9-07/9-07.htm] proposed by Kevin Brown. He points out that, if pi is normal (which it almost certainly is), then all of the results of all experiments ever conducted could be produced by a "cosmic Turing machine" computing the digits of pi. (See the two paragraphs beginning with "Even worse, there need be no simple rule of any kind relating the events of a deterministic universe.") Bohmian positions have exactly the same ontological status as the cosmic Turing machine. Only the window-dressing is different.
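A tiny "cosmic Turing machine" in the spirit of Brown's thought experiment (again my own sketch, not Brown's): every "measurement outcome" is just the next digit of pi.

```python
def pi_digits():
    # Gibbons' unbounded spigot algorithm for the decimal digits of pi.
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

machine = pi_digits()
outcomes = ["up" if next(machine) % 2 else "down" for _ in range(10)]
print(outcomes)  # fully deterministic, yet (if pi is normal) statistically
                 # indistinguishable from fair coin flips
```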
> It also helps to ask you what is real in your theory. Are wave functions real?
I'm a day late to this immense comment chain, but my point is that Bell's Theorem specifically rules out local hidden variables unless scientists are forced to choose never to conduct an experiment that will expose those local hidden variables.
I'm not talking about determinism, I'm talking about Superdeterminism, which is akin to saying that coins will always land heads-up because people are compelled to avoid flipping coins in situations where it will land on tails. Please see my link.
"We are deterministic" is no big deal, it's what scientists thought for centuries going back to Newton. In classical mechanics, if you knew the exact state of every particle then you could predict everything.
But superdeterminism is nothing like that. It's determinism not based on past states, but based on future states, where the universe prevents anyone from making decisions that would cause them to learn a certain type of information. How would that work?
Superdeterminism just says that there is no statistical independence. Not sure where you are getting "the future determines the past" from that. However, yes, in a deterministic world there is only one way to get to a certain future, so I guess the future does indeed determine the past. Is that what you mean?
No, I mean that classical determinism says "here is the current set of particle positions/momentums/etc, so we can work forward and predict everything from that." Superdeterminism says "if you do X then you will learn Y, so you aren't going to do X."
That doesn't necessarily mean signals going back in time. It could mean that the initial condition of the universe was set, such that nobody would do X. But that's still a way of the past (initial condition) being determined by the future, even if the future was just predicted via determinism. It makes physics teleological.
A common criticism of superdeterminism is that it eliminates falsifiability from science. E.g. to quote physicist Nicolas Gisin, "If we did not have free will, we could never decide to test a scientific theory. We could live in a world where objects tend to fly up in the air but be programmed to look only when they are in the process of falling." https://en.wikiquote.org/wiki/Superdeterminism
So her criticism #3 of essentially my second paragraph is that you can't assume anything, because we don't actually have a superdeterministic theory to evaluate. Maybe there's a dynamic law we haven't figured out that could make it work without initial fine tuning.
Fair enough I guess, but I'm not convinced it's a good answer to say "I think a theory with property X is the answer, and you can't criticize it because I have no actual theory with property X." I think it makes more sense to give little credence to superdeterminism until someone comes up with a plausible superdeterministic theory.
I add this because no one seems to have mentioned it yet. Without experimenter freedom it is hard to make any inferences from the results of experiment. If the experimental choices you make are correlated with the stuff you're experimenting on in hard to understand ways then the experimental data you gather is essentially meaningless.
To make a dumb example, you measure some system and always observe outcome A. You might conclude that A is a property of the system you measured, but it just so happens that your decision to do the experiment is correlated with the system, so it always has A when you measure it, but it has other properties at other times.
It seems to me that the issue is simpler than people are making it out to be.
"Consciousness"/"Observer" etc. are concepts too high-level to matter here.
I don't need to choose the experiment directly; I can make a complicated arrangement that then picks the experiment to be executed. (Say I generate a random number based on the current wind speed, and a mechanical apparatus throws a die and chooses the experiment based on that.) Try challenging that experimental freedom.
"Observ(er|vation)" is really one of the worst terms ever in this area. It really is just about the experimenter (plus apparatus) becoming entangled with the experiment -- or entering into a superposition with it, if you will. That's it. No spooky consciousness, etc. goes into it. One's consciousness simply enters into the superposition.
(If you can't tell, I fully subscribe to the Many Worlds interpretation. AFAICT it's, by quite a long way, the most parsimonious explanation in terms of extra "theoretical stuff", even if it is "wasteful" in terms of the amount of actual stuff it posits.)
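Here's a minimal numpy sketch of what I mean (my own toy model, with the observer's memory reduced to a single qubit): after the interaction the joint state is a superposition of two branches, each containing a definite observer record, and the branches are orthogonal, which is why neither "notices" the other.

```python
import numpy as np

# Toy model (mine) of "the observer enters the superposition".

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Joint state after the observer's memory becomes correlated with the system:
# (|0, saw 0> + |1, saw 1>) / sqrt(2)
branch0 = np.kron(ket0, ket0)            # system 0, memory "saw 0"
branch1 = np.kron(ket1, ket1)            # system 1, memory "saw 1"
state = (branch0 + branch1) / np.sqrt(2)

print(branch0 @ branch1)                 # 0.0: the branches are orthogonal
print(state @ branch0, state @ branch1)  # 0.707 each: equal-weight "worlds"
```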
Many worlds doesn't really explain more than other interpretations of standard QM. You still have the problem of why the Schrodinger equation shows that a particle has some amplitude both here and there, and can interact with other particles both here and there, yet when you experiment you only find it either here or there, never in both places.
MWI just posits that the measurement apparatus exists in a single world, while the particles exist in many worlds. But it can't explain this basic fact, like any other interpretation. What we need is an actual new theory that can actually tell us what a measurement is (at what precise point the Born rule must be applied, or how we can do without the Born rule).
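To spell out the gap being pointed at, here's a trivial sketch (mine, with made-up amplitudes): the dynamics hands you amplitudes at both places, and the Born rule |amplitude|^2 is bolted on by hand to turn them into "found it here XOR there" statistics.

```python
import numpy as np

# Made-up amplitudes for a particle "in both places" at once.
amp_here, amp_there = 1 / np.sqrt(3), np.sqrt(2 / 3) * 1j

probs = np.abs(np.array([amp_here, amp_there])) ** 2
print(probs)          # [0.333 0.667], but nothing in the dynamics says
                      # when (or why) this rule should be applied

rng = np.random.default_rng(0)
print(rng.choice(["here", "there"], p=probs))  # a single run finds it in one place only
```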
There are essentially 3 options for wave function collapse:
1. There are some hidden variables, which means it's deterministic and there are no amplitudes involved.
2. The universe has some sort of random number generator which determines the outcome.
3. They don't actually collapse, which is what the many worlds theory essentially posits.
> They don’t actually collapse, which is what the many worlds theory essentially posits.
That's not consistent with classical physics, and it is not what MWI actually posits. The universe branching that MWI posits is not really different from wave function collapse, and decoherence also doesn't really solve things. You still fundamentally have a single position for any classical system, but multiple positions with different amplitudes for quantum systems, and some mysterious threshold where you pass from one to the other.
Probability also can't really explain this, since the classical model also applies to single particles after they have interacted with a macroscopic system.
MWI considers that observers inside one branch can't interact or notice observers inside other branches, and this is why we perceive the world as if objects have unique definite positions. But this still doesn't hold up for quantum systems, which do in fact perceive and can interact with all of the other "worlds", including interacting with themselves in other worlds such as in the single particle double-slit experiment. So MWI doesn't really get away from the duality in any rigorous way.
By not collapse I mean no specific outcome is selected. The waveform simply gets directly translated as the distribution of universes. With effectively infinite universes the odds of someone being in an unusual one are directly proportional to how unusual it is.
You can’t experimentally differentiate between MWI and single universe wave function collapse.
This is correct, MWI is a no-added sugars interpretation. Just pure and simple what the QM equations say.
There is another interpretation that can also be considered as close as possible to no-added sugars, the "ensemble interpretation". In the "ensemble interpretation", the formulation of QM is understood to be applicable to ensembles of similarly prepared systems. In this interpretation QM is not really applicable to a single system, by construction.
In MWI you accept it can be applied to a single system but you have to pay as a price that the system can branch into many different ones simultaneously (the "many worlds").
Thank you for the info. The relevant Wikipedia article mentions Leslie Ballentine and his textbook ("Quantum Mechanics, A Modern Development") quite a lot. I happen to have studied a couple of the latter chapters in this book and now you made me want to open it again :)
The problem is the insistence that there are two distinct particles that have an independent existence and properties. There aren't. There's just a quantum field which, when it interacts with other fields in a measuring apparatus, produces classical results in accordance with QM. Trying to understand what's really going on in terms of distinct and separate particles is never going to work.
The emphasis of the question should be on “exactly one”. How does using fields instead of particles resolve the question of the collapse of the wave function?
This is just another take on Wigner's experiment. It has more to do with philosophy than physics.
TL;DR: most probably you can't be sure what another person saw when they made a quantum measurement, because that person is themselves a quantum system and can therefore be in a superposition of having measured A and having measured B.
Could it be that each quantum is an encoded message, or information that grows in a field called time? The quantum interacts with a universal machine, called the universe, that computes the result. So entanglement is only delayed computation, like in Haskell. So there is no problem understanding how the physics works; the universal machine is in a monad and we can only call return when the time is due. So I suggest considering spacetime as a delayed computation (more as a monad, or time-state encoded) for a universal machine called our universe.
What kind of computation machine is the universe?
Could it be that a black hole is a loop, a recursive call in this system? Could this be tested by forcing the machine to perform many delayed computations at once, like in a supernova? Could it be that gravity and time measure the speed of computation of a local node? Can we design an experiment to count the number of local clusters of this machine? Is the computational power of this machine an input for the machine, i.e., is it self-regulated?
Just nonsense.
Systems get entangled with each other by interacting. So entanglement cannot "propagate" faster than the fastest possible "mediator" of an interaction, which is the speed of light.
The "instantaneous" aspect comes up in entanglement and measurement. Say you start off with an entangled pair of electrons A and B in your lab, and measure electron A. Measurement is an interaction, so you become entangled with the electron. By knowing the initial entangled state and your reading of electron A's state, you can know stuff about the state of electron B at the time of the measurement, even if electron B has traveled a very long distance in the meantime.
I personally find this point of view nicer than assuming that I somehow collapsed electron A's state and caused an instantaneous remote collapse of B's state.
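Here's a short numpy sketch of that point (my own toy calculation, using the standard singlet state and projective measurements): B's reduced density matrix is exactly the same whether or not A has measured, which is why nothing travels and only the correlations, compared later over a classical channel, reveal the entanglement.

```python
import numpy as np

singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho = np.outer(singlet, singlet.conj())              # joint state of A (x) B

def reduced_B(rho):
    # Partial trace over A for a 2x2 (x) 2x2 system.
    return np.einsum('ijik->jk', rho.reshape(2, 2, 2, 2))

P0 = np.diag([1, 0]).astype(complex)                 # A measured, got "up"
P1 = np.diag([0, 1]).astype(complex)                 # A measured, got "down"
after = sum(np.kron(P, np.eye(2)) @ rho @ np.kron(P, np.eye(2)) for P in (P0, P1))

print(np.allclose(reduced_B(rho), reduced_B(after)))  # True: B sees no local change
print(np.round(reduced_B(rho).real, 3))               # 0.5 * identity either way
```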
Suppose you had two entangled particles light-years apart.
Observer A observes one entangled particle as spin up. At the same time, observer B observes particle B as spin down.
Observer A's observation of spin up means observer B must see spin down. However, the two observers are restricted to the speed of light when communicating this to each other.
It would initially seem like there is an instantaneous causal impact from observer A's observation. But that's not the case.
I tend to think it was predestined what each observer sees, but that leads to all sorts of philosophical questions.
I hope I did this topic justice, I find it quite interesting but have no background in it.
But isn't superdeterminism still possible? I thought that was implied by the article's first assumption, the one physicists are loath to abandon: that the experimenters have free will, so to speak.
None of the interpretations of quantum mechanics are. By design, they all predict exactly the same outcome for any conceivable experiment. The subject is entirely philosophical, not scientific.
They make predictions, which can be falsified. They even make different predictions: Scott Aaronson agrees that WF poses problems for the standard Copenhagen interpretation. Sean Carroll is on record somewhere saying that e.g. objective-collapse models predict an in-principle measurably different evolution of a system's entropy than many worlds.
I suppose I should have asked what predictions SD actually makes.
Copenhagen Interpretation and MWI don't make different predictions from each other, but there are other theories that do. For example, pilot-wave theory makes different predictions. There are also some superdeterministic theories that make new predictions (Gerard 't Hooft is one advocate of such theories, I believe). Unfortunately I don't know of any concrete examples. Here is some more information on the subject from Sabine Hossenfelder:
Global hidden variables are the predestined effect. The argument against global hidden variables is that it requires a godlike meddling in the whole universe to populate an infinite amount of arbitrary asymmetric details.
Why do I need to do that, to combat your unsupported assertion? Why don't you show some evidence instead that faster than light entanglement has been "accepted"?
The fact is, there are theories of QM that do not assume that entanglement happens faster than light. The Many Worlds theory is one that has no need for such a hypothesis. And more generally, since you need to send the results via a classical (not faster-than-light) communication method, there's no way to be sure that the entanglement has happened faster than light.
It seems quantum mechanics systematically refuses to properly model the "observer". As in: a mathematical quantum model of the large and complex experimental setup that tries to "measure" something, including the observer.
It's probably not done because it's too hard, but the day they do this and actually include it in any equations that try to describe a system, I suspect much clarity will ensue.
It is AFAIK a very standard approach to treat two interacting quantum systems (e.g. two electrons) as "measuring" each other and work with density matrices and partial traces to see how the "measurement" is evolving and what is each system "seeing". You might be interested in learning about the field of quantum decoherence.
If you do that (at least with the approaches I've seen) and follow "standard" quantum mechanics rules for the observer as well as the observed, then you end up deriving many-worlds / Everettian quantum mechanics.
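A minimal sketch of the first step of that derivation (my own toy model: a single qubit "measured" by a single-qubit meter via a CNOT): evolve system plus meter purely unitarily, then take the partial trace over the meter. The system's off-diagonal terms, its ability to interfere, vanish; this is the textbook decoherence result that points toward the Everettian picture.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)     # the "measurement" interaction

psi = CNOT @ np.kron(plus, ket0)                   # system (x) meter, now entangled
rho = np.outer(psi, psi.conj())

def reduced_system(rho):
    # Partial trace over the meter (the second subsystem).
    return np.einsum('ijkj->ik', rho.reshape(2, 2, 2, 2))

print(np.round(reduced_system(rho).real, 3))
# [[0.5 0. ]
#  [0.  0.5]]  coherences gone: each branch behaves like a definite outcome
```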