“Spookiness” Confirmed by the First Loophole-Free Quantum Test (fqxi.org)
96 points by flurpitude on Aug 27, 2015 | 44 comments



From TFA (or I guess that should be TFP):

"Strictly speaking, no Bell experiment can exclude the infinite number of conceivable local realist theories, because it is fundamentally impossible to prove when and where free random input bits and output values came into existence. Even so, our loophole-free Bell test opens the possibility to progressively bound such less conventional theories."

It's a bit strong to call it loophole-free then, isn't it? But "free-of-the-two-most-common-loopholes" is a lot less sexy. Nevertheless, cool work.


The difference is that the whiteboard formula is actually called the Clauser-Horne-Shimony-Holt (CHSH) inequality, not the Bell inequality. It is a slightly more sophisticated version, concocted by the four physicists it is named after, five years after Bell's original paper. The first two numbers were 0.56 and 0.82. The third was –0.59, so it seems I would have to take this away from the running total. The fourth number, another 0.56, should then have left me with a total of 1.35 and victory for Einstein.

That’s not what I showed. http://www.bbc.co.uk/iplayer/episode/b04tr9x9/the-secrets-of...

In fact, the subtlety is that the third term, the one that had a negative value, was already negative. The inequality read:

P(a,b) + P(a,b’) – P(a’,b’) + P(a’,b) ≤ 2,

So, plugging in all the numbers, this looks like:

0.56 + 0.82 – (–0.59) + 0.56 = 0.56 + 0.82 + 0.59 + 0.56 = 2.53

So, sorry Einstein, victory goes to Bohr instead.
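The arithmetic above is easy to double-check; a minimal sketch (the `chsh` helper is just my name for the left-hand side of the inequality quoted above):

```python
# CHSH combination S = P(a,b) + P(a,b') - P(a',b') + P(a',b),
# which any local realist theory keeps at or below 2.
def chsh(p_ab, p_ab2, p_a2b2, p_a2b):
    return p_ab + p_ab2 - p_a2b2 + p_a2b

# The four correlation values quoted above; note the third is negative,
# so subtracting it *adds* 0.59 to the total.
s = chsh(0.56, 0.82, -0.59, 0.56)
print(s)  # 2.53, violating the local-realist bound of 2
```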


If you want to know more about quantum entanglement, here is an excellent lecture on the subject:

https://www.youtube.com/watch?v=dEaecUuEqfc


Slightly tangential question: I know that entanglement doesn't allow information transfer faster than the speed of light, c. But given that knowing the state of the measured particle means we also know the state of the entangled particle, and measuring again could produce a different result, couldn't you conceivably construct a machine which repeatedly measured a number of particles until each reached a desired state, thereby constructing a message via the entangled counterpart particles for someone to consume? I suppose you'd need very accurate timing at both locations to know when to attempt to read the message at the destination.

Or am I missing something along the lines of "it's not possible to measure the entangled counterparts without affecting something else, therefore making the whole thing impossible"? I'm sure I am, but hope someone could explain.


The easiest way to think of it is as follows: When you entangle the particles they have opposite states, and each has their state hidden in a box. When you open one box, you know what the state of the other particle must be.

Only there is no box, and the state isn't well defined until you measure it.

Or there is no box, and the state isn't well defined for anything that hasn't interacted with the particles, but upon interacting whatever did the interaction becomes entangled too and sees the defined state of the particles but the rest of the universe wouldn't know the state of whatever did the interacting.

Or there is a box, but it's the size of the whole universe, and every time anything interacts with it (looks inside the box) the box splits up into all the possible things that could have been seen by that interaction and you get lots of universes.

Or a number of other ways to think about the issue depending on your preferred interpretation of quantum mechanics, but no matter how you look at it it's nothing like a normal box.


Yes, you're missing something. Measuring one of the particles breaks the entanglement, so that whilst you know the state of the corresponding particle, you can't affect it further.
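One way to convince yourself that no message gets through: B's local outcome statistics are 50/50 no matter which basis A measures in, so B sees nothing A can control. A minimal sketch, assuming the entangled state (|00⟩ + |11⟩)/√2 and real measurement bases at angles α and β (the function names are mine):

```python
import math

def basis(theta):
    # Orthonormal basis rotated by theta:
    # |phi_0> = cos t |0> + sin t |1>,  |phi_1> = -sin t |0> + cos t |1>
    return [(math.cos(theta), math.sin(theta)),
            (-math.sin(theta), math.cos(theta))]

def joint_probs(alpha, beta):
    """Outcome probabilities for the state (|00> + |11>)/sqrt(2)
    when A measures at angle alpha and B at angle beta."""
    A, B = basis(alpha), basis(beta)
    probs = {}
    for a in (0, 1):
        for b in (0, 1):
            # Amplitude <phi_a phi_b | Phi+> for real basis vectors
            amp = (A[a][0] * B[b][0] + A[a][1] * B[b][1]) / math.sqrt(2)
            probs[(a, b)] = amp * amp
    return probs

# B's marginal probability of outcome 0 is 1/2 whatever angle A chooses,
# so A's choice of measurement carries no signal:
for alpha in (0.0, 0.3, 1.1):
    p = joint_probs(alpha, 0.7)
    print(round(p[(0, 0)] + p[(1, 0)], 6))  # 0.5 every time
```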


Congratulations! The preparation scheme sounds very clever.


> Our observation of a loophole-free Bell inequality violation thus rules out all local realist theories that accept that the number generators timely produce a free random bit and that the outputs are final once recorded in the electronics.

Why should a local realist theory accept that? In determinism, there are no random events.


This has been studied. The kind of determinism that a local realistic theory requires in order to explain Bell violations is generally regarded as absurd: https://en.wikipedia.org/wiki/Superdeterminism


But that is just regular determinism; everything is determined by the prior state of the system, including the operation of the number generator and the experimenter's movements. As far as physics models go, I cannot see anything absurd about it.


Sorry, I didn't word that very well.

"Regular" determinism doesn't necessarily affect Bell's Theorem. All you need is that the measurement choices at each end are "effectively free for the purpose at hand". The kind of determinism you'd need to actually affect Bell's Theorem is a pathological, conspiratorial one in which the universe is well aware of which measurement choices you are going to make and just sets things up so that you'll get the same outcomes that quantum theory predicts.

This is discussed in Section D of the following paper: http://www-e.uni-magdeburg.de/mertens/teaching/seminar/theme...


That's a very interesting paper, quite well written too. Thanks for the link.


> And quantum theory allows two entangled particles to become linked in such a way that when a measurement is performed on one (breaking it out of superposition, and clicking it into a well-defined state), the properties of its entangled partner will likewise become defined, instantaneously — no matter how far apart they are separated.

Besides the fact that measurement at that level is actually interaction, how can you prove that both particles were not having the same state from the start? If it's a consequence of the wave function being placed in the configuration space by the Copenhagen interpretation, we need to be certain that we don't add epicycles upon epicycles.

BTW, there are many more quantum mechanics interpretations besides the most popular one, and some of them don't violate locality: https://en.wikipedia.org/wiki/Interpretations_of_quantum_mec...


They are prepared that way from the start. They are prepared in a state such that measurements along the same axis (for spin) will be anti-correlated.

The "spookiness" only comes into it when you try to understand how this would happen in a classical system.

This isn't a classical system, it's a quantum mechanical system. It's just how quantum mechanics works.

A good analogy is: how do electromagnetic waves travel through a vacuum? If you try to understand "how" from the perspective of someone who only has experience of sound waves, which travel mechanically through a medium, the fact that electromagnetic waves travel through a vacuum may appear spooky.

But it's not; that's just what electromagnetic waves do. We can express this behaviour mathematically, predict how electromagnetic waves will behave, and then verify those predictions experimentally.

Does this mean we know how they travel? No. We only know that they do. Do we know how quantum entanglement works? No, we only know that it does. We can express the behaviour mathematically and make predictions that are verified experimentally. No more spooky than radio.

QED: pop-science articles are deliberately confusing people.


The wave-particle duality is also often explained misleadingly. It’s not that particles sometimes behave as particles and sometimes as waves—it’s that they behave in the normal fashion, and sometimes our models for explaining their behaviours are wave-like and sometimes particle-like.

My intuition about the travel of photons is that, because of length contraction, there’s no distance to travel, so they don’t actually travel through a vacuum at all, and of course that’s why they don’t need a medium, and it takes no time at all—from the photon’s perspective, anyway.

The thing that blows my noggin is that this system we’re in is complex enough to express us, who may eventually have the ability to understand it from the inside. That’ll be the real trick. :)


> My intuition about the travel of photons is that, because of length contraction, there’s no distance to travel

This is not correct. Length contraction, as a concept, doesn't apply to photons, because there is no inertial frame in which they are at rest. Or, to put it another way, length contraction and time dilation arise from the way that Lorentz transformations act on timelike and spacelike vectors: they rotate them (in the hyperbolic sense of "rotation"). But Lorentz transformations act differently on null (lightlike) vectors: they don't rotate them, they dilate them (physically, this means that changing frames changes the frequency/wavelength of photons, rather than their speed).
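A tiny numerical sketch of that difference, in 1+1 dimensions with c = 1 (the `boost` function and the rapidity parametrization are standard; the specific numbers are just illustrative):

```python
import math

def boost(t, x, rapidity):
    """Apply a 1+1D Lorentz boost with the given rapidity."""
    ch, sh = math.cosh(rapidity), math.sinh(rapidity)
    return (ch * t - sh * x, -sh * t + ch * x)

# A timelike vector gets hyperbolically "rotated": its components mix,
# but the interval t^2 - x^2 is preserved.
t, x = boost(1.0, 0.0, 0.5)
print(t * t - x * x)  # still 1.0

# A null (lightlike) vector is merely scaled by exp(-rapidity):
# both components shrink by the same Doppler factor, so it stays null
# and still represents something moving at c.
u, v = boost(1.0, 1.0, 0.5)
print(u, v)  # both equal exp(-0.5)
```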


Oh wow, thanks for the eye-opener. Was it a different term than length-contraction that I was looking for? I thought that acceleration toward c resulted in the contraction of distance, and at c the distance would simply be reduced to zero. Where did I miss the point?


> My intuition about the travel of photons is that, because of length contraction, there’s no distance to travel, so they don’t actually travel through a vacuum at all, and of course that’s why they don’t need a medium, and it takes no time at all—from the photon’s perspective, anyway.

How is there length contraction in this experiment?


There isn’t, at least not as far as I understand. (Physics is just a hobby interest for me.) I was referring to GP’s comment about an intuition for how electromagnetic waves travel through vacuum.


> how can you prove that both particles were not having the same state from the start?

But this is exactly what these experiments aim to disprove. That kind of predeterminism is precisely a form of local hidden variable theory, and such a theory is ruled out if we can demonstrate the violation of a Bell inequality. The standard reference for understanding the connection is this layman's-terms paper by Mermin:

http://web.pdx.edu/~pmoeck/pdf/Mermin%20short.pdf

If we accept that this experiment is indeed a standard-loophole-free violation of Bell's inequality, we then conclude that we must give up one of three things: locality, realism, or all free will. This rules out any local hidden-variable theory. The mainstream view is then to keep locality and free will. Some alternative theories (notably Bohmian ones) give up locality and keep realism, but this is strongly at odds with what we know from special relativity, and indeed locality is assumed in all work on quantum field theory (through Lorentz invariance), which has been verified to match experiments to a precision above that of any other physical theory. The third option, which essentially no one supports, is that free will is impossible in our universe and that everything everywhere is predetermined.


Is there any chance that you can expand on the precise meaning of "free will" in a quantum physics context? My understanding of the usual common-sense interpretation is roughly "The actions that I take are supernatural, and are not a consequence of physical laws", which seems quite absurd to me, and I'd expect that to have very little support in the physics community. If not this, then what do "realism" and "free will" mean here?


Free will is typically defined to mean you[n+1] != classical_function(you[n]), at least sometimes or with some probability. That does not necessarily imply anything supernatural, though if the supernatural (or any kind of dualism) exists that would explain it. It could also arise due to quantum noise or any other process that violates classical determinism.

There are those who define free will a bit differently though. It's not a precise term. Another definition is that you[n+1] cannot be computed from any function other than you or something isomorphic with you -- in other words you are not coarse-grainable or predictable using any subset of your state. Someone would have to literally make a copy of you to predict your behavior, possibly down to the atomic or quantum level.

I've heard functions/processes with this property called computationally irreducible:

https://en.wikipedia.org/wiki/Computational_irreducibility

Basically the computational irreducibility definition of free will just means nothing outside of you can predict what you're going to do unless it has an exact copy of you or something functionally equivalent (uploaded mind, etc.).

Another variation on the same idea is the "arrow of time" view put forward by Ilya Prigogine:

http://www.amazon.com/The-End-Certainty-Ilya-Prigogine/dp/06...

This is IMHO very close to if not identical to Wolfram's computational irreducibility, but framed a bit differently.

Obviously humans are somewhat predictable, but somewhat predictable doesn't imply deterministic. My personal opinion is that the second theory (irreducibility/arrow of time) is almost certainly true, and the first is also probably true. So we are probably both irreducible and indeterminate. I'd say the same is likely true of any living thing and possibly other complex natural processes.


My opinion is that we are predictable and deterministic, but we choose to cling to the idea of us being more than that because we can't deal with the other option. The other option is quite simple, from what I see: we'll never manage to completely read, simulate and predict a complex system like our body, so in reality nobody will be able to predict what we'll do - and for me that's enough to feel comfortable. From what I see free will is defined by others as some kind of magic process through which our decisions are based on some random factor which is unpredictable. I'd say the randomness doesn't need to exist as long as the unpredictability holds.


If we cannot really choose, we are not really responsible for anything we do. Are you comfortable with that?


This appears to be an appeal to consequences.

That aside, however, I would say that this depends on what you mean by "responsible".


> My understand of the usual common-sense interpretation is roughly "The actions that I take are supernatural, and are not a consequence of physical laws", which seems quite absurd to me, and I'd expect that to have very little support in the physics community.

I can't speak for the physics community, but most philosophers are ok with a purely physical being having free will.

http://plato.stanford.edu/entries/compatibilism/


http://www.math.leidenuniv.nl/~gill/causality.pdf :

> Finally we need to assume that we have complete freedom to choose which of several measurements to perform - this is the third principle, also called the no-conspiracy principle.

https://en.wikipedia.org/wiki/Bell's_theorem#Overview :

> Freedom refers to the physical possibility to determine settings on measurement devices independently of the internal state of the physical system being measured.


Thanks; that clears it up.


This in no way rules out local variables.

Suppose each fundamental particle took Graham's number of bits to fully describe. Now, there is no way to disprove or measure such complexity. ( https://en.m.wikipedia.org/wiki/Graham%27s_number )

Granted, that generally puts such a theory outside of science, but that's our limitation and may not necessarily apply to reality.


No, that's not sufficient. Since the observables are represented by self-adjoint linear operators on a Hilbert space which for some observables is infinite-dimensional, there is an infinite number of states for each particle, so Graham's number of bits is insufficient.

As the wikipedia page mentions, physicists who work in the field and who believe there are hidden variables agree that experiments show these must be non-local.


You are confusing a model with reality. We have no way to observe the difference between a sufficiently large number and infinity.

For an overly pedantic counterexample, each particle could simply simulate the rest of the universe to some finite precision.

PS: That's not to say you can't rule out specific theories that use local variables. And we should use the simplest theory that works; however, it's counterproductive to suggest we can rule out all forms of local variables.


Nope. Entanglement has been observed for particles that have never coexisted in time [1]. This would mean that even your overly pedantic counterexample requires that we give up free will: since we choose what property to measure, the first particle is unable to simulate the outcome of the second measurement unless it can simulate what we choose. (And yes, it's generally accepted that assuming superdeterminism, a.k.a. no free will, allows a local hidden variable theory.)

[1] http://m.phys.org/news/2013-05-physics-team-entangles-photon...


"Requires we give up free will"? Sure, done.

...

In other words it still works. Note, it need not be a 100% accurate simulation. If it's good enough, we can't tell.

Not that I think reality works this way, but the goal is to look for ways that a theory falls down, not look for evidence in support of a theory.

PS: FTL communication also works as a means to sidestep the need for a lot of quantum weirdness. Sure, we don't like it, but that does not mean it can't be happening.


"The third option, which essentially no-one supports, is that free will is impossible in our universe and that everything everywhere is predetermined."

Well, not just "everything everywhere is predetermined" (which some people do support) but quite specifically predetermined so that every scientist that has so far made a decision on what to measure in these quantum experiments has had that decision correlate strangely with the underlying physical result.


Might it be that Lorentz invariance breaks down when Bell's inequality is violated? Is that what Bohmian theories postulate? Or is there independent evidence for Lorentz invariance under the conditions of the present experiment?


> Is that what Bohmian theories postulate?

No. Bohmian theories are non-relativistic to begin with, so they don't have anything to say about Lorentz invariance.

> is there independent evidence for Lorenz invariance under the conditions of the present experiment?

If you mean, has Lorentz invariance been tested for photons and electrons, yes, it has, to high accuracy. The Wikipedia article has a good summary of experiments:

https://en.wikipedia.org/wiki/Modern_searches_for_Lorentz_vi...


> Besides the fact that measurement at that level is actually interaction, how can you prove that both particles were not having the same state from the start?

You can't get the same results that you get with entanglement if there is some kind of internal "agreement" among the particles from the start. I think the clearest example of how that kind of thing cannot reproduce the results that you actually get with entanglement is a thing called the CHSH game.

The CHSH game goes like this.

You have two players, A and B. When the game starts, A and B are sent to separate locations, very far apart (say, a light-day apart), and the two locations will be at rest relative to each other (quantum is confusing enough...let's keep relativity out of this!).

At each of those locations, there is a referee. The referee has a true random number generator, which he uses to generate 1000 random bits, one at a time. After each bit is generated, the referee tells the player the bit, and then the player selects a bit, 0 or 1.

Once the players have picked their bits, they are brought back to the original location, and they are scored. Scoring works thusly:

For i = 1 to 1000, the players get 1 point if and only if the AND of the referees' i'th bits == the XOR of the players' i'th bits. In other words, if the players' bits match, then the players get a point if either or both refs had 0. If the players' bits do not match, they get a point only if both refs had 1.

Before the game starts, the players are allowed to work out a strategy for the game, and they are allowed to bring anything with them that they want.

Without using entanglement, the best strategy for the players is to agree to both pick 0 every time. They will get the point 75% of the time that way.

With entanglement, though, they can do better. They prepare 1000 entangled qubit pairs in the state (|00⟩ + |11⟩)/√2, and they each take one qubit from each pair with them when they are separated. They also agree on a common reference so that when they do measurements on their qubits they can orient their bases in a particular way as described below.

When player A is told the i'th bit, A takes the qubit from the i'th entangled pair; if the ref's bit was 0, A measures it in the {|φ0(0)⟩, |φ1(0)⟩} basis, and if the bit was 1, A measures in the {|φ0(π/4)⟩, |φ1(π/4)⟩} basis. The result of this measurement is the bit the player chooses.

B does similarly, except B uses the {|φ0(π/8)⟩, |φ1(π/8)⟩} basis if the ref gives a 0, and the {|φ0(−π/8)⟩, |φ1(−π/8)⟩} basis for a 1.

Here's a diagram showing the angular relationships between these bases, in degrees:

   B1                B0
   |                 |
   |        0        |        45
   +--------+--------+--------+ 
   -22.5    |       22.5      |
            |                 |
            A0                A1
Notice that when either player sees a 0, then no matter what the other player sees, they are measuring their qubits using bases that are 22.5 degrees apart. When both players see 1, then they end up using bases that are 67.5 degrees apart.

The probability of their measurements agreeing is cos² of the angle between the bases. This turns out to be 85.4% when either sees a 0 and they use bases 22.5 degrees apart, so they get the point 85.4% of the time in that case.

When both see a 1, and they use bases 67.5 degrees apart, the probability of agreeing is 14.6%, but since in the case of both seeing a 1 they want to disagree to get the point, they get the point 85.4% of the time in this case too.

So, overall, using their entangled qubits, they get the point 85.4% of the time, which is much better than the 75% of the time non-quantum approaches give.

If you replace the spooky entanglement with something where the qubits' state was determined at the start, the above does not work. You only win 75% of the time.

Obviously, no one has literally played the CHSH game as described above, but they have done equivalent experiments, and the result is a better score than you could get without the spooky entanglement.
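The two win rates above can be checked numerically. A minimal sketch: the classical optimum comes from exhaustively enumerating deterministic strategies (shared randomness can't beat the best deterministic one), and the quantum value just applies the cos² agreement rule described above rather than simulating measurements.

```python
import itertools
import math

def classical_best():
    """Best classical win rate over all deterministic strategies.
    Each player's strategy maps the referee's bit to an answer bit."""
    best = 0.0
    for sa in itertools.product((0, 1), repeat=2):
        for sb in itertools.product((0, 1), repeat=2):
            # Average over the four equally likely referee inputs (x, y);
            # the players win iff x AND y == answer_A XOR answer_B.
            wins = sum((x & y) == (sa[x] ^ sb[y])
                       for x in (0, 1) for y in (0, 1))
            best = max(best, wins / 4)
    return best

def quantum_win_probability():
    """Win rate of the entangled strategy: answers agree with probability
    cos^2 of the angle between the measurement bases -- 22.5 degrees
    unless both refs say 1, in which case 67.5 degrees (and on that
    input the players want to *disagree*)."""
    agree_small = math.cos(math.radians(22.5)) ** 2   # inputs 00, 01, 10
    disagree_big = 1 - math.cos(math.radians(67.5)) ** 2  # input 11
    return (3 * agree_small + disagree_big) / 4

print(classical_best())           # 0.75
print(quantum_win_probability())  # ~0.854
```

The quantum value works out to (2 + √2)/4 ≈ 0.854, the Tsirelson bound for this game.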


The CHSH proof of S ≤ 2 for hidden variables supposes that the integral over all angles φ of the probability of passing/interacting, for a spin/polarization/etc. at angle φ to the detection axis, is 1 (i.e. either there is no loss, or the loss is distributed proportionally over φ), as the integral of Malus's law's cos²(φ) produces 1. That is the theory. In practice I haven't been able to find experimental confirmation of Malus's law for single photons. A practical loss of high-φ photons at the polarizer - a cut-off or significant dampening of the tail of cos² - would easily produce S > 2 for a hidden-variable "polarization" of the photon. Moving the cut-off to π/4 (almost as Einstein suggested; the difference is that he preserved total probability 1, which isn't necessarily true for practical experiments) even produces the maximum S = 2.8 for a hidden-variable theory. The same applies to spin as well. Notice this isn't the detection loophole, which is about detection after the polarizer or after the spin measurement; it is about the measurement/interaction itself.

In the posted article the entanglement between the electrons exists only in theory: two different physical objects - the generators - receive the same random number, and the connection ends there. They could have used two completely separate random generators, and after completing the experiment used only the events where the two random numbers were equal. Absolutely no physical connection, yet I bet they would still see the "entanglement" - entanglement post factum, propagating from the present into the past.

At least in the downconverted-photon-pair case one may try to believe in some entanglement magic, because the photon pair is "made" from one photon - i.e. a real physical connection exists at some moment in the photons' past.

In short: to claim a Bell violation here, one has to experimentally show the single-electron-spin equivalent of Malus's law.


Right. The many-worlds interpretation is a standard SF plot device for reconciling free will and determinism, faster-than-light travel, time travel, and so on.

So is there evidence that rules out a many-worlds interpretation for the present experimental results?


> is there evidence that rules out a many-worlds interpretation for the present experimental results?

I don't see how there could be, since the MWI makes all the same predictions for experimental results as the other interpretations of QM. That's why they're all called "interpretations" instead of "different theories".


The Everett FAQ Q16 disagrees. http://www.hedweb.com/manworld.htm

And if you're in the mood for a lot of reading http://lesswrong.com/lw/r8/and_the_winner_is_manyworlds/


> The Everett FAQ Q16 disagrees.

It looks to me like it "disagrees" mostly by mis-stating what other interpretations predict about the (highly idealized) thought experiments it describes. For example, it says that if we had a "reversible machine intelligence", we could use it to entirely reverse a measurement after it was performed. But on a collapse interpretation (such as Copenhagen), such a "reversible" measurement, like any reversible process, is not a measurement at all; it doesn't collapse the wave function. So the interpretations don't actually differ on this prediction; they just differ on how the result (which they both agree on) is interpreted.

The FAQ does raise one issue which might amount to a "difference in prediction", namely the issue of whether linearity is exact or not. The MWI requires that it is; any collapse interpretation requires that it is not. However, even here there is a problem about what constitutes a "prediction". The MWI claims that the other "worlds" exist, but we can't communicate with them; we can only detect them via interference effects, which basically means by keeping processes reversible, as above, so a collapse interpretation will just say that no collapse has occurred. But that is effectively the same as the collapse interpretation's prediction that the other worlds don't exist after a collapse has occurred. In other words, these two putatively different "predictions" are in fact experimentally indistinguishable.


The only explicit experimental setup I can find on that webpage is in the last couple of paragraphs of Q.36, where it describes a "reversible machine intelligence" which can perform measurements and then reverse the entire measurement process. The issue is that this reversal requires having fine control over the joint quantum state of the system, measurement apparatus and all record of the measurement outcome. However, the very fact that this whole setup can be described as being in one quantum state or another is sufficient for a Copenhagen advocate to deduce that no collapse would have occurred.

Now, different flavours of Copenhagen will differ as to why collapse has not occurred. Bohr would probably have argued that a measurement entails an interaction between a classical system and a quantum one, whereas this has not happened in the above. A more sophisticated approach like QBism would say that quantum theory is simply an agent's calculus of experience, and that if the entire system has a quantum representation then this means the agent is still external to the system and has not yet interacted with it. Either way, Copenhagen makes the same predictions as MWI in this experiment, as it does in all experiments that admit a purely quantum description.


> there are many more quantum mechanics interpretations besides the most popular one, and some of them don't violate locality

That's because they choose to violate some other postulate instead that is just as intuitively appealing.



