There exists a classical model of the photon after all (lightbluetouchpaper.org)
170 points by inglesp on Feb 23, 2015 | 84 comments



Scott Aaronson has commented negatively on a previous paper by the same authors [1]. I don't know if similar issues apply to this one:

> [...] the paper advances the prediction that quantum computation will never be possible with more than 3 or 4 qubits. [...] I wonder: before uploading their paper, did the authors check whether their prediction was, y’know, already falsified? How do they reconcile their proposal with (for example) the 8-qubit entanglement observed by Haffner et al. with trapped ions [...]

(Note: that's a critique of the previous paper, not the linked one. Although the linked post mentions quantum computers not working, the linked paper does not touch the subject.)

1: http://www.scottaaronson.com/blog/?p=1255


They claim to have found a classical system that reproduces quantum mechanical effects. But if they manage to extend it to many interacting particles, they will find that they have just come up with another interpretation of QM which is experimentally indistinguishable from the rest. And it wouldn't even be the first one. (Bohm's hidden variable theory got there first.)

Furthermore the "incompressible fluid" they postulate sounds like it enables non-local behavior (which it has to, to match current versions of the Bell test), so it is unable to help resolve the issue of reconciling GR with QM.

So this does rather less than they claim. Assuming that their claimed result is correct.


Models like theirs predate Bohm by a long shot. From the introductory paragraph of the paper:

"In 1746 Euler modelled light as waves in a frictionless compressible fluid; a century later in 1846, Faraday modelled it as vibrations in ‘lines of force’ … Fifteen years later Maxwell combined these approaches, proposing that a magnetic line of force is a ‘molecular vortex’…"

They basically updated Maxwell's model. From their conclusion:

"We brought Maxwell’s 1861 model of a magnetic line of force up to date using modern knowledge of polarised waves and of experiments on quantised magnetic flux. Our model obeys the equations for Euler’s fluid and supports light-like solutions which are polarised, absorbed discretely, consistent with the Bell tests, and obey Maxwell’s equations to first order."

What's nice is that their model is classical. Even if it "just" makes exactly the same predictions as other models, it's nice to have a model where physical intuition can be brought to bear.
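
For readers wondering what "the equations for Euler's fluid" are: presumably the standard equations for an inviscid compressible flow (my gloss, not something spelled out in the paper quote above), i.e.

    \partial_t \rho + \nabla \cdot (\rho \mathbf{u}) = 0                          (conservation of mass)
    \partial_t \mathbf{u} + (\mathbf{u} \cdot \nabla)\mathbf{u} = -\nabla p / \rho    (momentum, no viscous term)

closed by an equation of state p = p(\rho). "Inviscid" just means the viscous term of Navier-Stokes is dropped; "compressible" means \rho may vary, which is what lets the medium carry sound-like (and, in their model, light-like) waves.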


>> What's nice is that their model is classical. Even if it "just" makes exactly the same predictions as other models, it's nice to have a model where physical intuition can be brought to bear.

A classical model can also be simulated on a classical computer, so if it produces the same results as QM then quantum computation would be... fiction or just redundant?


The only caveat to this is the ontological status of Faraday's "lines of force". Their model is based on these, and so far as anyone knows they are just a conceptual or pedagogical convenience. They have a quantitative meaning (you can actually calculate the "density of lines of force" between two charges or magnetic dipoles) but they aren't really good for much. They are generally mentioned in passing in intro or intermediate E&M courses, but mostly as a matter of historical interest.

If they could be shown to have an independent effect of the kind that the vector potential was shown to have via the Aharonov-Bohm effect, then this whole approach to quantization would become extremely interesting. Otherwise, you're right: it's just another interpretation of QM, and not a very interesting one at that. (Despite their claims, as I explained in a separate comment, they can't reproduce the experimental violations of the CHSH inequalities in Aspect's and other experiments, which introduce time-variation precisely to rule out the kind of prior communication they are arguing for.)


> Furthermore the "incompressible fluid" they postulate sounds like it enables non-local behavior

It says compressible, not incompressible.


A pretty common mistake. Fluids are always compressible (otherwise sound wouldn't travel in STP water); the question is whether one chooses to model the flow as compressible or not.


They really ought to stop teaching in middle school that fluids are incompressible.


I think that is a simplification of the model for high school level physics. It's like when we use Hooke's law to describe a spring, even though it only holds true for part of the range. Compared with a volume of gas, liquids are "incompressible."


What's the current status of Bohm's hidden variable theory? Does it stand up in light of the Bell test (I was under the impression that the Bell results suggest an arbitrary number of hidden variables would be necessary)?


> What's the current status of Bohm's hidden variable theory? Does it stand up in light of the Bell test

In "normal" quantum mechanics we have the wave-particle dualism. A particle behaves also like a wave, whatever that means.

In Bohm's hidden variable theory, also known as "pilot wave theory", the wave and the particles are separate. The pilot wave is a wave of unknown making (the theory does not say what it would be made of), and this wave follows the normal quantum mechanical behaviour. Then particles "ride" on this wave. So all the quantum mechanical wave effects happen in the pilot wave, and then the classical particle-like particles just follow their paths, already laid out by the wave.
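
To make "riding the wave" concrete, the textbook way to write it (standard Bohmian mechanics, not anything specific to the linked paper) is that \psi evolves by the ordinary Schrödinger equation while the k-th particle's position Q_k follows the guidance equation

    dQ_k/dt = (\hbar / m_k) \, \mathrm{Im}(\nabla_k \psi / \psi), evaluated at (Q_1, ..., Q_N, t)

Note that the velocity of each particle depends on the positions of all the others through \psi, which is exactly where the theory's non-locality lives.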

Although "spooky action at a distance" is also experimentally verified, explaining this by postulating a wave that fills the whole universe, and explicitly reacts spookily over distances, makes the whole "spooky action at a distance" uncomfortable explicit in this theory, so it doesn't appeal to most physicists.

"The de Broglie–Bohm theory makes the same (empirically correct) predictions for the Bell test experiments as ordinary quantum mechanics. It is able to do this because it is manifestly nonlocal."

http://en.wikipedia.org/wiki/De_Broglie%E2%80%93Bohm_theory#...


> What's the current status of Bohm's hidden variable theory? Does it stand up in light of the Bell test?

Yes. Bell's theorem doesn't forbid hidden variable theories (in fact, he has a publication preceding his famous result explicitly demonstrating a hidden variable theory that could reproduce QM measurements on a two-level system), just local hidden variable theories. In his collected papers, Bell often remarks that he feels Bohm's interpretation should be more widely studied.

It's a good question why it's not. My understanding, though I'm not well versed in the subject, is that there are some pretty severe shortcomings to it, particularly when you start considering systems with many particles. I've read some of Bohm's writings to try to understand it and as far as I got I found it pretty underwhelming; it seemed more like a bookkeeping trick than any sort of real insight.


Try reading some of the stuff from http://www.bohmian-mechanics.net/ . I was a student of Sheldon Goldstein of that group and can attest that it has no problems with many particles.

In fact, part of my thesis was explaining how identical particles are handled in Bohmian mechanics. Basically, one considers a configuration space whose points are sets of points in physical space rather than some arbitrarily ordered points. One then immediately gets bosons and there is a principle which guides one to fermions. This works beautifully with spin as well, where the spin is associated with the physical point in space, not with the particle label as is usually done. I find it to be very elegant.
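
If it helps, the construction is roughly this (my paraphrase of the standard presentation, so take the details with a grain of salt): the configuration space is

    Q_N = { S \subset \mathbb{R}^3 : |S| = N }

i.e. ordered N-tuples with coincidence points removed, modulo permutations. That space is not simply connected (its fundamental group is the permutation group S_N), so wavefunctions on it fall into topologically distinct classes; in three dimensions the two one-dimensional possibilities correspond to bosons and fermions.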

There is also an extension to quantum field theory in which the particle creation and annihilation operators actually represent those events with the Bohm particles. So that is not a problem either. It becomes an indeterministic theory at that point (creation is a random event while annihilation is simply when the particles collide).

The main conceptual question is dealing with the instantaneousness of the theory. It is a merit of BM that it brings this to the forefront. One can always put in an arbitrary (hidden) foliation of space-time to deal with it, but that is a bit distasteful even if, as it happens, there are foliations that can be derived from the other existing structures.

One final note is that Bell was a very strong proponent of Bohmian mechanics even when Bohm had forgotten about it for a time. Bell's formulation was based on the first order probability flow and not the quantum potential. Bell's version works for spin while the quantum potential does not (sadly for numerical work).


I don't know and don't keep track. I lost interest after realizing that it can be made compatible with any possible observation.

I'm personally a fan of the Everett interpretation, also called many worlds, which is what you get if you assume that quantum mechanics applies to the observer. Then the act of observation throws the observer into a superposition of possible states in which different things were observed. And those states cannot meaningfully interact later for thermodynamic reasons.

Unless someone comes up with good reason to believe that quantum mechanics does not describe humans, I see no reason not to accept it. And if quantum mechanics is replaced by something different, to the extent that quantum mechanics is an accurate description of us, that interpretation remains correct.


There's a problem with many-worlds that I haven't seen convincingly refuted: when collapsing a superposition of two states you might (or rather, almost always will) need the ratio of worlds with outcome A to worlds with outcome B to be irrational. So unless you create a continuum of new worlds each time it won't work.


The wavefunction is a superposition of n independent wavefunctions at different amplitudes, like you can divide a piano's sound into a bunch of pure sine waves. "From the inside" each feels like a self-contained world (in a physically rigorous sense), but the ratio between their amplitudes and phases can be an arbitrary complex number - just as even if you're only playing a C and a G, the ratio of their amplitudes can be anything.
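
In symbols, just restating the point:

    |\psi> = \alpha |A> + \beta |B>,   with \alpha, \beta complex

The only physically meaningful quantity is the ratio \alpha/\beta (up to an overall phase), with weights |\alpha|^2 and |\beta|^2. Nothing there requires the weights to be ratios of integers, so there is no discrete "count of worlds" that would have to come out irrational.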


Why is this a problem? This complaint looks to me much like someone complaining about the Pythagorean theorem because it results in irrational numbers. (And yes, historically there were such complaints. And no, that is not a good reason to discard that theorem.)

A system described by physics evolves according to the laws of physics. We find the result surprising. But that is because we have bad intuition, not because physics is fundamentally broken.


> when collapsing a superposition of two states you might (or rather, almost always will) need the ratio of worlds with outcome A to worlds with outcome B to be irrational. So unless you create a continuum of new worlds each time it won't work.

Has this been shown? I would want any such proof inspected for hidden continuum assumptions existing in the probability calculations, thus leading to irrational probabilities.


Try reading http://arxiv.org/pdf/0903.2211.pdf . It basically presents many worlds as a mass density obtained by integrating out the wave function. The worlds we see can be traced through the evolution, like two videos overlaid on each other can each be followed over time.


The Everett Interpretation is best thought of as the idea that collapse just doesn't happen. Which means that there aren't discrete integer worlds but only higher and lower amplitudes. If you had discrete worlds that would also be a problem because you could change how many worlds you had by how you did the math.


>good reason to believe that quantum mechanics does not describe humans

One simple idea is that the more energy and matter you shove into the wave function, the easier it is to collapse, so that at the macro level quantum effects are rendered impossible.


The problem is that there is absolutely no experimental or theoretical reason to believe that quantum states ever collapse. And there is an explanation for why we would perceive collapse even if there is none.

If you take those ideas seriously, you're forced into the Everett interpretation.


But the "explanation" assumes what it sets out to prove.

If there is no reason to believe in collapse, there is no reason to believe that we can only be conscious of our state of entanglement with one component of a wavefunction rather than both.

That is, the Everett interpretation assumes that for some unknown reason we can only be conscious of the classical world, and uses this assumption to "explain" that we are conscious only of the classical world.

Consider a polarizing beam splitter with detectors in either arm. We are only ever conscious of a photon being detected in one arm or the other. But why not both, since the matter of our brain is necessarily entangled with both components of the photon wavefunction?

All Many Worlds does is push the central mystery around, from "Why do photons prepared in the same initial state collapse into different final states?" to "Why aren't we conscious of being entangled with both photon polarization states rather than just one?" It won't do to simply say, "Well, consciousness doesn't work that way." We know it doesn't. The question is, given the otherwise completely continuous physics describing the world, why is the physics of the brain such that it can't generate consciousness of that world?

Decoherence and similar approaches have the same problem, because they assume that for some reason the brain is unable to detect the quantum world without the aid of such classical phenomena as interference patterns in photon detection, but there is simply no warrant for that assumption.

If you restrict your description of the universe to non-collapsing QM you would never guess at the existence of the classical world. Ergo, a brain fully described by non-collapsing QM is a quantum brain, and there is no particular reason why it shouldn't be in all states at once. That it is not in all states at once is manifestly true, but the question is "Why not?" It won't do to simply assume it, as all these alternative interpretations of QM do.

Getting the brain to be aware of only a single classical world is exactly the same problem as getting a wavefunction to collapse. It has just moved the problem around, not solved it.


> If there is no reason to believe in collapse, there is no reason to believe that we can only be conscious of our state of entanglement with one component of a wavefunction rather than both.

Absolutely true. The process of cognition is addressed by science. Consciousness, not so much.

> That is, the Everett interpretation assumes that for some unknown reason we can only be conscious of the classical world, and uses this assumption to "explain" that we are conscious only of the classical world.

No such assumption is made. In fact you are the one adding an implicit assumption that consciousness obeys classical rules, when we have no data suggesting that such is the case.

> Consider a polarizing beam splitter with detectors in either arm. We are only ever conscious of a photon being detected in one arm or the other. But why not both, since the matter of our brain is necessarily entangled with both components of the photon wavefunction?

This is a non-issue if consciousness can exist in non-interacting superpositions. Indeed, there is indirect evidence that this is the case.

While science is currently unable to explain the phenomena of consciousness, we are able to say a lot about the process of cognition. And to date nobody has ever demonstrated that we can be conscious of something we did not learn about through physically understood processes. (If you have such a demonstration, there is a million dollar prize waiting for you, courtesy of James Randi.)

Quantum mechanics predicts that entanglement will cause that physical system to separate into a superposition of non-interacting states. And therefore all evidence is that there is no way for your consciousness of one state to affect cognition and therefore any awareness of any other.

Again, this is now a non-problem.
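
Schematically (my sketch of the point, with made-up labels for the observer states):

    (\alpha |arm 1> + \beta |arm 2>) \otimes |observer ready>
        --> \alpha |arm 1>|observer saw 1> + \beta |arm 2>|observer saw 2>

Once the observer and the surrounding environment are entangled like this, the two branches are, for all practical purposes, unable to interfere, which is the sense in which consciousness of one state cannot affect cognition in the other.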


Most of this is me thinking out loud. Feel free to ignore it.

With regard to consciousness, I meant only that we are only conscious of the classical universe. This is not an assumption but a statement of fact. It is why QM seems weird to us: we are only aware of quantum effects via inference from statistical distributions, not via a conscious awareness of the wavefunction in the way we are conscious of rocks.

I'm not concerned with nor do I need to make any claim about the mechanisms of consciousness, but only rely on the factual and uncontroversial observation that "We are consciously aware of only the classical world". There is a case to be made that this defines the classical world.

> Quantum mechanics predicts that entanglement will cause that physical system to separate into a superposition of non-interacting states.

This is actually a vastly more coherent way (as it were) of putting the argument than it is usually stated. I don't find it immediately convincing for a variety of reasons, but it is at least a testable claim. I'm particularly concerned about the role of weak measurements in breaking the "non-interacting" aspect, and the potential for delayed-choice measurements. But even without those there is trouble.

There are also the usual conservation concerns: how is it that both states end up with all the mass, energy, charge and other quantum numbers we normally consider to be conserved? That is, what is the ontology of these non-interacting states?

Consider an ion and a photon that interact such that they are entangled. The ion has a net charge as well as a mass and angular momentum. For fun, let's say that the photon starts out unpolarized, as does the ion, which is in a magnetic field. The ion has spin 1/2 and the interaction is such that the photon goes from right to left circularly polarized and the atom goes from Jz = -1/2 -> 1/2, or vice versa. So we end up with states that look like |L 1/2> and |R -1/2>.
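
(Written out, and assuming equal amplitudes for definiteness, the joint state is something like

    |\Psi> = ( |L>_photon |+1/2>_ion + |R>_photon |-1/2>_ion ) / \sqrt{2}

i.e. an entangled superposition of the two product states, not either one separately.)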

The claim is that no future evolution of the system will allow these states to ever interact with each other. But we know this is not the case. We could easily pass the photon through a beam-splitter and 1/4-wave plates to make the two components interfere with each other. We can do anything we like to the ion (pass it through a dipole, accelerate it, whatever) and it will not change this. So long as we don't "measure" it (whatever that is) it will be possible to get the components of the photon wavefunction to detectably interact. But it won't do to talk about "measurement" in such a context, because that's what we're trying to avoid.

The question is: what is the condition such that the components of the individual particle wavefunctions may never be brought back together to show an interference pattern again? "Thermalization" (entanglement with a heat bath) is the usual claim, but I'm unconvinced that this is not simply hiding the quantum mystery behind a thermodynamic one (and the thermodynamic mystery would require a proof of something like Boltzmann's H-theorem to actually work.)

Nor does this answer the real question, which is: why is it that we are aware of all this only via inference from interference patterns? The quantum state formalism captures that fact, but does not explain it (Note to self: I need to formulate more clearly what I would consider an "explanation" in this regard.)


Ah, but macro-level systems can amplify quantum effects. For instance, the human eye -- under the right conditions -- is sensitive to individual photons.


A packet of energy being absorbed and exciting a molecule in my eye, causing a cascade through a complex system, still feels entirely classical. I'm not sure merely invoking "photon" is enough to imply quantum effects. It's when the behavior breaks from deterministic to probabilistic (independent of initial conditions) that we switch paradigms.


"Updating this with modern knowledge of quantised magnetic flux, we show that if you model a flux tube as a phase vortex in an inviscid compressible fluid, then wavepackets sent down this vortex obey Maxwell’s equations to first order; that they can have linear or circular polarisation; and that the correlation measured between the polarisation of two cogenerated wavepackets is exactly the same as is predicted by quantum mechanics and measured in the Bell tests."

How long would I have to study physics to be able to understand everything in this sentence?


> How long would I have to study physics to be able to understand everything in this sentence?

Just start reading David J. Griffiths: Introduction to Electrodynamics. A very well written textbook. The problem might be, if you don't know vector calculus, you might not be able to read this book, so you need to learn some vector calculus, too.

Then start reading Introduction to Quantum Mechanics by Griffiths, too. Best introductory QM book that I know of. If you managed to read Electrodynamics, you should by now know enough calculus for this book, too. But you also need to know about complex numbers here.

The "inviscid compressible fluid" is about fluid mechanics. I don't know any splendid textbook on that.


I disagree with the suggestion of Griffiths' QM. Somehow, Griffiths made an excellent EM book, but a terrible QM one. I find Shankar's Principles of Quantum Mechanics to be much more comprehensive and easy to follow.


Wow, so nice to hear props for Griffiths' textbooks. I think he's by far the best physics textbook author.


I think Griffiths' E&M book is great. It's very enjoyable and makes a fine book to use before going on to Jackson (although more supplements are often needed to make it through that). I like Griffiths' writing and I liked the problems and examples he gives in this book.

However, I never liked Griffiths' QM book. The writing is OK (it's mostly in the same style as the E&M book, but to me it just seems like he tries too hard). Overall I didn't like his selection of which topics went to examples and which went to problems. I think Cohen-Tannoudji, et al., is the way to go for learning some QM. It's a bit more formal than Griffiths, but I think it makes far more sense and it has tons of good examples in the appendices.

But, to each his own.


I haven't read it, and it may be too advanced for this purpose, but Landau's books are generally held in extremely high regard and volume six of his Course of Theoretical Physics is on fluid mechanics.

For what it's worth, volumes two and three cover electrodynamics and quantum mechanics, respectively.


> Landau's books

Landau's presentation is extremely condensed. Griffiths is much more friendly towards the reader.

I would compare Landau to Knuth's The Art of Computer Programming. Some people do read them, but the rest of us just hold them in extremely high regard :-).


The Landau Lifshitz series is absolutely amazing. It works so well with my brain, entirely concise with just enough textual clarification as needed.

E.g., volume 1 is classical mechanics. By page 3 or so you've already derived the Lagrangian equations of motion.


Landau/Lifshitz are freely available on the Internet archive:

https://archive.org/search.php?query=creator%3A%22L.D.+Landa...


For superfluid mechanics (which they seem to be using as a foundation), the introduction to Donnelly's "Quantized Vortices in Helium II" is a pretty good start. The field is quite hermetic, though.


You should be able to understand everything in that sentence if you have taken a typical undergraduate curriculum in physics including quantum mechanics and electromagnetism. Though it depends on what you mean by "understand". It should be clear what "quantised magnetic flux" is roughly about, but I have no idea what constitutes the "modern knowledge of quantised magnetic flux" in this context.


That doesn't help with the first part, about a phase vortex in a compressible inviscid fluid.


It can take a bit of unpacking, but someone with some sort of college level physics and mathematics background can use Wikipedia to get a basic understanding of what they are talking about.

They are basically saying that the quantum mechanical 'strangeness' of light can be explained with classical, deterministic, physics. It is not necessary to have a separation in which quantum mechanics predominates at one level and trumps classical mechanics.

It can all be understood as movement of 'particles' of light (photons) on an underlying wave.

http://en.wikipedia.org/wiki/Magnetic_flux_quantum

http://simple.wikipedia.org/wiki/Magnetic_flux

http://en.wikipedia.org/wiki/Flux_tube

http://en.wikipedia.org/wiki/Quantum_vortex

http://en.wikipedia.org/wiki/Wave_packet

http://en.wikipedia.org/wiki/Maxwell%27s_equations

http://en.wikipedia.org/wiki/Differential_equation

http://en.wikipedia.org/wiki/Polarization_%28waves%29

http://en.wikipedia.org/wiki/Bell_test_experiments

http://en.wikipedia.org/wiki/Fluid_dynamics

http://en.wikipedia.org/wiki/Pilot_wave

http://en.wikipedia.org/wiki/De_Broglie%E2%80%93Bohm_theory


Pilot wave theory has been known for decades. So can someone explain how whatever they're talking about is different/new?


Their central claim is that they found a specific instance of quantum behavior where they can represent the whole model with only classical mechanics rather than falling back on quantum explanations like wavefunction collapse, etc. We think that describing quantum systems requires quantum explanations, so finding a counterexample could be interesting.


So they claim that up until now pilot wave theory hasn't been able to explain entanglement while remaining consistent with Maxwell's equations.

They are not saying they came up with pilot wave theory, only that they've removed a sticking point.


To really, really get it? The standard electromagnetism semester and a QM semester, both "for majors" (not the simplified general version), something else for the "inviscid compressible fluid" (fluid dynamics, I assume; this might require some stuff that won't come up until semester 2, not sure), and a factor difficult to express in terms of semesters: that you are not merely aping the mathematical results you are being taught by rote, but actually understand how to manipulate the math and can follow deeply when you see other people do it. (Which I mean quite straight, not sarcastically.)

I'm pretty sure this would all be accessible to an undergrad physics major who passes my math criterion above. It would probably be beyond them to do the work, but they should be able to follow it.


An aerospace major who took a QM elective, or a physics major who took a couple of aerodynamics electives, would probably cover the necessary material by the end of year 2 of an undergraduate program. If you've got a solid, intuitive grasp of calculus and differential equations, you could probably learn enough from lecture notes to understand this in a few weeks. This is the most accessible physics paper that you'll ever see make the news.


Just about every concept (mathematical and physics) in the arXiv paper linked from the article is covered by the end of a typical undergraduate course in electromagnetism and quantum mechanics. The ones that aren't (specifically with respect to certain particulars of fluid dynamics and flux) are easily supplemented with material of the same depth and complexity (i.e. undergraduate level).


It depends on what you mean by "understand everything". That is a very terse, densely packed sentence of jargon intended for practitioners, but a physicist who is a skilled communicator could describe the main results to a high school kid. To be able to fully understand the arguments leading to those results... it would take very serious study and a decently sophisticated mathematical background before even starting.


A lifetime, or long enough that it would feel like a lifetime. The real question to ask is whether that's really how you want to spend your life. The fact that you're asking makes me think you've already chosen another path.


It does not take a lifetime. It took me about 6 months to cover upper-division electricity and magnetism and quantum mechanics, which is probably all that you would need to get the most basic but complete understanding of this paper.


Climbing to 20,000 ft from 16,000 ft makes you think that climbing mountains is easier than it's going to be for the person at sea level that you're advising. If you've been at 16,000 ft for some time, even just that acclimatisation is going to make a huge difference.


"our paper shows that the main empirical argument against classical models of reality is unsound.". That's quite an affirmation!


I hope this is as awesome as it sounds. It sums up everything I've been thinking about quantum physics, from "someone should look closer at Couder's work" to "spooky action at a distance is BS" to "Quantum computers will never work - see spooky action".


> I hope this is as awesome as it sounds.

If it's true, it would be even more awesome than it sounds. It would be the biggest breakthrough in physics in 100 years. I'll give you long odds against it turning out to be true, but I won't bet my entire life savings on it.

[EDIT] I have now read the paper and I'm ready to bet my life savings that it's bogus. There's just nothing new here, just a hand-wavy argument that classical mechanics can violate the Bell inequalities because "lines of force." It's possible that QM will be overturned some day, but when it happens it won't look like this.


Yeah, I don't like it when they say it matches XXX to first order... Does it match or not? OTOH it suggests that another model might, just like Couder suggested.


Spooky action has been empirically demonstrated. https://arstechnica.com/science/2012/04/decision-to-entangle...


I call BS. If that's true, then Victor can not only send information across a distance, but back in time. Victor is changing the correlation of Alice and Bob's measurements after they have already been made. That's going to need serious verification for me to accept.


Or you could just, you know, check the published experimental results:

http://www.technologyreview.com/view/512281/chinese-physicis...

Physicists are increasingly convinced that time is an emergent property that arises due to low level quantum effects, principally entanglement [1].

The experiment in the Ars Technica article is the equivalent of the universe using lazy evaluation. It's really not so mysterious if you think outside the box of classical physics.

Regarding sending information back in time, from what I've read physicists are divided on the issue. Most agree that sending matter back in time is impossible, but information is still up for debate. In this particular experiment, however, any past observer who measured the information sent back in time would have collapsed the entangled quantum state and prevented the experiment from being conducted successfully. So being able to "successfully" send information back in time doesn't appear too useful if you can't read it.

[1] http://arxiv.org/abs/1310.4691


There are 69 papers that cite that one, but I don't see any attempts to reproduce. https://scholar.google.com/scholar?cites=1565984354572506461...

Also, do you have some explanation for violations of Bell's inequality that don't rely on spooky action?


>> Also, do you have some explanation for violations of Bell's inequality that don't rely on spooky action?

The article that started this thread is one. In some ways it's not even important that the model be a 100 percent match with reality. It behaves very similarly to reality and it matches the Bell inequalities without spooky action at a distance. Bell's statement pretty much says this cannot exist, yet there it is. So now we can stop talking about models that cannot be - because they do - and see if any of them is actually a good fit for reality. Of course I'm assuming the math in the paper turns out to be correct upon review.
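
For anyone who wants to see concretely what the Bell/CHSH statement rules out, here's a toy script (my own illustration, nothing taken from the paper): it brute-forces every local deterministic strategy and compares the best achievable CHSH value with the quantum prediction.

    import itertools, math

    # A local deterministic strategy pre-assigns an outcome (+1 or -1) to each
    # of the two settings on each side: A0, A1 for Alice, B0, B1 for Bob.
    best = max(abs(A0*B0 - A0*B1 + A1*B0 + A1*B1)
               for A0, A1, B0, B1 in itertools.product([+1, -1], repeat=4))
    print("max |S| over local deterministic strategies:", best)  # prints 2
    # Shared randomness only mixes these strategies, so it cannot exceed 2 either.

    # Quantum prediction for polarization-entangled (singlet-like) photons:
    # E(a, b) = -cos(2(a - b)), evaluated at the standard CHSH angles.
    E = lambda a, b: -math.cos(2 * (a - b))
    a, ap, b, bp = 0.0, math.pi / 4, math.pi / 8, 3 * math.pi / 8
    S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
    print("quantum |S|:", abs(S))  # prints 2.828..., i.e. 2*sqrt(2)

So any model reproducing the measured 2*sqrt(2) has to give up one of the assumptions baked into the bound of 2 (locality, or settings independent of the hidden state), which is what the disagreement here is about.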


Reasoning from intuition has a terrible track record, I'm afraid.


Really? Did Einstein have empirical proof of his ideas?


Einstein won his Nobel Prize for explaining the photoelectric effect, a measured phenomenon that didn't have an adequate explanation.

His work on Brownian motion showed that the observed movements of small particles can be explained by fluids being made up of small particles (i.e. molecules).

Special relativity is what you get when you combine the seemingly contradictory observed phenomena that there's no such thing as absolute motion and that the speed of light is a constant to all observers.

General relativity combines that with the observed phenomenon of gravity.

While I don't know whether these constitute "proof," Einstein certainly had a lot of empirical support for his ideas, and they weren't anything like pure intuition.


Einstein's idea of "intuition" is bastardized sometimes. He didn't mean "Having a common-sense opinion that something is wrong or right" as intuition, he just meant "Think really hard about the underlying principles and reach a conclusion", as opposed to "Make an arbitrary mathematical model, try to fit the data, iterate".


Why do you think this addresses the claim "Reasoning from intuition has a terrible track record?"

1) Einstein didn't work from intuition, but from a particularly narrow insistence that the laws of physics be the same for all observers. This informed both SR and GR, and in fact his "intuition", such as it was, led him wildly astray in the run-up to GR. His papers in the 1913-1915 timeframe were all over the map. Furthermore, the final decades of Einstein's life were almost completely sterile in terms of new physics because he let his intuition guide him: he insisted that "god does not play dice" and so on, which turned out to be a road to nowhere.

2) Even if Einstein had worked primarily on the basis of intuition (which he didn't) and had been right (which he wasn't when he relied primarily on intuition) it would not in any way absolve us from the duty of taking experimental results far more seriously than theoretical intuitions, because against that one (actually imaginary) triumph of intuition we would have to balance thousands of years of intuitions from very smart people that turned out to be false.

"Things fall toward the center of the Earth and planets move in perfect circles about it" was intuitively obvious to Aristotle. So were a lot of other falsehoods. Galen had a whole raft of intuitions about human physiology that were false. Everyone from Kant to the Positivists believed it was intuitively obvious that detecting a violation of the law of non-contradiction of the kind implied by the experimental violation of Bell's inequalities was impossible. And so on.

So even if we had a single instance of intuition being correct, we would still be crazy to rely on it given its long track record of abject failure. "It just makes sense" are the most dangerous four words you can speak, because they are the terminus of critical thought.


For special relativity, yes. (Maxwell's equations plus the negative result of the Michelson-Morley experiment.)


Also, Einstein's theories were a little bit more rigorous than "Well, the Luminiferous Aether sounds like bullshit to me".


Einstein started with large amounts of empirical data and created models that explained them.


How is this different from that?


You can go down to the local DIY electronics store and get what you need to demonstrate "spooky action at a distance" in your own garage. Just like how in Einstein's time, anyone with decently precise mirrors and measurement apparatus could tell that the changing velocity of the Earth did not change the relative velocity of light. Such experiments have been replicated tons of times. Yet phkahler is now calling BS based on... what?


I would greatly appreciate a link to anywhere that explains how to demonstrate "spooky action at a distance" in my garage or at a makerspace. Provided the materials aren't too expensive, I'd love to do a lab experiment showing it, especially if it can be done with common parts.


I got lost in the intermingling contexts of this discussion thread. I thought that sp332 meant to suggest that the new model linked to by the OP was not starting with empirical data and finding a model which fit it all.


phkahler said "spooky action at a distance is BS" despite tons of evidence.


Special relativity still holds where it is applicable, and general relativity has yet to be refuted and has been verified in interesting ways (the precession of Mercury's perihelion and gravitational lensing, for example). Einstein's other ideas, about phenomena such as the photoelectric effect and Brownian motion, have long since been verified, so I would say that, especially in comparison to some of his contemporaries, his theories have withstood the test of time very well.


Part of Einstein's genius was that he took seriously the fact that in all the empirical data, no one had yet found a way to distinguish between inertial acceleration and gravitational acceleration. He decided this was not a coincidence, but a fundamental equivalence. It resulted in his elevator thought experiment that predicted that gravity can bend light.


It's quite exciting to see these sorts of papers.

What fascinates me is that we have achieved so much in the "quantum age" of the past century using the models derived from a quantum mechanical approach to physics. That the bedrock [or lack of one] of that could be removed and provide a better, more consistent, approach seems so counter-intuitive. But then one recalls how long the Newtonian or Aristotelian approaches [or any other such system] stood.

Also, would this be a return to universal models with an aether? I wonder how Michelson-Morley works with "flux tubes".


The wheel of science turns, but it doesn't turn backward. Einstein refined Newton, but in no sense represented a return to Aristotle. Perhaps more to the point, Bell's theorem holds and the predicted violations of his inequalities have been experimentally verified; reality provably contains either nondeterminism or nonlocality. If you find the model with instantaneous communication along these "flux tubes" easier to work with then by all means work with it, but it's just another interpretation; most of us find the nondeterministic but local model ultimately easier to reason about.


You may like the article, "Clearing Up Mysteries - The Original Goal" by E.T. Jaynes:

http://bayes.wustl.edu/etj/articles/cmystery.pdf

>While it is easy to understand and agree with this on the epistemological level, the answer that I and many others would give is that we expect a physical theory to do more than merely predict experimental results in the manner of an empirical equation; we want to come down to Einstein's ontological level and understand what is happening when an atom emits light, when a spin enters a Stern-Gerlach magnet, etc. The Copenhagen theory, having no answer to any question of the form: What is really happening when - - - ?", forbids us to ask such questions and tries to persuade us that it is philosophically naive to want to know what is happening. But I do want to know, and I do not think this is naive; and so for me QM is not a physical theory at all, only an empty mathematical shell in which a future theory may, perhaps, be built.

...and maybe chapter 10 of his book, "Probability Theory: The Logic of Science".

>We are fortunate that the principles of Newtonian mechanics could be developed and verified to great accuracy by studying astronomical phenomena, where friction and turbulence do not complicate what we see. But suppose the Earth were, like Venus, enclosed perpetually in thick clouds. The very existence of an external universe would be unknown for a long time, and to develop the laws of mechanics we would be dependent on the observations we could make locally.

>Since tossing of small objects is nearly the first activity of every child, it would be observed very early that they do not always fall with the same side up, and that all one’s efforts to control the outcome are in vain. The natural hypothesis would be that it is the volition of the object tossed, not the volition of the tosser, that determines the outcome; indeed, that is the hypothesis that small children make when questioned about this. Then it would be a major discovery, once coins had been fabricated, that they tend to show both sides about equally often; and the equality appears to get better as the number of tosses increases. The equality of heads and tails would be seen as a fundamental law of physics; symmetric objects have a symmetric volition in falling.

>With this beginning, we could develop the mathematical theory of object tossing, discovering the binomial distribution, the absence of time correlations, the limit theorems, the combinatorial frequency laws for tossing of several coins at once, the extension to more complicated symmetric objects like dice, etc. All the experimental confirmations of the theory would consist of more and more tossing experiments, measuring the frequencies in more and more elaborate scenarios. From such experiments, nothing would ever be found that called into question the existence of that volition of the object tossed; they only enable one to confirm that volition and measure it more and more accurately...

http://www.med.mcgill.ca/epidemiology/hanley/bios601/Gaussia...
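
A quick toy version of Jaynes' cloud-bound physicists, just to make the quoted point concrete (my own sketch):

    import random
    from collections import Counter

    random.seed(0)
    toss = lambda: random.random() < 0.5  # one "fall" of the symmetric object

    # The observed frequency of heads settles toward 1/2 as tosses accumulate.
    for n in (10, 1000, 100000):
        heads = sum(toss() for _ in range(n))
        print(n, "tosses: frequency of heads =", heads / n)

    # Tossing 10 coins at once, many times over: the head-counts follow the
    # binomial distribution, the regularity that, in Jaynes' story, would be
    # read as a law about the coins' own "volition".
    counts = Counter(sum(toss() for _ in range(10)) for _ in range(20000))
    for k in range(11):
        print(k, "heads:", counts[k])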


When models like this are proposed, a lot of people are interested because of the philosophical implications of a classical theory of quantum phenomena.

The question I have, though, is this: does this model actually help model phenomena that we can't already model? Quantum gravity is the big spectacular example, but there are many others.

For instance, the Standard Model is very successful at predicting the anomalous magnetic moment of the electron. But it is not successful at predicting the same quantity for the muon. There are many other issues with the Standard Model that aren't so high-flung as quantum gravity.

Are classical models like these, if they can be shown to incorporate multiple particles interacting simultaneously, capable of going beyond the Standard Model or merely replicating it?


I don't know any physics, but it seems like a bit of a warning sign that these new discoveries in fundamental physics are announced on... a computer security blog?


> Updating this with modern knowledge of quantised magnetic flux

It sounds like they are using quantum mechanics to explain quantum mechanics.


"if the fundamental particles are just quasiparticles in a superfluid quantum vacuum"

well obviously.


This paper has a variety of issues, the most glaring of which is that their "explanation" of the experimental violation of Bell's inequalities (specifically the CHSH form that has been realized in many experiments on polarization) is dependent on a static setup of precisely the kind that Aspect's experiments were intended to avoid.

Aspect's work is one of the most beautiful pieces of careful and precise experimental testing of an idea in the past half-century, and while it has been attacked from many perspectives it is still a very robust argument for the non-locality of reality. One of the important things about it is that the polarization direction was switched in a quasi-random way after the photons had left the source. Variations on this trick have been performed since, and they all agree with the predictions of quantum theory.

The authors say in this paper "The CHSH assumption is not true in Faraday's model. Instead there is prior communication of orientation along phase vortices such as (4), communication which the CHSH calculation excludes by its explicit assumption."

In experiments like Aspect's, prior communication is ruled out because the experimental setup is varied in one arm of the apparatus outside the forward light cone of the other photon. Each photon gets detected before the other one could possibly know (based on signalling at the speed of light) what polarizer orientation it should be lined up with.
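
For reference, the "explicit assumption" in question is the local factorization of the correlation (standard CHSH setup, not a quote from the paper):

    E(a, b) = \int d\lambda \, \rho(\lambda) \, A(a, \lambda) \, B(b, \lambda),    with A, B \in {-1, +1}

from which |E(a,b) - E(a,b') + E(a',b) + E(a',b')| <= 2 follows. Fast, spacelike-separated switching of the settings is what stops A from depending on b (or B on a) without faster-than-light signalling, which is exactly the loophole that "prior communication along phase vortices" would need.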

So this is an interesting bit of work that might be useful in creating photonic quasi-particles in magnetic fluids that would allow for study of photon properties that might be difficult to get an experimental handle on otherwise, but the claim that they have a classical model that violates Bell's inequalities in a way that is relevant to the actual experimental work done in this area is considerably overblown.


I'm not a physicist, but I've always wondered about Caroline Thompson's work, like:

"Chaotic Ball" model, local realism and the Bell test loopholes

http://arxiv.org/abs/quant-ph/0210150

...any thoughts?


The scrutiny that Bell test experiments get from loophole people is always much appreciated, but the problem with Aspect's results in particular is that lovely parenthetical remark that appears in several of his figures, to the tune of "The dotted line is not a fit to the data, but the quantum mechanical prediction for this result."

While it is easy to imagine selective-detection effects that mess up the results enough to invalidate the test at the level of the inequality, it is very, very difficult to maintain all the physics required for precise, detailed agreement between theory and experiment of the kind that Aspect and others have shown. Here is an example of a "local realistic model" that reproduces the quantum mechanical results for in-time coincidences, but completely messes up any number of auxiliary measurements: http://www.tjradcliffe.com/?p=590

So while I'd love to see a modern version of Aspect's work using state-of-the-art entangled photon sources and the like, the likely reason it hasn't been done is that the odds of it revealing anything new and different are trivially small (but not zero, of course!)



