I hope Pilot Wave Theory [0][1] gets more recognition, and that future work is able to extend it to account for relativity; there's hope that we can actually find a deterministic approach to Quantum Mechanics.
Here's an amusing video of an analogous macroscopic experiment, with droplets oscillating and interacting with each other in stable states: https://www.youtube.com/watch?v=JUI_DtzXdw4
I've been fairly interested in pilot wave theory, and the more I delve into it, the less I feel it's useful.
My understanding is that the statistical equivalent of the pilot wave theory is "when you flip a coin, there is not a 50/50 chance, but a 100% chance of getting the result, because you got the result". Hyper-determinism.
The programming equivalent is "all functions are built with lookup tables, with precomputed values". Great, but I still want to predict the values of the lookup tables to describe the function.
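To make that concrete, here's a toy sketch of the analogy (the function and names are mine, purely illustrative):

```python
# Two ways to expose f(n) = n^2: compute it from a rule, or replay a
# precomputed table. Callers can't tell the difference on stored inputs,
# but only the rule lets you predict entries nobody bothered to store.

def f_rule(n: int) -> int:
    return n * n  # the generating law

f_table = {n: n * n for n in range(10)}  # the "precomputed values"

assert all(f_rule(n) == f_table[n] for n in range(10))

# The table alone can't extrapolate: f_table[1_000_000] is a KeyError,
# while the rule answers instantly. The complaint here is that a
# lookup-table "explanation" hands you the table without the rule.
print(f_rule(1_000_000))
```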
From the wiki description of the "guiding function" (basically, the universe's step function):
>The main fact to notice is that this velocity field depends on the actual positions of all of the N particles in the universe.
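For concreteness, the guiding equation being paraphrased there is usually written as follows (this is the standard de Broglie–Bohm form, with ψ the universal wave function and Q_k the actual position of particle k, not anything specific to the wiki article):

```latex
\frac{dQ_k}{dt} \;=\; \frac{\hbar}{m_k}\,
\operatorname{Im}\!\left(\frac{\nabla_k \psi}{\psi}\right)\!(Q_1,\dots,Q_N,\,t)
```

The right-hand side is evaluated at the actual configuration of all N particles at once, which is exactly the dependence the quote is pointing at.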
This feels totally unfalsifiable! "In the exact state of the universe, you get the result you get." None of the results end up being usable, because there's always the "out" of "the universe has changed". You end up going into hyper-determinism, which is almost all about negating free will.
Even if pilot wave theory is the "real thing", the other model of quantum mechanics ends up being more useful, because at least you can make some predictions about the state of the universe.
(Would love to have someone explain to me how we can apply Pilot Wave Theory in a "local-ish" fashion, people worked hard on this stuff so I imagine there's some use)
...It makes the same localized predictions as other quantum theories and has the same utility (aside from not having relativity incorporated).
One just says "we fundamentally can't know, lulz random" and the other says "we could know, if we knew everything". PWT just has the local probabilistic behavior because we can't know the little fiddly bits of the rest of the universe, even if we know the dominant local forces. Copenhagen just says that it does random shit, no reason.
I'm not sure why you think the other interpretation of QM gives predictive power PWT doesn't.
Ed: To specifically address the stats point --
PWT says that if we had total knowledge of the universe, we could determine that the coin would always be, e.g., heads under specific conditions, but due to measurement limitations we can only say that it will come up 50/50. The discrepancy between the theoretical ability to predict perfectly and our need to approximate statistically comes down to our ability to measure (and compute).
By contrast, Copenhagen says there is no coin between when it leaves your hand and when it lands, so clearly we can't do any better at predicting than the statistics of how often we see the coin heads-up on the table after it leaves our hand.
You seem to fault PWT for making the same predictions as other models, just because it doesn't declare the problem of doing better fundamentally impossible.
Isn't PWT saying "well if you saw the entire state of the universe you could know the answer" the physics equivalent of "every terminating calculation is O(1) relative to the heat death of the universe"?
Because the interpretation still leads to the "wave function collapse" interpretation within the experimental scope anyway, how does it feel any less impossible?
Is the theory equivalent to other quantum theories? If so, the value of that interpretation is purely philosophical, right? Personally, I'm not super satisfied. It's brought up as an interpretation that removes the non-determinism, but in reality it hides the non-determinism, making it "practically invisible" by extending the description out to the entire universe.
To bring back the stats thing... I could either say I have a 50/50 chance to get heads or tails on a flip, or I could say "well, if I could see through time I'd know all the results and they'd be knowable, so the results are 100% likely to be known". Sure, that's true but that's not happening.
Granted, there's no local deterministic theory... but what's the value of determinism if the non-locality extends to the universe?
My understanding is that the guiding function mechanism of PWT could describe just about any physical mechanism, not just quantum effects. Since the pilot wave itself is what truly encodes the results, you can mess around with this "initial data" and get whatever you want. (Granted, I haven't thought too much about this, so this could be easily falsified.)
If the objective is to describe some fundamental truth, isn't it odd that the description can be applied to everything?
Maybe the universe is odd and meaningless, though the Copenhagen interpretation captures that feeling pretty well ;)
I think PWT works better with TQFTs, but it's an area of active research how TQFTs work, so we'll have to wait and see.
So far, the difference is philosophical, but it's worth pointing out that so is Copenhagen.
They originally thought they had a choice in the theory between non-locality and non-determinism, so they chose to build the theory around non-determinism for philosophical reasons. However, non-locality eventually had to be incorporated anyway, because some phenomena are non-local in nature.
So in a lot of senses, the Copenhagen interpretation simply has an extraneous assumption of non-determinism and they don't want to rework their model because it would be a lot of work.
How does "hyper-determinism" differ from "determinism"?
I find that when people use adjectives like "hyper" or "ultra" for some theoretical view (like people who talk about 'ultra-darwinism') they are trying to make something appear "extreme" without making any real argument for that position.
> You end up going into hyper-determinism, which is almost all about negating free will.
That phrasing implies that people come up with, or want, "hyper-deterministic" views because they want to negate the idea of free will. Is that what you mean? If not, what are you trying to say?
Sorry, this might be the wrong term for a higher level concept based on determinism.
If everything is deterministic, then our actions can theoretically be pre-determined. I've seen discussion around pilot wave theory that goes into details about this thought process.
This is fine and good. But if I publish "A solution to the double slit experiment" and the contents are "everything is deterministic, therefore all the photons go where they go" (hidden behind a pilot wave and initial values), that's not really the right genre of paper, is it?
It's not that I'm demeaning it, but that I don't feel like it's in the same realm of discussion as Copenhagen's rather pragmatic "stuff happens behind the curtain, but here's how to work with it" approach.
> It's not that I'm demeaning it, but that I don't feel like it's in the same realm of discussion as Copenhagen's rather pragmatic "stuff happens behind the curtain, but here's how to work with it" approach.
I don't get the distinction you're trying to make. The maths are all the same, so the interpretation has almost no bearing on how to work with it.
Furthermore, pilot waves produce a deterministic theory because of non-locality, but it sounds like you're criticizing superdeterminism. They're not the same. 't Hooft is working on a superdeterministic theory based on cellular automata.
In any case, how are pilot waves any different from any other classical theory in this regard? It's about describing the system as accurately as one can, which entails inferring the initial conditions based on how we know the system evolves, i.e., this cannonball fell here because it was launched with force X at angle Y from height Z.
>How does "hyper-determinism" differ from "determinism"?
It's the difference between a trajectory that looks like Brownian motion but is actually a preprogrammed, fixed function, versus one that is actually sampled from a Wiener Process.
In the former case, if you know the function, you can predict the trajectory with total certainty. In the latter case, if you use the Wiener Process as a model, you can predict the trajectory probabilistically.
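A toy version of that distinction (a sketch of mine; the step sizes are arbitrary):

```python
import random

# "Preprogrammed" Brownian lookalike: a fixed list of steps. Anyone who
# knows the list predicts the whole trajectory with total certainty.
preprogrammed = [0.12, -0.40, 0.31, -0.05, 0.22]

def replay():
    x = 0.0
    for step in preprogrammed:
        x += step
        yield x

# Wiener-process model: independent Gaussian increments, so the best
# forecast is probabilistic (after n steps, position ~ N(0, n * s**2)).
def sample(n, s=0.25):
    x = 0.0
    for _ in range(n):
        x += random.gauss(0.0, s)
        yield x

print(list(replay()))   # identical on every run
print(list(sample(5)))  # different on every run
# (Aside: seeding the PRNG collapses the second case into the first,
# which is more or less the hidden-determinism point being debated.)
```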
I'm not familiar with those concepts (like Wiener Process), but I don't think notions of predictability should be mixed up with notions of determinism. Whether something is predictable or not does not change how deterministic it is.
I think what I have to say, then, is welcome to quantum mechanics. Or, in fact, welcome to probability theory. Mathematically, there's no difference between a process which can only be predicted probabilistically, and a process which really is random; no difference between incomplete knowledge in your head and stochasticity in the world.
An interesting question is: how could a fully deterministic ontology give rise to beings capable of having probabilistic beliefs? Worse, how can a fully deterministic ontology give rise to truly random mathematical entities like Chaitin's Omega? In fact, in such a universe, where do the bits for atomic random-number generators come from? After all, if they're not really random, there should be some deterministic model capable of predicting them, and yet, quantum mechanics tells us precisely that such a thing is impossible.
You can try to have a Bayesian epistemology with a nonstochastic ontology, but it doesn't really make sense. Just let your ontology be stochastic: then and only then you can physically account for both physical randomness and probabilistic belief.
> I think what I have to say, then, is welcome to quantum mechanics. Or, in fact, welcome to probability theory. Mathematically, there's no difference between a process which can only be predicted probabilistically, and a process which really is random; no difference between incomplete knowledge in your head and stochasticity in the world.
I'm not talking about randomness. I'm talking about determinism, and as far as I can see what you're saying says nothing about determinism or differing degrees of determinism.
> An interesting question is: how could a fully deterministic ontology give rise to beings capable of having probabilistic beliefs?
I don't see that as a problem at all. It fits completely fine with the idea that probability is a matter of lack of knowledge.
> Worse, how can a fully deterministic ontology give rise to truly random mathematical entities like Chaitin's Omega? In fact, in such a universe, where do the bits for atomic random-number generators come from? After all, if they're not really random, there should be some deterministic model capable of predicting them, and yet, quantum mechanics tells us precisely that such a thing is impossible.
You're talking like it's already a settled matter as to whether reality is fundamentally deterministic or not. Are you saying that there's definitive proof that deterministic accounts of QM are incorrect?
The person I was replying to said "Mathematically, there's no difference between a process which can only be predicted probabilistically, and a process which really is random; no difference between incomplete knowledge in your head and stochasticity in the world."
This is because randomness can be seen as a matter of incomplete knowledge. Determinism is independent of our knowledge.
Now, perhaps there is randomness that is independent of our knowledge. But I was talking about randomness/determinism in the context of the quoted statement, which was a matter of knowledge.
> This is because randomness can be seen as a matter of incomplete knowledge. Determinism is independent of our knowledge.
I'm not sure there's a meaningful difference. "Determinism" as you're describing it is an interpretation with no practical implications.
You can imagine a god controlling every "random" event in our universe by selecting the outcome from "the big book of outcomes" and claim that everything is deterministic. You can also imagine the god consulting a D20 instead, and claim that everything is non-deterministic. The former makes randomness a function of our lack of knowledge and the latter makes it truly random. These two scenarios are equivalent and neither is falsifiable, though, making the distinction meaningless. From your perspective, random events are still non-deterministic.
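If it helps, here are the two gods as code (a toy sketch, everything named by me):

```python
import random

class BookOfOutcomes:
    """The deterministic god: reads each 'roll' from a book written in advance."""
    def __init__(self, pages):
        self._pages = iter(pages)

    def roll(self):
        return next(self._pages)

class D20God:
    """The non-deterministic god: rolls a d20 on demand."""
    def roll(self):
        return random.randint(1, 20)

# Write the book in private beforehand, using the same d20.
book = BookOfOutcomes([random.randint(1, 20) for _ in range(1000)])
die = D20God()

# Looking only at the outputs, both produce d20-distributed rolls; no
# experiment on the rolls alone separates "written in advance" from
# "decided on the spot". That's the distinction without a difference.
print([book.roll() for _ in range(5)])
print([die.roll() for _ in range(5)])
```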
You're just making up scenarios that fit your point of view, without arguing that these correspond to the actual options.
Are you arguing that there cannot be any distinction between determinism and non-determinism?
[In reply to the comment further down:]

> I'm arguing that your "deterministic randomness" is indistinguishable from "nondeterministic randomness". You're trying to define a class of "randomness" to be "deterministic but unknown".

The scenarios are ones involving a god choosing and performing actions, and the supposed randomness of their dice. What are they meant to translate to?
> You're just making up scenarios that fit your point of view, without arguing that these correspond to the actual options.
No, I gave scenarios that demonstrate why your distinction is not meaningful. If you have a scenario that changes this, can you please explain it?
> Are you arguing that there cannot be any distinction between determinism and non-determinism?
No, I'm arguing that your "deterministic randomness" is indistinguishable from "nondeterministic randomness". You're trying to define a class of "randomness" to be "deterministic but unknown". Near as I can tell, this is a distinction without a difference.
This isn't too surprising, because it's a QM interpretation that's designed to be indistinguishable from the others. The MWI is unfalsifiable (you can't see the other worlds) and Copenhagen is too (you can't see wave function collapse actually happen).
Copenhagenism is falsifiable. It says that "large" things are classical and can't be put in superpositions. "Large" isn't well defined, but it includes at least humans. So, put a human in a superposition, and you've falsified it.
The biggest problem with Copenhagen is that it doesn't say whether or not there is a physical collapse. Instead, think of it as a rule of thumb: if you apply the quantum/classical cut in the appropriate spot for your experiment, it will give the right answer. But it gives no guidance on where that spot is or what is physically happening.
From the many worlds perspective, it's actually quite clear where the Copenhagen "collapse" occurs: it's when something enters a superposition with Earth.
Some of us want to understand what the universe is really like. If that's uninteresting to you because you're more of a utilitarian, that's ok, but that's not the only way of approaching life on this blue planet.
It's not just about being a utilitarian. Science is supposed to make predictions. If it can't make a prediction that can be tested, it's not physics, it's philosophy.
I've had many discussions with people who don't truly seem to grasp the concept of determinism. I think we should never accept a theory that proves everything is predetermined. If the theory is right, it doesn't matter whether people believe it or act in accordance with it anyway, and nothing anybody does matters. Time does not exist, nor do actions, and neither do people. Nothing is here or there if it is predetermined. If the theory is wrong but seems right for a while, it will lead to less favourable outcomes for those who act in accordance with it while they actually had other options.
That is, if you believe in free will. I don't believe in free will (surely it's impossible) or determinism, but rather in chaos with (maybe) some reinforcement by deterministic systems (such as our brains). Our actions are powered by constant rolls of the dice: at our birth, in our brains, and on the surface of the sun, where at this very moment photons are being released that indirectly animate all human life on earth.
"Pre"-determination is a misleading term in this context. The wave function collapsed, and the result was what it was because that's what the result was.
TLDR of my opinions in this space: the cosmos is simultaneously deterministic and probabilistic. There is no free will. Nor are all things predetermined in the sense that most people use the word.
I would say: because a totally deterministic, nonstochastic universe becomes subject to paradox theorems about Laplace's Demon.[1] You need at least a little bit of stochasticity to make prediction and unpredictability work out.
Most of the disproofs here don't actually seem to disprove the demon's existence, just limit its ability to answer trick questions.
"Suppose that there is a device that can predict the future. Ask that device what you will do in the evening. Without loss of generality, consider that there are only two options: (1) watch TV or (2) listen to the radio. After the device gives a response, for example, (1) watch TV, you instead listen to the radio on purpose. The device would, therefore, be wrong. No matter what the device says, we are free to choose the other option. This implies that Laplace’s demon cannot exist."
This doesn't prove that the demon can't exist, just that it can't answer the question. If it knows what you will do in any case of its response, and it knows what its response will be, it knows what you will do, generally. It just can't tell you what you are going to do. Sort of a difference between being omniscient versus being omnipotent.
In other words, the demon knows that its response will be X and that your actions will be Y; its response just can't be truthful. It's all still perfectly deterministic.
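The shape of that argument, as code (the `contrarian` function is mine, just to make the structure explicit):

```python
# The contrarian from the quoted argument: always does the opposite of
# whatever the demon announces.
def contrarian(announced: str) -> str:
    return "radio" if announced == "tv" else "tv"

# A demon with a perfect model can compute contrarian("tv") == "radio"
# and contrarian("radio") == "tv", so it knows the outcome as a function
# of its own announcement. What it cannot find is a fixed point, an
# announcement A with contrarian(A) == A, so it can never announce
# truthfully. Omniscience survives; truthful disclosure doesn't.
for announcement in ("tv", "radio"):
    assert contrarian(announcement) != announcement

print("no truthful announcement exists")
```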
Free will is just a convenient illusion based on the fact that predicting the universe faster than realtime would almost certainly require more resources than the universe contains. At best, we can limit our scope for imperfect predictions based on imperfect knowledge and limited processing ability. The universe is likely deterministic, but there's no way to act on that in a meaningful way, so for the purposes of human existence, we may as well act as if we have free will.
Free will is an outcome of a universe which is not purely deterministic; if you have a proof that the universe is deterministic, then essentially there is no free will.
Free will goes beyond the concept of your brain deciding what to have for dinner.
People are actually analogous to physical systems: it's much harder to predict the actions of an individual, whilst predicting group actions on a larger scale is easier, since the various individual inputs are effectively canceled out.
This is like modeling, say, a glass of water: modeling each individual molecule is nearly impossible, because you get to the point of not being able to measure them, especially when you're trying to measure or predict the subatomic makeup of each molecule, but modeling the entire system is easy.
Free will can exist in the same way God can exist. At least with our current understanding of the universe, we lack a falsifiable test to disprove either. However, there is also no evidence to support the existence of either with any greater certainty than the claim that there is a teapot orbiting the sun somewhere between the orbits of Earth and Mars.
You are welcome to believe in God, orbital teapots, and/or free will, of course, so long as your belief doesn't lead to actions which create a negative imposition upon others. Sadly, that is all too common, and far less welcome, in my eyes.
> You need at least a little bit of stochasticity to make prediction and unpredictability work out.
There is already unpredictability in all sufficiently complex formal systems. For instance, it's unpredictable whether Turing machine T(i) will halt. Determinism is all you need!
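For a concrete flavor of deterministic-but-unpredictable, consider the Collatz map (my stand-in example here; whether the loop below terminates for every starting n is a famous open problem):

```python
# A fully deterministic rule, yet nobody can predict, for arbitrary n,
# whether this loop terminates -- that's the open Collatz conjecture.
def collatz_steps(n: int) -> int:
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 111 -- found only by running it out
```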
"Free will" negates itself as a useful concept whether you embrace or reject determinism (either one's will is a function of body and environment or its not in which case it is arbitrary), so I don't think it's a relevant consideration to the veracity of determinism or PWT in general.
It's no more or less useful than any other interpretation of quantum mechanics. Is a many worlds interpretation more useful? They all have the same mathematics.
PWT has an underlying dynamics, much like classical mechanics: to compute its predictions exactly, one has to have perfect knowledge of the current state. Given that knowledge and a perfect computational environment, one could then determine the future for all time, at least ignoring creation and annihilation of particles.
Clearly this would be useless if that were all it provided. But so was classical mechanics. For example, to compute the gravitational force on my finger, one technically needs, in Newtonian theory, to know the positions and masses of all particles in the whole universe. Equally clearly, that's neither practical nor relevant.
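Spelled out, that's just Newtonian superposition (the standard formula, nothing PWT-specific):

```latex
\mathbf{F}_{\text{finger}}
\;=\; \sum_{i \,\in\, \text{universe}}
\frac{G\, m_{\text{finger}}\, m_i}{\lVert \mathbf{r}_i - \mathbf{r}_{\text{finger}} \rVert^{2}}\,
\hat{\mathbf{r}}_i
```

The sum formally runs over every particle there is; the 1/r² falloff is what lets us truncate it to nearby bodies in practice, and the statistical analysis below plays the analogous role for PWT.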
Instead, we use the well-defined dynamics to do a statistical analysis of the theory. What happens when we do that is that we get a quantum thermodynamics equivalent, which is what the Copenhagen interpretation is. But Copenhagen suffers from being vague and undefined (what the heck is a macroscopic measurement?). It is very imprecise and not a theory at all. And yet, when it appears as a statistical framework derived from an underlying well-defined theory, the measurement problem completely recedes.
And it is important to understand that it is the theory that ought to tell us what a measurement actually means.
The next point to raise is: what is the point if we just derive Copenhagen? Well, the point is that it allows us both to understand the setup and to extend it. If all questions were known and answered, it would be largely philosophical. But we have not resolved the problems (QM + relativity) in standard quantum theory after many decades of research.
What are a few things that PWT explains?
1) Identical particles. By having configurations of particles, one can investigate what a configuration is. And, after a moment's thought, identical particles are not labelled by nature. Choosing a configuration space of unlabelled particles, we end up with the identical-particle choices of symmetric or antisymmetric wave functions. It gets more complicated with spin, but it turns out spin is best represented as a value of the wave function located at a spatial point, not an intrinsic property of the particle. And then the problems just disappear.
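The symmetry choices in question are the standard exchange rules (+ for bosons, − for fermions), which on an unlabelled configuration space are just the two ways a function can lift to ordered configurations:

```latex
\psi(\dots, q_i, \dots, q_j, \dots) \;=\; \pm\,\psi(\dots, q_j, \dots, q_i, \dots)
```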
2) Arrival times. There is a time of arrival, naturally enough, in a theory with particles actually moving about. In most theories, the collapse or splitting of the universe, etc., does not really fit in with time-of-arrival experiments. That doesn't stop physicists from coming up with answers, of course, but here a theoretical framework is conceptually trivial.
3) Collapse of the wave function. This collapse is central to the whole business. There is no collapse in the fundamental theory. But in suitable situations (measurements), the environment can be conditioned on to get a local wave function for the system in question. When a measurement happens with the environment registering it, the environmental conditioning changes that local wave function and that appears as a collapse.
This also helps deal with the murkiness of a position measurement, which is never some precise state to collapse into. It is all a bit wishy-washy, which is a problem for a theory such as Copenhagen in which that is all there is. It is a trivial shrug in PWT.
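For the record, the "conditioning on the environment" step has a compact standard form in the Bohmian literature, the conditional wave function (here x is the system's configuration and Y(t) the actual configuration of the environment):

```latex
\psi_t(x) \;=\; \Psi_t\bigl(x,\, Y(t)\bigr)
```

When the environment registers a result, Y(t) moves, and ψ_t(x) changes abruptly; that jump is what shows up as "collapse".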
4) Creation and annihilation of particles. There are particles that exist, and they can be born or destroyed. There are formulas that give a probabilistic form for the creation of particles. This makes the theory no longer deterministic, but that was never the point of PWT. The point was to be an actual theory, one which was well-defined. There is nothing wrong with randomness. So the creation and annihilation of particles is perfectly consistent with PWT. It requires wave functions defined on a disjoint union of configuration spaces of different numbers of particles. This is exactly what quantum field theory does.
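The configuration space being described is roughly the Fock-style disjoint union over particle numbers (a standard construction; Q_n is the space of n-particle configurations):

```latex
\mathcal{Q} \;=\; \bigsqcup_{n=0}^{\infty} \mathcal{Q}_n,
\qquad
\Psi \in \bigoplus_{n=0}^{\infty} L^2(\mathcal{Q}_n)
```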
5) Well-defined quantum field theory. It is legendary that QFT is not well-defined mathematically. Recent work, inspired by PWT, looks promising in finding a well-defined path in QFT. Essentially, the perturbation from a free evolution cannot work. Instead, one has to work with functions in which the different particle-number spaces are linked in such a way as to preserve probability. Once that is done, divergences fade away. The mathematical work is still quite difficult and it is early days, but it has had successes in toy theories that were not workable before.
6) Relativity. This is the big one. By having a clear theory to work with, one can understand the nonlocality that is present in reality (Bell's theorem) [unless you deny the results of experiments when they happen, as in MWI]. There are explorations of natural foliations of space-time which provide the kind of nonlocality needed. In terms of something that works mathematically, that exists. A theory that is philosophically satisfactory is still being pursued. Essentially, the foliation is needed but is not detectable, which is kind of what is needed, but it is unsatisfactory.
PWT is not a return to determinism. It is a return to clearly defined theories, whether they are deterministic or not. PWT is a theory for which one could envision a computer being fed data and then computing forward without further intervention. Copenhagen is not like that: one needs to know what experiments and measurements are to be done, which, of course, can't really be specified in advance, since which experiments are done may depend on what happened in the past.
[0]: https://en.wikipedia.org/wiki/De_Broglie%E2%80%93Bohm_theory
[1]: https://en.wikipedia.org/wiki/Pilot_wave