It's not about changing your mind for the money, it's about the fact that causality doesn't (shouldn't) run backwards in time.
Consider this: when you are faced with the choice, the allotted money is already under the boxes. How could what you choose now affect this outcome? It can't. You must always take both boxes to get the maximum amount of money possible. Either the $1,000,000 is under the single box, or it is not. If you take both boxes, you will get $1000 regardless of anything else, and possibly $1,001,000. If you take the single box, you will either get $0 or $1,000,000, but your action can't possibly change that, unless you believe somehow that causality runs backwards in time.
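Laid out as a payoff table (just restating the dominance reasoning above, with the standard amounts):

    Predictor's move                 take both        take only box B
    put $1,000,000 in box B          $1,001,000       $1,000,000
    left box B empty                 $1,000           $0

Whichever row you happen to be in, taking both boxes is exactly $1,000 better, and that is the entire dominance argument for two-boxing.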
Under a generalization of this problem, you can do transparent boxes and get basically the same paradox: in that case, omega never even presents you with this choice while putting $1 million in a box unless you're "the type of person" who would one-box even then.
Still no causality violation: omega simulates everyone,[2] and only offers the filled box to one-boxers, but leaves it empty for two-boxers. [1]
But you don't even have to consider these esoteric, hypothetical situations to get a Newcomb-like paradox. Even "merchants vs shoplifters" has a similar dynamic: you will only be in the position of being able to trivially shoplift merchandise if you're in a neighborhood that draws from the set of people who usually don't. Merchants (Omega) are accurate enough in their predictions to be profitable.
[1] See counterfactual mugging for a similar dynamic.
[2] With a thorough enough simulator, it may not be possible to tell whether "you" are in the simulator or doing the real thing.
That generalization is no longer a paradox; it's just a situation. The paradox is about choice theory and you have eliminated any element of choice.
People who choose (or act, if you don't care for free will) to take only one box are always leaving money on the table, full stop. The point of the game is to maximize winnings.
The alternative approach is that people who have chosen one box have always received more money than those who chose both. Explanations about how or why are distractions and are inconsequential; the paradox is about these two -- both generally considered to be valid -- approaches yielding such different results.
In my opinion, the resolution of the paradox is that it's an impossible situation. Either someone is lying about the mechanisms (in which case take one box like everyone else because it's a magic trick of some kind) or not (in which case the "predictor" can be wrong and the boxes are already set, so take both boxes to eliminate the risk of receiving nothing and to maximize your winnings).
>That generalization is no longer a paradox; it's just a situation. The paradox is about choice theory and you have eliminated any element of choice.
You're still choosing which decision procedure to use; that procedure determines how many boxes you take when offered this choice, which in turn determines whether you get this offer at all.
And I don't know what you're trying to say with the paradox/situation distinction; "Newcomb's problem with transparent boxes" is a paradox and a situation, just like the original: how are people ending up better off by "leaving money on the table"? (whatever that would mean)
>People who choose (or act, if you don't care for free will) to take only one box are always leaving money on the table, full stop. The point of the game is to maximize winnings.
But once you pin down what "leaving money on the table" means, it's not at all clear that the concept coincides with something you want to avoid. If the people "leaving money on the table" have more money, then "I don't want to be right", as the saying goes.
>In my opinion, the resolution of the paradox is that it's an impossible situation.
I disagree. At the very least, you can play as omega against an algorithm, with varying degrees of scrutability. How should that kind of algorithm be written so that it gets more money (in transparent boxes, how to get omegas to offer you filled boxes in the first place)? Your answer would require addressing the same issues that arise here for humans in that situation.
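Here is a minimal sketch of that setup (all names invented; Omega "scrutinizes" the program simply by running it ahead of time, and this is the transparent-box variant where the agent sees the big box before choosing):

    # Toy transparent-box Newcomb game. Omega "predicts" by doing a dry run of the
    # agent against a filled box, and only actually fills the box if that dry run
    # one-boxes. Purely illustrative, not the only way to model it.

    def resolute_one_boxer(big_box):
        return "one"                      # takes only the big box, even seeing $1M

    def two_boxer(big_box):
        return "two"                      # always grabs both

    def play_transparent(agent):
        filled = agent(1_000_000) == "one"      # Omega's ahead-of-time run
        big_box = 1_000_000 if filled else 0    # contents fixed before the real choice
        choice = agent(big_box)                 # the real, later choice
        return big_box if choice == "one" else big_box + 1_000

    print(play_transparent(resolute_one_boxer))   # 1000000
    print(play_transparent(two_boxer))            # 1000

The answer to "how should that kind of algorithm be written" ends up being: so that its dry run one-boxes, which for a deterministic program means so that it one-boxes, period.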
There are also statistical versions of the paradox, like merchants vs shoplifters. Obviously, they aren't perfect predictors, but they do well enough for the sort of "acausal" effects in the paradox to happen, i.e. people not shoplifting even when they could get away with it. There are more real-life examples along these lines.
To be sure, people aren't predictable enough now to get the kind of scenario described in the problem. But they are predictable enough for the uncomfortable implications: even an accuracy slightly better than chance gets you situations where one-boxing is statistically superior.
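The arithmetic, under the usual simplifying assumption that the predictor is right with the same probability p for either kind of chooser:

    # Expected values as a function of predictor accuracy p.
    def ev_one_box(p):
        return p * 1_000_000

    def ev_two_box(p):
        return 1_000 + (1 - p) * 1_000_000

    print(ev_one_box(0.51), ev_two_box(0.51))   # 510000.0 491000.0

The two cross at p = 1,001,000 / 2,000,000 = 0.5005, so anything meaningfully better than a coin flip already favors one-boxing in expectation.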
(I do agree that in practice, whenever you see this kind of situation, you should assume there's some trick until overwhelming evidence comes in to the contrary.)
> But once you pin down what "leaving money on the table" means, it's not at all clear that the concept coincides with something you want to avoid.
In this case (which I have to imagine is deliberate on the part of Nozick or Newcomb), "leaving money on the table" means literally leaving money on the table. Taking one box always, always results in less money than the total amount available in the boxes, which is exactly what people who take both boxes walk away with. (Of course, the evidence to date is that people who choose both boxes always have less money available to them in the first place.)
But the equally justifiable decision-making method is to perform the action that has yielded the best observed results in the past for others, despite there being no way that one's actions now can possibly have affected the past (choice or determinism doesn't matter).
The nature-of-the-predictor stuff is just irrelevant nonsense in either approach to the problem, which is a happy coincidence because it is, in fact, irrelevant and impossible nonsense. :)
Edit: "there's no way that one's actions now can possibly have affected the past" is given in the original problem. Wikipedia's article quotes it as "what you actually decide to do is not part of the explanation of why he made the prediction he made."
Well, the idea is that in order to predict which box you will pick, the predictor is basically running a perfect simulation of you. Depending on what the simulated version of you does, then the predictor will either put nothing or a million dollars in the second box.
So the problem is this: you are given the choice to pick one or both boxes, but you don't know whether you are playing for real or whether you are a simulation who will unwittingly tell the predictor what the real "you" will do. If your mind is deterministic and the predictor is perfect, then you will necessarily choose the same in both simulation and reality. And you need your simulated self to "tell" the predictor to put a million in the box, which is why it is preferable to only pick up that one box. It's not that causality runs backwards, it's that unbeknownst to you, you're actually choosing twice.
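A toy version of the "choosing twice" point (names made up): the same deterministic decision function is called once inside the predictor and once on stage, with identical inputs, so from the inside there is no way to tell which call you are.

    # The agent's deterministic policy. It has no input that distinguishes the
    # predictor's run from the on-stage run, so both calls must agree.
    def decide():
        return "one box"            # flip this to "two boxes" and BOTH calls flip

    def run_game(decide):
        prediction = decide()                                   # call #1: the "simulation"
        box_b = 1_000_000 if prediction == "one box" else 0     # boxes fixed here
        choice = decide()                                       # call #2: the "real" choice
        return box_b if choice == "one box" else box_b + 1_000

    print(run_game(decide))   # 1000000 as written; 1000 if decide() returns "two boxes"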
Of course, that whole thought experiment is a bit silly. Even under determinism it would be borderline impossible to do this with a physical system, and agents generally have an incentive to be unpredictable when it benefits them, so I don't think it is massively important to decide correctly in such contrived scenarios.
>Of course, that whole thought experiment is a bit silly. Even under determinism it would be borderline impossible to do this with a physical system,
What if you played as Omega against a (physical instantiation of a) computer program that can play this game?
>and agents generally have an incentive to be unpredictable when it benefits them, so I don't think it is massively important to decide correctly in such contrived scenarios.
Under the original version of this problem, Omega stiffs agents who deliberately make themselves unpredictable, e.g. by hooking their action to an unpredictable randomizer. But then, it's not even clear that agents would benefit from reducing Omega's confidence that they'll one-box via deliberate unpredictability.
You're assuming something along the lines of free will. Remember, the choice is made independently of what's in the boxes.
So, let's consider a deterministic system. If I write a program that uses pure logic to make the choice and it's non-random, then it always makes the same choice, e.g. hard-coding choose (A+B).
Then that choice has in effect already been made based on the algorithm selection, making the (A+B) prediction basically foolproof. I could of course then run that program after the fact and see (A+B), but barring a low-chance random event it's going to give the result of my prediction.
PS: Consider chess: from a math standpoint every possible game already exists, with players essentially picking just one game from the set of possible games. So, if you write two deterministic programs and run them, the winner is already predetermined based on which algorithms were selected. If you then change one of the programs in such a way that it still chooses the same moves, you know the winner before running the programs.
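A tiny illustration of that PS, with a trivial take-away game standing in for chess (a real engine would be overkill, but the logic is the same): once both deterministic strategies are fixed, the whole transcript, and therefore the winner, is fixed before the game is ever played.

    # Players alternately remove 1-3 stones from a pile; whoever takes the last
    # stone wins. Both strategies are deterministic, so every run is identical.

    def greedy(pile):
        return min(3, pile)     # always takes as many as allowed

    def cautious(pile):
        return 1                # always takes exactly one

    def play(p1, p2, pile=21):
        turn = 0
        while pile > 0:
            pile -= (p1 if turn == 0 else p2)(pile)
            turn ^= 1
        return "player 1" if turn == 1 else "player 2"   # whoever moved last wins

    print(play(greedy, cautious))   # the same winner on every run; it was settled
                                    # the moment the two algorithms were chosen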
But that doesn't matter in the slightest. The money is either there or it is not there. It's not put in the boxes after you've made the choice. It's put into the boxes before you even know there's a game.
Whether or not there is free will or determinism, picking both boxes always nets the most money of what's on the table.
You're using a mathematical model that doesn't apply. The ahead-of-time simulations invalidate the idea that your decision can't affect the outcome, despite the final decision ultimately occurring afterwards.
An analogy would be asserting that you can't possibly shoot yourself in the back of the head when firing into the distance, and sticking to that position even after finding out you're in a pac-man-style loop-around world.
A much closer but more technical analogy is that you can't solve imperfect information games by recursively solving subtrees in isolation. Optimal play can involve purposefully losing in some subtrees, so that bluffs are more effective in other subtrees.
The fact that you are doing worse by two-boxing, leaving with a thousand dollars instead of a million, despite following logic that's supposed to maximize how well you do, should be a huge red flag.
How do "the ahead-of-time simulations invalidate the idea that your decision can't affect the outcome, despite the final decision ultimately occurring afterwards?" They're only simulations. The predictor is defined as being very likely to have correct predictions; it's not defined as God or a time traveler or an omniscient computer with knowledge of the universe's intricate workings.
The fact that to an observer who knows the contents of the boxes (say, the moderator or an audience) you always look like an idiot for taking only one box and leaving money on the table, should be a huge red flag.
But that's the thing: you assume there's something different about you and simulated you, when in theory there might not be.
In other words, if you're hard-coding something, then hard-coding "pick B" gets you 1,000,000 but hard-coding "pick AB" gets you 1,000, assuming the predictor looks at your source code.
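A crude sketch of the "predictor looks at your source code" case. This only works for agents as trivially hard-coded as these (a real predictor would need to be far smarter), but it shows why hard-coding B is the profitable thing to have written:

    import inspect

    def pick_b():
        return "B"

    def pick_ab():
        return "AB"

    # Toy predictor: literally read the agent's source and check what it hard-codes.
    def payoff(agent):
        predicted_b = 'return "B"' in inspect.getsource(agent)
        box_b = 1_000_000 if predicted_b else 0
        return box_b if agent() == "B" else box_b + 1_000

    print(payoff(pick_b))    # 1000000
    print(payoff(pick_ab))   # 1000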
As to the game show, you would have a series of people where those who pick B get 1,000,000 and those who pick AB get 1,000. Now which group looks like idiots?
Edit: Depending on the accuracy of the predictions, it's less about information traveling into the past than it is about being the type of person that chooses B.
I don't know why you're talking about hard-coding and simulation and whatnot. The mechanism that the predictor uses is completely irrelevant and specifically defined to be unknown in the thought experiment description, aside from it disallowing backwards causality and things like time travel.
Every single person who picked only box B left $1000 on the table. That's a bare fact. You don't even need to know or care what the prediction is to know that.
In general when someone leaves $1000 that they could have had, no strings attached, that's a less desirable outcome than the one where they had the extra $1000.
You're assuming it's impossible to accurately predict what someone would choose, when it's directly stated that the predictor can.
If you're the kind of person that picks AB, then you get 1,000.
If you're the kind of person that picks B, you get 1,000,000.
There are no other options.
PS: Consider: the 'prediction' is having you walk on stage and be given the choice a random number of times greater than 20, except one of them, picked at random, will be the one that counts.
I'm not saying it's impossible to predict anything. I'm saying that people who choose box B are always choosing the inferior of the two options available to them, because the money is already on the table and no one is going to change that configuration based on the person's choice (stated in the problem).
As I have said, the prediction method or accuracy is largely irrelevant to the actual paradox, aside from a means to incentivize people to behave in an obviously irrational way :).
(I don't really think that; the other principle of decision for one-boxers is induction based on prior observations. The whole point of the paradox is that neither side has a decisive argument against the other. The important point here is that free will/determinism, possibility of perfect simulation, etc. are not part of the problem this paradox is intended to illuminate.)
"I'm saying that people whose choose box B are always choosing the inferior of the two options available to them"
Except there are two occasions to choose B. One is on the stage and the other is as part of the model the predictor uses. And in that case you really want to be modeled as someone that chooses B.
In the end what happens on stage is nearly irrelevant, as 99.9% of the value comes from how you're modeled and 0.1% comes from what you do on stage. So, how do you get modeled as someone that chooses B?
Well, if they're accurate, the only way to influence that prediction is choosing B on stage.
And yes, with accurate modeling information can travel backward in time. Just consider people taking an umbrella to work because of a weather prediction. In this case the rain caused people to bring an umbrella before it happened.
Now, you can argue that picking AB is the rational choice, but if it consistently gets a worse outcome then it's irrational behavior. What makes it irrational? The assumption that it can't influence what's in the boxes.
PS: The only counterargument is that you have 'free will' and thus your choices can't be accurately modeled.
> And yes, with accurate modeling information can travel backward in time. Just consider people taking an umbrella to work because of a weather prediction. In this case the rain caused people to bring an umbrella before it happened.
The rain didn't cause this; the prediction of rain did. Comments like this, and your strange focus on simulation and modeling, lead me to believe that you are a little out of your element here. The questions raised and the paradox regarding choice are present no matter what the predictor's mechanism is, whether it is a perfect simulation or psychic connection with your mind, or messages from God.
Rain has no free will. In the face of a completely accurate prediction, neither do you. And without free will, the decision has already been made before you were on the stage, even if you were not aware that you had made the choice; otherwise you could not be 100% accurately modeled.
PS: The implications of not having free will are uncomfortable, but they directly fall out of having a completely accurate predictor. (And yes, this is often weakened to a semi-accurate predictor.)
The rain could not have caused people to bring an umbrella, because people brought an umbrella before it rained. Regardless of whether or not the universe can unfold in any other way than the way it does, something cannot be caused by another thing that occurred after it. It's in the definition of "cause and effect."
Also, given that the entire point of the paradox is to illustrate a problem in decision theory, it seems a particular waste of time to deny that anything has a decision. Read the original statement of the problem. Read it closely. Don't read junk on the Internet or jabbering by Christian apologists desperate for credentials. The problem has absolutely nothing to do with free will vs. determinism.
What do you think the point is, if it's not about free will? The only paradox is the assumption that you can make a choice that's not predictable. But if conditions exist such that there will be rain, it will rain; and if conditions exist such that you will pick AB, then you will pick AB.
Sure, if you can lie to the oracle and say you're going to pick B and then actually pick AB, clearly that's the better option, but if they can look past that lie and see how you think (aka read your source code) then that's not a viable option. If you say to the oracle "I am going to pick B", but the oracle knows what you are actually going to do and something predictable changes your mind, you still lose. The only option is to pick B and for that to be the truth, and if it's the truth, you pick B on stage.
PS: Like the apologists, you seem to be stuck with the idea that thought is anything other than a predictable electrochemical process in your brain, no different from a complex computer program. We can make pseudo-random choices which are very useful in decision theory, but 'free will' does not exist. In the end we are no less predictable than the rain.
The predictability or non-predictability of a given decision is irrelevant; there's no need to assume that an unpredictable choice can be made. Choosing both boxes always gets the maximum amount of money available on the table.
The point is about decision theory, which has two approaches considered "rational" that yield different results. That's why it's a paradox. It's all spelled out in the paper: http://faculty.arts.ubc.ca/rjohns/nozick_newcomb.pdf
Go ahead, search the document for the phrases "free will" or "determinism." I'll wait.