Newcomb's paradox (wikipedia.org)
54 points by monort on April 3, 2015 | 67 comments



Worth mentioning Scott Aaronson's blog post (http://www.scottaaronson.com/blog/?p=30), and a lot of discussion on Less Wrong over the years (http://wiki.lesswrong.com/wiki/Newcomb%27s_problem).


For the risk-averse:

Make a bet with someone for $500,000.00 that you can prove The Predictor is fallible. Take only box B. If box B contains $1,000,000.00, then you have lost the bet and are left with $500,000.00. If box B contains no money, you have won the bet and are left with $500,000.00. Either way, you walk away with $500,000.00.


Mmm credit default swap.


Who would take the other side of that bet?


Since it is a rule of the game that the predictor is infallible, anyone with $500,000 would take that bet.


It's not a rule of the game that the predictor is infallible, only that it's very likely to be correct.


It seems there are multiple versions of the paradox. It does seem reasonable that you can find someone to take the bet for <$999K if it is known that The Predictor is "very likely to be correct". Any bet amount <$999K is qualitatively the same as my original $500K suggestion: you increase your guaranteed minimum by decreasing your potential maximum, without having to resolve the paradox.


I think any version of the paradox that defines the predictor as infallible misses the whole point of the paradox. That's just defining the outcome of the game as the will of God. "Has always been observed to be correct" is the appropriate construction.

However, I think someone could resuscitate the original intent of the paradox by having the Predictor's actions also hinge on whether or not it predicts that you would make such a bet, and leave box B empty if it does predict that. Essentially defining itself to be correct in the situation where, without this addendum, it would have been incorrect and you would have won the bet.


I mean, now you're just creating a moving target.

I think you are intent upon maintaining the paradox, whereas I was just pointing out a loophole that allows a better outcome without resolving it. Gordian knot and all.


The paradox is supposed to be maintained. It is supposed to illustrate the incommensurability of the two decision methods it sets against each other. It's not supposed to be "solved" or "cut."


The paradox exists. I did not solve it; I worked within its confines to find a potentially better outcome.

You have changed the definition of the paradox. Both can be valuable avenues of thought. I tend to view paradoxes as learning opportunities or a way to practice critical and logical thinking skills. Both of us have achieved that, so yay, but there's definitely not a single way to approach a paradox when presented with one.


As the sibling/nephew posts indicate, the bet is a good deal for anyone who has faith in The Predictor's ability to predict actions.

We also have a huge range for our bet to cover varying levels of certainty.

Again, this is not win-optimizing, but risk-averting. I am pretty sure I am being generous with the technical definition of risk aversion, but that is not material to my suggestion.

The paradox exists because there is no clear best strategy. The best you can guarantee (without resolving the paradox) is $1,000: if you always take A and B you will net at least $1K, with the potential (depending on how the paradox resolves) for $1.001M. My suggestion aims to increase the guaranteed minimum, essentially by purchasing it with the decreased maximum. $500K is really just a Schelling point, not a requirement.

A bet of $999K is the breakeven point: any bet between $1K and $999K raises your guaranteed minimum above the $1,000 that is the guaranteed minimum of taking A and B. Since The Predictor is (depending on the version of the paradox) either infallible or nearly so, it seems reasonable to me that you can find someone to take a bet at <$999K.
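
To make the arithmetic concrete, here is a minimal sketch (Python; amounts in dollars, and the bet size is the only free parameter):

  # Take only box B and side-bet that the Predictor is fallible.
  def hedged_outcome(bet, box_b_full):
      # B full: the Predictor was right, we lose the side bet.
      # B empty: the Predictor was wrong, we win the side bet.
      return 1_000_000 - bet if box_b_full else bet

  for bet in (500_000, 999_000, 999_999):
      worst = min(hedged_outcome(bet, True), hedged_outcome(bet, False))
      print(bet, worst)
  # 500_000 -> 500_000; 999_000 -> 1_000 (the two-boxing guarantee);
  # 999_999 -> 1. The guaranteed minimum min(1_000_000 - bet, bet)
  # happens to peak at a bet of exactly 500_000.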


> When formulated using Bayesian networks, two standard decision algorithms (Evidential Decision Theory and Causal Decision Theory) can be shown to fail systematically when faced with aspects of the prisoner’s dilemma and so-called “Newcomblike” problems. We describe a new form of decision algorithm, called Timeless Decision Theory, which consistently wins on these problems.

— Alex Altair, MIRI, “A Comparison of Decision Algorithms on Newcomblike Problems”

https://intelligence.org/files/Comparison.pdf


I always find this kind of philosophical thought experiment unsatisfying.

Super-accurate predictions of human behaviour are just not possible. If I could do it, I'd be a gazillionaire philanthropist/playboy dating supermodels and advising heads of state because I can't be bothered to rule the world directly. As it is, I can't do better than a draw against a 5-year-old at rock-paper-scissors.

So this paradox tells us more about psychology than philosophy. Folks who think "A and B" is the right answer basically ignore the bit about the predictor never (or almost never) being wrong and go with a strategy that is great for fallible human predictors.

And well they should. The only thing more ridiculous than an infallible predictor is one that wastes his time playing shell games where the best he can do is break even.


It can be more interestingly reframed as follows:

The player is the AI, and the predictor is the AI programmer.

The AI programmer can look into the innards of the AI ie. the source code and can thus predict with high accuracy what the AI will do.

What is a winning strategy for the AI?

Or taken another way, you can have AIs that have access to each other's source code and are competing for some scarce resource: how do you design an AI that 'wins' when its behaviour is known to its opponent?


Sure you can. The AI's behaviour could be as simple as "Always pick box B".

The difficult bit would be designing an AI which, given perfect knowledge of its logic, would pick both boxes despite appearing to be more likely to pick only B.

In that case, you could simply have an AI with a 0.499999 chance of picking both and a 0.500001 chance of picking B. The expected winnings would be $1,000,500.

But then, once it comes down to probability, the predictor is no longer a 'perfect predictor' any more.
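
For what it's worth, the expected-winnings figure works out like this (a minimal sketch, assuming the predictor simply fills box B whenever one-boxing is the more likely action, and ignoring the "random choice leaves B empty" clause raised in the reply below):

  def expected_winnings(q):
      # q = probability the AI takes only box B
      box_b = 1_000_000 if q > 0.5 else 0   # predictor backs the more likely action
      return q * box_b + (1 - q) * (box_b + 1_000)

  print(expected_winnings(0.500001))  # ~1_000_500
  print(expected_winnings(0.499999))  # ~500: tip under one half and B is empty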


That would probably be counted as a random choice, and as the rules state:

> if the Predictor predicts that the player will choose randomly, then box B will contain nothing.


I don't think the schism is because of fallible vs infallible so much as people being stuck in naive decision theory (just take both boxes; it's already decided beforehand!) or not (hey, the kinds of decisions I'm willing to make might have changed what Omega decided in the past!).

After all, Omega need not be infallible. So long as he predicts your decision with an accuracy of 50.05% (slightly better than a coin toss), you profit:

---

Let p be the probability that Omega predicts your decision correctly.

E(one-boxing) = p⋅$1mil + (1-p)⋅0

E(two-boxing) = (1-p)⋅$1.001mil + p⋅$1k

Solving for E(one-boxing) > E(two-boxing):

p⋅$1mil > (1-p)⋅$1.001mil + p⋅$1k

p⋅($1mil + $1.001mil - $1k) > $1.001mil

p⋅$2mil > $1.001mil

p > 50.05%

---

And if he doesn't predict you slightly better than a coin toss, why is he called the Predictor?
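
A quick numerical check of that algebra (a sketch, using the payoffs from the problem statement):

  def ev_one_box(p):
      return p * 1_000_000                    # correct prediction -> B is full

  def ev_two_box(p):
      return (1 - p) * 1_001_000 + p * 1_000  # wrong prediction -> B is full too

  for p in (0.50, 0.5005, 0.51):
      print(p, ev_one_box(p), ev_two_box(p))
  # Break-even at p = 1_001_000 / 2_000_000 = 0.5005; above that,
  # one-boxing has the higher expected value.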


It's unsatisfying because it's so poorly defined. As you attempt to define it more precisely, the problem just converges on the problem of whether free will exists.


Not sure why it is a paradox - assuming the predictor is superintelligent, you don't try to fool it. By definition its intelligence can predict what you will do at the very last moment, so the fact that it doesn't get to change anything once the prediction is made is immaterial.


Did you read the article? It answers your question. Take a look at the section that begins with this:

The problem is called a paradox because two analyses that both sound intuitively logical give conflicting answers to the question of what choice maximizes the player's payout. The first analysis argues that, regardless of what prediction the Predictor has made, taking both boxes yields more money.

If you don't find this convincing, that's the point. Half the people who read this think one answer is obviously right, and the other half think the other is obviously right.


Yes - I did read.

What I meant is that based on the arguments I gave, I do not believe the other logic to be sound.


Let's reverse the numbers and make both boxes transparent.

If The Predictor suspects you will choose just box A, it'll put 1 million in box B and 1 thousand in box A; if it suspects you will take both A and B, it will put nothing in box B and 1 thousand in box A.

So now you are standing in front of the two transparent boxes. You see that there is $1 million in box B, yet you still take just box A?


In that scenario, the predictor will always "predict" that you will take both boxes.

The lack of information about box B is exactly what makes the original case different. There you can only rely on reasoning, and you do not know whether box B has a million dollars until you open it. If you risk taking two boxes, you will lose it.

---

Let me give another example to make the original scenario transparent.

Imagine I will write a function "decide(double content_of_A)" to decide whether B or both will be opened, given the content of A.

Imagine you can examine the function beforehand, and you are super-intelligent compared to me, so any attempt to obfuscate the code on my part will be utterly useless and easily seen through.

And you are honest: you put the $1M in box B if your analysis suggests that the decision function will take only B.

Note that my function gets called after you have placed the money, just as in the original scenario.

Would I not write the function to choose B? I would.
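
For concreteness, the function I have in mind could be as trivial as this (a Python sketch of the hypothetical decide(content_of_A) above):

  def decide(content_of_A):
      # Written knowing a superintelligent examiner will read this before
      # filling box B. The body visibly ignores content_of_A and always
      # one-boxes, so the examiner puts the $1M in B, and the function
      # collects it when actually run.
      return "B"

Any cleverness buried in the body is, by assumption, as transparent to the examiner as that one-liner.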


In that scenario, it wouldn't make a difference what you choose.

If you were so inclined to take Box B's million, Omega would never have put that million in the first place.

The only way for Omega to put that million there is if you weren't inclined to take Box B - despite the million being completely visible to you - in which case you lose anyway.

The situation where Omega gives you the million, and you take it, just never comes up. Can't fool it.


In that case, Box B would only be full if you're a very honest person, who'll take only Box B.


The standard "solution" (that isn't really a solution, but an explanation) is that the predictor has decided to reward the kind of person who one-boxes, and is extremely good at predicting whether you are that kind of person (perhaps even better than you yourself are).

So, if you can "decide to be the kind of person" who one-boxes (and perhaps by induction "decide to be the kind of person who decides to be a certain kind of person"), you can make out pretty well.


I think there's a weak echo of this concept that plays out in an election. Except swap in the inconvenience of voting for giving up the $1,000 box.

On the one hand, why bother voting? It's a pain in the ass and my single vote has such a negligible effect. On the other hand, if everyone like me has that same attitude, then I and others like me lose our voice in the election. So should I vote, or not?


That's a fairly simple one. If all you value is the benefits to you of your influence on the election, do not bother to vote (and especially don't bother spending the resources required to educate yourselves so as to vote responsibly). It's very clearly not worth it except perhaps for the smallest local elections or closest large elections.

If you value other things related to voting, like feeling as if you've done your civic duty, or feeling like a member of a group (like a political party), then by all means vote if the hassle of physically voting is less than that benefit.

This seems like a pretty clear description of rational behavior, and if you expect people to behave largely rationally, this description seems to explain some of the problems often attributed to elections, like low voter participation and low voter knowledge of the candidates and issues.


But if people only care about the benefits to them, they'll be overtaken by "lizards" who exploit them but never get voted out, because no one thinks voting passes a CBA. Populations who vote "despite" its wastefulness systematically win against those who don't.

Arguably, the only reason any population isn't overrun by lizards is because its people are mostly "wasteful" in this sense.

One intermediate solution is to force everyone to vote so that it's no longer costly. This is arguably what is accomplished when people promote voting out of civic duty, etc.


But I don't think people are wasteful in that sense. Voter participation and voter awareness tends to be low, at least in the USA. Besides, if your government only works if people act in a specific irrational manner, I don't have high hopes for it, especially considering that one usually cited fundamental role of government is to fix problems where individual rationality does not lead to group rationality.

Forcing people to vote doesn't incentivize people to educate themselves on the issues, which is required to "vote responsibly" according to the usual Western civics class description of how democracy is supposed to work.


Voter participation is very high, and voter decisions very wise, relative to the lizard scenario; and it's not clear that "not being overlorded by lizards" is a kind of irrationality.

Remember, the lizard scenario is something like "500 lizards outvote 300 million and put in 99% tax rates on non-lizards, to be spent entirely on lizards, all because none of the 300 million want to vote, reasoning that their vote doesn't affect the outcome."

Mandatory voting would definitely be an improvement over that for much the same reasons I gave before.


>On the one hand, why bother voting? It's a pain in the ass and my single vote has such a negligible effect. On the other hand, if everyone like me has that same attitude, then I and others like me lose our voice in the election. So should I vote, or not?

There are going to be final poll numbers. Do you prefer that they be an accurate statistical sampling of the population, or a biased one (perhaps biased towards whatever causes some people to vote and others not to)? Your decision on whether to vote, and whether to encourage others to vote, and which others to encourage to vote, is determined by this preference.

There are a couple of simple Nash Equilibria here, as well. Assuming you want your ideology to win the election, you want everyone who agrees with you to vote, and everyone who disagrees with you to stay home. Therefore, you engage in vote suppression against your political enemies and vote recruitment for your political allies. The Equilibria are:

* Negative Nash equilibrium: the piling-up of voter suppression activities of all types causes election outcomes to be determined entirely by who's better at keeping their enemies from voting. Since this is a zero-sum game played against the enemy team's vote recruitment efforts, all teams should notice they've been sucked into a zero-sum, zero-net-productivity black hole, and pass regulations against the worst forms of vote suppression (such as mislabeling polling places or election dates, removing "enemy" demographic groups from the voting rolls, etc.).

* Positive Nash equilibrium: everyone spends lots of effort on vote recruitment efforts, and the actual election outcomes are thus a very accurate statistical sampling of real population preferences.

* Genuinely dangerous and subversive Nash equilibrium: some limited number of political parties consolidate power and trade it back-and-forth, either allowing each-other to win elections or drawing elections laws/boundaries so as to ensure each of them can confidently plan their next round of office-holding. Elections become play-acts of real political contest, and the population's real preferences are increasingly ignored.

You decide which of these we see in different actual democracies right now.


I like Scott Aaronson's solution (http://www.scottaaronson.com/blog/?p=30). Basically, assuming:

1) the Predictor is completely infallible and incapable of error

2) to get a perfect prediction, the predictor must simulate you perfectly to the extent that the simulated you won't have any way to know it is the simulation

Therefore, at the time of the decision, "you" might actually be the "simulated you" and any decision you make will indeed affect the content of the boxes for the "real you". Which makes single boxing the rational decision.


I guess this boils down to whether you believe in determinism.

If you do, then the predictor will always be right, and you have essentially zero chance of fooling it by "changing your mind" later for the extra money.

If you don't believe in such a thing, then theoretically some nondeterministic volition of yours could allow you to change your mind in a way that the predictor could not have deterministically foreseen.


It's not about changing your mind for the money, it's about the fact that causality doesn't (shouldn't) run backwards in time.

Consider this: when you are faced with the choice, the allotted money is already under the boxes. How could what you choose now affect this outcome? It can't. You must always take both boxes to get the maximum amount of money possible. Either the $1,000,000 is under the single box, or it is not. If you take both boxes, you will get $1000 regardless of anything else, and possibly $1,001,000. If you take the single box, you will either get $0 or $1,000,000, but your action can't possibly change that, unless you believe somehow that causality runs backwards in time.


Under a generalization of this problem, you can do transparent boxes and get basically the same paradox: in that case, omega never even presents you with this choice while putting $1 million in a box unless you're "the type of person" who would one-box even then.

Still no causality violation: omega simulates everyone,[2] and only offers the filled box to one-boxers, but leaves it empty for two-boxers. [1]

But you don't even have to consider these esoteric, hypothetical situations to get a newcomblike paradox: even "merchants vs shoplifters" has a similar dynamic: you will only be in the position of being able to trivially shoplift merchandise if you're in a neighborhood that draws from the set of people who usually don't. Merchants (Omega) are accurate enough in their predictions to be profitable.

[1] See counterfactual mugging for a similar dynamic.

[2] With a thorough enough simulator, it may not be possible to tell whether "you" are in the simulator or doing the real thing.


That generalization is no longer a paradox; it's just a situation. The paradox is about choice theory and you have eliminated any element of choice.

People who choose (or act, if you don't care for free will) to take only one box are always leaving money on the table, full stop. The point of the game is to maximize winnings.

The alternative approach is that people who have chosen one box have always received more money than those who chose both. Explanations about how or why are distractions and are inconsequential; the paradox is about these two -- both generally considered to be valid -- approaches yielding such different results.

In my opinion, the resolution of the paradox is that it's an impossible situation. Either someone is lying about the mechanisms (in which case take one box like everyone else because it's a magic trick of some kind) or not (in which case the "predictor" can be wrong and the boxes are already set, so take both boxes to eliminate the risk of receiving nothing and to maximize your winnings).


>That generalization is no longer a paradox; it's just a situation. The paradox is about choice theory and you have eliminated any element of choice.

You're still choosing which decision procedure determines how many boxes you take when offered this choice, and that choice of procedure in turn determines whether you get the offer at all.

And I don't know what you're trying to say with the paradox/situation distinction; "Newcomb's problem with transparent boxes" is a paradox and a situation, just like the original: how are people ending up better off by "leaving money on the table"? (whatever that would mean)

>People who choose (or act, if you don't care for free will) to take only one box are always leaving money on the table, full stop. The point of the game is to maximize winnings.

But once you pin down what "leaving money on the table" means, it's not at all clear that the concept coincides with something you want to avoid. If the people "leaving money on the table" have more money, then "I don't want to be right", as the saying goes.

>In my opinion, the resolution of the paradox is that's an impossible situation.

I disagree. At the very least, you can play as omega against an algorithm, with varying degrees of scrutability. How should that kind of algorithm be written so that it gets more money (in transparent boxes, how to get omegas to offer you filled boxes in the first place)? Your answer would require addressing the same issues that arise here for humans in that situation.

There are also statistical versions of the paradox, like merchants vs shoplifters. Obviously, they aren't perfect predictors, but they do well enough for the sort of "acausal" effects in the paradox to happen, ie people not shoplifting, even when they could get away with it. Here are some more real life examples:

http://lesswrong.com/lw/4yn/realworld_newcomblike_problems/

To be sure, people aren't predictable enough now to get the kind of scenario described in the problem. But they are predictable enough for the uncomfortable implications: even an accuracy slightly better than chance gets you situations where one-boxing is statistically superior.

(I do agree that in practice, whenever you see this kind of situation, you should assume there's some trick until overwhelming evidence comes in to the contrary.)


> But once you pin down what "leaving money on the table" means, it's not at all clear that the concept coincides with something you want to avoid.

In this case (which I have to imagine is deliberate on the part of Nozick or Newcomb), "leaving money on the table" means literally leaving money on the table. Taking one box always, always results in less money than the total amount available in the boxes, which is what people who take both boxes collect. (Of course, the evidence to date is that people who choose both boxes always have less money available to them in the first place.)

But the equally justifiable decision-making method is to perform the action that has yielded the best observed results in the past for others, despite there being no way that one's actions now can possibly have affected the past (choice or determinism doesn't matter).

The nature-of-the-predictor stuff is just irrelevant nonsense in either approach to the problem, which is a happy coincidence because it is, in fact, irrelevant and impossible nonsense. :)

Edit: "there's no way that one's actions now can possibly have affected the past" is given in the original problem. Wikipedia's article quotes it as "what you actually decide to do is not part of the explanation of why he made the prediction he made."


Well, the idea is that in order to predict which box you will pick, the predictor is basically running a perfect simulation of you. Depending on what the simulated version of you does, then the predictor will either put nothing or a million dollars in the second box.

So the problem is this: you are given the choice to pick one or both boxes, but you don't know whether you are playing for real or whether you are a simulation who will unwittingly tell the predictor what the real "you" will do. If your mind is deterministic and the predictor is perfect then you will necessarily choose the same in both simulation and reality. Alas, you need your simulated self to "tell" the predictor to put a million in the box, which is why it is preferable to only pick up that one box. It's not that causality runs backwards, it's that unbeknownst to you, you're actually choosing twice.

Of course, that whole thought experiment is a bit silly. Even under determinism it would be borderline impossible to do this with a physical system, and agents generally have an incentive to be unpredictable when it benefits them, so I don't think it is massively important to decide correctly in such contrived scenarios.


>Of course, that whole thought experiment is a bit silly. Even under determinism it would be borderline impossible to do this with a physical system,

What about if you played as Omega against a (physical instantiation of a) computer program that can play this game?

>and agents generally have an incentive to be unpredictable when it benefits them, so I don't think it is massively important to decide correctly in such contrived scenarios.

Under the original version of this problem, Omega stiffs agents who deliberately make themselves unpredictable eg by hooking their action to an unpredictable randomizer. But then, it's not even clear that agents would benefit from reducing Omega's confidence they'll one-box via deliberate unpredictability.


You're assuming something along the lines of free will. Remember, the choice is made independent of what's in the boxes.

So, let's consider a deterministic system. If I write a program that uses pure logic to make the choice and it's non-random, then it always makes the same choice. E.g. hard-coding choose(A+B).

Then that choice has in effect already been made by the algorithm selection, making the (A+B) prediction basically foolproof. I could of course then run that program after the fact and see (A+B), but barring a low-chance random event it's going to give the result of my prediction.

PS: Consider chess: from a math standpoint every possible game already exists, with players essentially picking just one game from the set of possible games. So, if you write two deterministic programs and run them, the winner is already predetermined based on which algorithms were selected. If you then change one of the programs in such a way that it still chooses the same moves, you know the winner before running the programs.
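
A toy sketch of the hard-coded case, with a made-up predictor that is allowed to inspect (here: dry-run) the deterministic contestant before filling the boxes:

  def always_both():
      return "A+B"

  def always_b():
      return "B"

  def play(contestant):
      prediction = contestant()   # deterministic, so a dry run reveals the choice
      box_b = 1_000_000 if prediction == "B" else 0
      choice = contestant()       # the "real" choice on stage
      return box_b + 1_000 if choice == "A+B" else box_b

  print(play(always_both))  # 1_000
  print(play(always_b))     # 1_000_000
  # Which figure you get was fixed when the algorithm was selected,
  # not when it runs on stage.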


But that doesn't matter in the slightest. The money is either there or it is not there. It's not put in the boxes after you've made the choice. It's put into the boxes before you even know there's a game.

Whether or not there is free will or determinism, picking both boxes always nets the most money of what's on the table.


You're using a mathematical model that doesn't apply. The ahead-of-time simulations invalidate the idea that your decision can't affect the outcome, despite the final decision ultimately occurring afterwards.

An analogy would be asserting that you can't possibly shoot yourself in the back of the head when firing into the distance, and sticking to that position even after finding out you're in a pac-man-style loop-around world.

A much closer but more technical analogy is that you can't solve imperfect information games by recursively solving subtrees in isolation. Optimal play can involve purposefully losing in some subtrees, so that bluffs are more effective in other subtrees.

The fact that you are doing worse by two-boxing, leaving with a thousand dollars instead of a million, despite following logic that's supposed to maximize how well you do, should be a huge red flag.


How do "the ahead-of-time simulations invalidate the idea that your decision can't affect the outcome, despite the final decision ultimately occurring afterwards?" They're only simulations. The predictor is defined as being very likely to have correct predictions; it's not defined as God or a time traveler or an omniscient computer with knowledge of the universe's intricate workings.

The fact that to an observer who knows the contents of the boxes (say, the moderator or an audience) you always look like an idiot for taking only one box and leaving money on the table, should be a huge red flag.


But that's the thing: you assume there's something different about you and the simulated you, when in theory there might not be.

In other words, if you're hard-coding something, then hard-coding "pick B" gets you 1,000,000 but hard-coding "pick AB" gets you 1,000, assuming the predictor looks at your source code.

As to the game show, you would have a series of people where those who pick B get 1,000,000 and those who pick AB get 1,000; now which group looks like idiots?

Edit: Depending on the accuracy of the predictions, it's less about information traveling into the past than about being the type of person that chooses B.


I don't know why you're talking about hard-coding and simulation and whatnot. The mechanism that the predictor uses is completely irrelevant and specifically defined to be unknown in the thought experiment description, aside from it disallowing backwards causality and things like time travel.

Every single person who picked only box B left $1000 on the table. That's a bare fact. You don't even need to know or care what the prediction is to know that.

In general when someone leaves $1000 that they could have had, no strings attached, that's a less desirable outcome than the one where they had the extra $1000.


You're assuming it's impossible to accurately predict what someone would choose, when it's directly stated that the predictor can.

If you're the kind of person that picks AB, then you get 1,000.

If you're the kind of person that picks B, you get 1,000,000.

There are no other options.

PS: Consider: the 'prediction' is having you walk on stage and be given the choice a random number of times greater than 20, except one of them will randomly be the one that counts.


I'm not saying it's impossible to predict anything. I'm saying that people who choose box B are always choosing the inferior of the two options available to them, because the money is already on the table and no one is going to change that configuration based on the person's choice (stated in the problem).

As I have said, the prediction method or accuracy is largely irrelevant to the actual paradox, aside from a means to incentivize people to behave in an obviously irrational way :).

(I don't really think that; the other decision principle, the one-boxers', is induction based on prior observations. The whole point of the paradox is that neither side has a decisive argument against the other. The important point here is that free will/determinism, the possibility of perfect simulation, etc. are not part of the problem this paradox is intended to illuminate.)


"I'm saying that people whose choose box B are always choosing the inferior of the two options available to them"

Except there are two occasions to chose B. On is on the stage and the other is as part of the model the predictor uses. And in that case you really want to be modeled as someone that chooses B.

In the end what happens on stage is irrelevant as 99.9% of the value comes from how your modeled .1% comes from what you do on stage. So, how do you get modeled as someone that chooses B?

Well, if their accurate the only way to influence that prediction is choosing B on stage.

And yes, with accurate modeling information can travel backward in time. Just consider people taking an umbrella to work because of a weather prediction. In this case the rain caused people to bring an umbrella before it happened.

Now, you can argue that picking AB is the rational choice, but if it consistently get's a worse outcome then it's irrational behavior. What makes it irrational? The assumption it can't influence what's under the table.

PS: The only counter argument is you have 'free will' and thus your choices can't be accurately modeled.


> And yes, with accurate modeling information can travel backward in time. Just consider people taking an umbrella to work because of a weather prediction. In this case the rain caused people to bring an umbrella before it happened.

The rain didn't cause this; the prediction of rain did. Comments like this, and your strange focus on simulation and modeling, lead me to believe that you are a little out of your element here. The questions raised and the paradox regarding choice are present no matter what the predictor's mechanism is, whether it is a perfect simulation or psychic connection with your mind, or messages from God.


Rain has no free will. In the face of a completely accurate prediction, neither do you. And without free will the decision has already been made before you were on the stage, even if you were not aware that you had made the choice; otherwise you could not be 100% accurately modeled.

PS: The implications of not having free will are uncomfortable, but they directly fall out of having a completely accurate predictor. (And yes, this is often weakened to a semi accurate predictor.)


The rain could not have caused people to bring an umbrella, because people brought an umbrella before it rained. Regardless of whether or not the universe can unfold in any other way than the way it does, something cannot be caused by another thing that occurred after it. It's in the definition of "cause and effect."

Also, given that the entire point of the paradox is to illustrate a problem in decision theory, it seems a particular waste of time to deny that anything has a decision. Read the original statement of the problem. Read it closely. Don't read junk on the Internet or jabbering by Christian apologists desperate for credentials. The problem has absolutely nothing to do with free will vs. determinism.


What do you think the point is, if it's not about free will? The only paradox is the assumption that you can make a choice that's not predictable. But if conditions exist such that there will be rain, or such that you will pick AB, then you will pick AB.

Sure, if you can lie to the oracle and say you're going to pick B and actually pick AB, then clearly that's the better option, but if they can look past that lie and see how you think (aka read your source code) then that's not a viable option. If you say to the oracle "I am going to pick B because you know what I am going to do" and then something predictable changes your mind, you still lose. The only option is to pick B and for that to be the truth, and if it's the truth you pick B on stage.

PS: Like the apologists, you seem to be stuck on the idea that thought is anything other than a predictable electrochemical process in your brain, no different from a complex computer program. We can make pseudo-random choices, which are very useful in decision theory, but 'free will' does not exist. In the end we are no less predictable than the rain.


The predictability or non-predictability of a given decision is irrelevant; there's no need to assume that an unpredictable choice can be made. Choosing both boxes always gets the maximum amount of money available on the table.

The point is about decision theory, which has two approaches considered "rational" that yield different results. That's why it's a paradox. It's all spelled out in the paper: http://faculty.arts.ubc.ca/rjohns/nozick_newcomb.pdf

Go ahead, search the document for the phrases "free will" or "determinism." I'll wait.


That's not really the point of the thought experiment at all. And invoking randomness doesn't make the thought experiment easier, just more complicated. From the link someone posted above:

>(Incidentally, don’t imagine you can wiggle out of this by basing your decision on a coin flip! For suppose the Predictor predicts you’ll open only the first box with probability p. Then he’ll put the $1,000,000 in that box with the same probability p. So your expected payoff is 1,000,000p^2 + 1,001,000p(1-p) + 1,000(1-p)^2 = 1,000,000p + 1,000(1-p), and you’re stuck with the same paradox as before.)
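
A quick check that the quoted formula collapses the way it claims (a sketch; you one-box with probability p and, per the quote, the Predictor independently fills box B with the same probability p):

  def ev_mixed(p):
      return (1_000_000 * p * p             # one-box, B full
              + 1_001_000 * p * (1 - p)     # two-box, B full
              + 1_000 * (1 - p) * (1 - p))  # two-box, B empty

  for p in (0.0, 0.25, 0.5, 1.0):
      print(p, ev_mixed(p), 1_000_000 * p + 1_000 * (1 - p))
  # The two columns agree: randomizing just interpolates between the
  # pure strategies, so the original dilemma stands.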


It reminds me of the 'Battle of Wits' scene in The Princess Bride: http://c2.com/cgi/wiki?BattleOfWits


That is not the issue at all. There are two main ways of seeing the problem:

1. If the predictor really has such an awesome predicting talent, you have to wonder why. How much information must she have the means to process in order to make such successful predictions? Do you really think you could outperform her?

2. It doesn't matter what she predicts, because the result is independent of her prediction.

This is a problem about determining what is more important to rational decision-making: newly discovered information advantages, or previously known truths. Do you bet on the powerful new technology, or the tried and true past solution? Do you hire the person with a proven track-record of reliable success, or the unproven one with the ground-breaking earth-shattering new idea?


It's not a paradox to those who understand compatibilist free will, and how basic information-sharing means that events at two different points in time and space can be isomorphic. There's no impossibility being committed, and no retrocausality either.


Via Turing, we know that the Predictor would have to be able to decide arbitrary programs, and since nothing can do that, there can be no such Predictor.

The isomorphism between the program which defeats decidability and the two-box Newcomb problem is left as an exercise.


You assume that a human cannot be described with something less powerful than a Turing machine. There is no reason to believe that knowing what a human will do in this instance is equal to decidability of computer programs.


I almost objected, but you're right. The human has finite time with which to calculate. The predictor can easily be assumed to have more time. There's no problem deciding whether a program will halt within a given finite time bound.


I assume nothing of the sort. I merely assume that humans, like computers, are capable of evaluating the truth of something and then deliberately reporting the opposite. That is all that is required.
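
i.e. the usual diagonal move. A toy version (the real argument needs the Predictor to be an effective procedure the player can consult, which is exactly the assumption under attack):

  def contrarian(predicted):
      # Do the opposite of whatever the Predictor says this player will do.
      return "A+B" if predicted == "B" else "B"

  for verdict in ("B", "A+B"):
      print(verdict, contrarian(verdict))  # the verdict is wrong both times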


I think this may be the proof. Start by assuming that the predictor is a decider P for a computational model of a human M in a situation encoded by w, i.e. P(<M, w>) correctly reports whether M accepts w. We can then construct a Turing Machine S that decides A_TM.

S = on input <M, w> {

1. Run P on <M, w> to predict whether M accepts w.

2. If P predicts that M accepts, accept; otherwise, reject.

}

S decides A_TM, which is impossible, so we must have made a mistake somewhere; the only assumption we made was that the predictor P is a decider. Therefore the Predictor is not a decider.


Full marks!


What I don't understand is why would anyone risk taking both A & B just to gain a measly $1,000? Just take the million and be happy with it.



