Medicine A will always save 200,000 people, with 400,000 dying.
Medicine B will save all 600,000 people 33.3% of the time, but everyone will die 66.7% of the time.
The expected number of people saved (the 'expected value' in game theory) is the same for both cases: 200,000. Medicine A is easy to calculate (200,000 * 1.00 = 200,000). For Medicine B, the calculation is 600,000 * (1/3) + 0 * (2/3) = 200,000. Equivalently, the expected number of deaths is 400,000 either way.
The idea is that the average outcome is the same for both cases; you are just trading the certainty of Medicine A (where you know you will save exactly 200,000) for the chance, with Medicine B, to save everyone at the risk of losing everyone.
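For what it's worth, here is that arithmetic as a small Python sketch (nothing in it beyond the numbers quoted from the problem; the variable names are just mine):

    # Expected number saved under each (hypothetical) medicine.
    TOTAL = 600_000

    # Medicine A: exactly 200,000 saved, 400,000 die, with certainty.
    expected_saved_a = 1.0 * 200_000

    # Medicine B: all 600,000 saved with probability 1/3, nobody saved otherwise.
    p_all_saved = 1 / 3
    expected_saved_b = p_all_saved * TOTAL + (1 - p_all_saved) * 0

    print(expected_saved_a)  # 200000.0
    print(expected_saved_b)  # 200000.0 -- same expectation, very different variance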
The word "all" as a modifier to 600k is not in the text;
see for eg:
And here is the problem framed in terms of gains:
If you choose Medicine A, 200,000 people will be saved. If you choose Medicine B, there is a 33.3% chance that 600,000 people will be saved and a 66.6% chance that no one will be saved.
Which medicine do you choose?
Although the expected number of deaths is the same in both versions of the problem, people tend to take the safer option when the problem is framed in terms of gains (lives saved).
The lack of scale prevents an analysis of proportionality.
For example:
600k out of 500,000,000?
or
600k out of 1,000,000 or 1,000,000,000?
Given that the downsides are equal, the omission raises the question of the rate of success. The rate of success is not a trivial source of people's bias, because empirically "non-success" is often accompanied by adverse effects (a/k/a side effects) as well as additional cost. The result is a leading question. In other words, the language is manipulative.
But do we really need this study to tell us this? C'mon, that is stupid. And has nothing to do with Mandela.
The "Asian Disease problem" is a very standard experimental psychology/economics problem. It's alternatively phrased as this: "Would you like receive 4 dollars, or would you like a 40% chance to receive 10 dollars?"
The entire article is just a layer over this showing the supposed effects of asking the question in a foreign language (other comments have explained the likely problems of the experiment), and then another layer over that wrapping it in "this is totally relevant to you because Madiba".
I mean, yeah. Mandela was right. But not because of the foreign language effect.
> It's alternatively phrased like this: "Would you like to receive 4 dollars, or would you like a 40% chance to receive 10 dollars?"
The problem is that the rational answer to this depends on a huge amount of background information, e.g., the value of having some money for certain versus the marginal benefit of the additional money.
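To make that concrete: with a concave utility function (diminishing marginal value of money), the sure thing can be strictly better even though the expected dollar amounts are identical. A minimal sketch, using a square-root utility purely as an illustrative assumption:

    import math

    # A certain $4 versus a 40% chance of $10 (expected dollar value is $4 either way).
    certain = 4.0
    p_win, prize = 0.4, 10.0

    expected_dollars_gamble = p_win * prize  # 4.0, same as the certain option

    def u(x):
        # Assumed concave utility: diminishing marginal value of money.
        return math.sqrt(x)

    utility_certain = u(certain)                                        # 2.0
    expected_utility_gamble = p_win * u(prize) + (1 - p_win) * u(0.0)   # ~1.26

    print(utility_certain, expected_utility_gamble)

Under that (assumed) utility the certain $4 beats the gamble, which is exactly why the "rational" answer depends on background information like wealth and risk attitude.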
It's less of a problem when the effect is well known to hold in a variety of phrasings. It doesn't matter if it's people or dollars; it doesn't matter if it's 4 out of 10 or 8 out of 10.
Further, they account for stuff like that by using large sample sizes. Different people are choosing both options. It's just that the proportions hold despite this.
They've spent a good 30 years confirming this. If you really want to criticize it, it'd be worth putting together a list of studies investigating this and analyzing their methodologies for problems.
I understand that the wording and the numbers chosen by the experimenters are intended to create two equivalent situations, but the fact is, they do not. The sentence:
"If you choose Medicine A, 400,000 people will die."
tells you that 400k people will die if you choose Medicine A. It says nothing about what will happen to the remaining 200k. I believe it is illogical to assume that the sentence implies the remaining 200k live. Note that if you were to ask me in real life, I'd probably be happy to make that assumption, but that doesn't change the fact that it is an assumption based on missing information.
I understand your point in terms of formal logic (that saying 400k people will die does not necessitate that the other 200k can't also die). However, the initial sentence that starts the experiment is "Recently, a dangerous new disease has been going around. Without medicine, 600,000 people will die from it. In order to save these people, two types of medicine are being made."
When you follow that by saying "If you choose Medicine A, 400,000 people will die", it is not illogical to assume that they don't mean "oh yeah, the other 200k will also die, too." This isn't a Mitch Hedberg joke.
In these scenarios, we are expected to make many basic assumptions that aren't included in the scenario. We are safe to assume that there isn't an unmentioned side effect of Medicine B that will leave everyone paralyzed, or that Medicine A won't turn the survivors into zombies.
Are you referring to this?
"I used to do drugs. I still do, but I used to, too." (It's a lot funnier when delivered with good timing than when you read it.)
His comedy was very hit-and-miss, but some of it was spectacularly funny.
Ok, I finally see my mistake. It was skipping over the line "without medicine, 600k people will die from [the disease]" which allows us to account for the remaining 200k.
Assuming vampires and zombies are not involved, life and death are binary states. No one who is living has died, and no one who has died remains living. If 400k people die [from the hypothetical disease], then 200k people will survive [to eventually die from some other cause].
I'm sure this form of problem analysis gives psychological experimenters fits.