Game theorists offer a surprising insight into the evolution of fair play (findarticles.com)
55 points by molbioguy on Aug 14, 2011 | 24 comments



The game is kind of weird:

Each player of the pair begins with a set amount of money, say $5. Each puts any part or all of that $5 into a mutual pot, without knowing how much the other player is investing. Then a dollar is added to the pot, and the sum is split evenly between the two. So if both put in $5, they each wind up with $5.50 ($5 + $5 + $1, divided by 2). But suppose the first player puts in $5 and the second holds back, putting in only $4? The first player gets $5 at the end ($5 + $4 + $1, divided by 2), while the cheater gets $6 ($5 + $4 + $1, divided by 2--plus that $1 that was held back).

It seems to me that there isn't actually anything to be gained from cooperation. If both players "cheat" completely (put $0 into the pool), they still get $5.50. In that sense, the Nash equilibrium (both are cheating) is also socially optimal. Kind of untypical for something where you want to demonstrate the advantages of cooperation.
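
A minimal sketch of that payoff arithmetic (illustrative Python; the function and names are mine, not from the article):

  def payoff(my_contribution, other_contribution, endowment=5, bonus=1):
      # Keep whatever you did not contribute; the pot (both contributions
      # plus the $1 bonus) is split evenly.
      pot = my_contribution + other_contribution + bonus
      return (endowment - my_contribution) + pot / 2

  print(payoff(5, 5))                # 5.5 -- both cooperate fully
  print(payoff(0, 0))                # 5.5 -- both hold everything back
  print(payoff(5, 4), payoff(4, 5))  # 5.0 6.0 -- one-sided cheating just shifts money

The total paid out is always $11 (both endowments plus the added dollar), so contributions only change how it is split, never how much there is.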


It is a weird game, but experimental situations are usually contrived. They define cooperation as the absence of cheating. But the surprise is that people jump at the chance to fine the cheater, even though they have to pay the same amount as the fine:

You can fine the cheater by taking away some money, as long as you're willing to give up the same amount yourself. In other words, you can punish a cheater if you're willing to pay for the opportunity.


Another famous experiment that supports the finding that people are hyper-sensitive toward cheating:

http://en.wikipedia.org/wiki/Wason_selection_task#Policing_s...

This experimental evidence supports the hypothesis that a Wason task proves to be easier if the rule to be tested is one of social exchange (in order to receive benefit X you need to fulfill condition Y) and the subject is asked to police the rule, but is more difficult otherwise. Such a distinction, if empirically borne out, would support the contention of evolutionary psychologists that certain features of human psychology may be mechanisms that have evolved, through natural selection, to solve specific problems of social interaction, rather than expressions of general intelligence. In this case, the module is described as a specialized cheater-detection module.


I noticed this too. The example would make more sense if the rules were changed so that the pot is multiplied, say by 2. Then the optimal case is both put in $5 and end up with $10.


It has to be multiplied by less than two - if it's multiplied by two or more, putting money into the pot is always worthwhile (or at least break-even), no matter how much the other player puts in.
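
A rough sketch of why two is the threshold (illustrative Python; the multiplier parameter is my own framing of the suggestion above):

  def payoff(mine, other, endowment=5, multiplier=1.5):
      # Keep what you hold back; the multiplied pot is split two ways.
      return (endowment - mine) + multiplier * (mine + other) / 2

  # Gain from contributing one extra dollar is multiplier/2 - 1: negative
  # below a multiplier of 2, so holding back is individually better, even
  # though the pair as a whole gains multiplier - 1 per dollar contributed.
  for m in (1.5, 2.0, 2.5):
      print(m, payoff(1, 0, multiplier=m) - payoff(0, 0, multiplier=m))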


I thought the same so looked for the original article on Google Scholar. From Fehr and Gachter, "Altruistic Punishment in Humans":

... groups with four members played the following public goods game. Each member received an endowment of 20 money units (MUs) and each one could contribute between 0 and 20 MUs to a group project. Subjects could keep the money that they did not contribute to the project. For every MU invested in the project, each of the four group members, that is, also those who invested little or nothing, earned 0.4 MUs. Thus, the investor's return from investing one additional MU in the project was 0.4 MUs, whereas the group return was 1.6 MUs.

Your return on an investment of 1U is 0.4U, so it's always best for you personally to keep a unit. But the group return is 1.6U so the group as a whole is enriched for every unit invested.
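
Those numbers, worked through (a quick illustrative Python sketch; the function name is mine):

  def payoffs(contributions, endowment=20, mpcr=0.4):
      # Every MU invested pays 0.4 MU to each of the four members,
      # including those who contributed nothing.
      project_return = mpcr * sum(contributions)
      return [endowment - c + project_return for c in contributions]

  print(payoffs([20, 20, 20, 20]))  # [32.0, 32.0, 32.0, 32.0]
  print(payoffs([20, 20, 20, 0]))   # [24.0, 24.0, 24.0, 44.0] -- lone free-rider does best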


I have not read the original paper, but it seems plausible that the experimenters would have tried different game rules and this is the version which gives the reported result most strongly. On that assumption, one way to look at the experiment is as a test of the limits of rational behaviour. The 'vengeful chump' might be interpreted as externalising their embarrassment at having failed to identify the optimum strategy. Unconsciously they know they should have cheated, but it is more gratifying to blame someone else. I'd like to read the entire paper if anyone can please show where it can be got (for free natch).


Furthermore, since "punishing" a cheater involves paying $X to force the cheater to pay $X, the cheater will still come out ahead no matter how many times he cheats. This game is... interesting.


That's the point of the game, isn't it? To determine whether people will irrationally pay to punish a cheater for the sake of revenge? At least that's what I got from the article.


Behavioral economics: exploring progressively less efficient ways to transfer beer money from grant committees to undergrads since 1970.


From the article by Robert Sapolsky (COPYRIGHT 2002 Natural History Magazine, Inc.) -- seems relevant to the Jonathan's Card experiment:

Think about how weird this is. If people were willing to be spontaneously cooperative even if it meant a cost to themselves, this would catapult us into a system of stable cooperation in which everyone profits. Think peace, harmony, Lennon's "Imagine" playing as the credits roll. But people aren't willing to do this. Establish instead a setting in which people can incur costs to themselves by punishing cheaters, in which the punishing doesn't bring them any direct benefit or lead to any direct civic good--and they jump at the chance. And then, indirectly, an atmosphere of stable cooperation just happens to emerge from a rather negative emotion: desire for revenge. And this finding is particularly interesting, given how many of our societal unpleasantries--perpetrated by the jerk who cuts you off in traffic on the crowded freeway, the geek who concocts the next fifteen-minutes-of-fame computer virus--are one-shot, perfect-stranger interactions.


What difference does it make that you're playing against different people? People engage in and justify behavior based on types of action.

One punishes a cheater that one will never encounter again partly on the presumption that other people also do this to cheaters they encounter. Thus one engages in a behavior that, if performed universally, will reduce the likelihood that one will encounter a cheater.

This is almost exactly the same as iterated games where you play the same person over and over and thus confront the other player's action as an instance of a type of decision (a strategy). The fact that it isn't the same person doesn't mean you won't think in terms of types.

Yeah, you can try to free ride and just hope that other people punish cheaters for you and that you'll benefit without ever having to do it yourself (since punishing incurs a cost). But if the choice is between no one punishing cheaters and everyone punishing cheaters, then you choose the latter. If you're thinking in terms of types, those are the two choices. Even if it doesn't totally make sense in a particular context to do this, people habitually think this way.

Human beings think in terms of types and systems of actions, and choose actions at least partly based on what types and systems of actions they are endorsing. These game scenarios rely on that in the same way that iterated games do. It's tit-for-tat all over again, just one level more abstract.
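
For reference, a minimal version of the standard tit-for-tat setup (illustrative Python using the usual prisoner's-dilemma payoffs, not anything from the article):

  # Tit-for-tat: cooperate first, then copy the opponent's previous move.
  PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
            ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

  def play(strat_a, strat_b, rounds=10):
      hist_a, hist_b, score_a, score_b = [], [], 0, 0
      for _ in range(rounds):
          a, b = strat_a(hist_b), strat_b(hist_a)
          pa, pb = PAYOFF[(a, b)]
          score_a, score_b = score_a + pa, score_b + pb
          hist_a.append(a)
          hist_b.append(b)
      return score_a, score_b

  tit_for_tat = lambda opp: "C" if not opp else opp[-1]
  always_defect = lambda opp: "D"

  print(play(tit_for_tat, tit_for_tat))    # (30, 30): cooperation is stable
  print(play(tit_for_tat, always_defect))  # (9, 14): exploited only once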


I'm surprised that the article, while otherwise well-researched, doesn't even mention the Zahavi Handicap Principle.

In Zahavi's view, altruism is a form of signalling: the altruist is doing so well, they can afford to lose a good deal of material benefits. The altruist then benefits from the high regard of the peers who witnessed the facts (e.g. potential partners of the opposite sex).

From this perspective, the crucial step in the experiments presented is not the punishment, but the subsequent public exposure of the in-game behavior.


This makes me wonder then why some cultures frown on public displays of altruism. Basically, do good but please don't brag about it.


To even the playing field maybe? If you brag about how awesome you are, you're putting the pressure on everyone else to be as awesome. Some people don't like that ...


Good point. The less-than-outstanding will look like less-desirable mate material. But they still want the benefits of altruism. So a morality develops that says, "Do good things for others (that includes me), but keep it to yourself lest the rest of us look bad in comparison and fail to find suitable mates."


The experimental design deliberately excludes this type of signal. Anonymous, non-repetitive, closed book. It is not an argument against social signalling. Just that there is something else going on too.


I would put $0 in the pot and if my opponent put in more than me, I would pay him until we were even. This beats the game because it sets a cooperative standard while being fair immediately and also protects me. However, if my opponent put in more and could punish me before I could even it out, I imagine I would find it hard to "turn the other cheek." I also don't think that putting in the full amount would create the culture I would want to exist, because such an action would be indistinguishable from naïveté, and other than asking for my money back and getting it, I would have no power to do good once the game ended.


> If enough of them do so--and especially if the cooperators can somehow quickly find one another--cooperation would soon become the better strategy. To use the jargon of evolutionary biologists who think about such things, it would drive noncooperation into extinction.

I don't think this works. Although co-operating works great for the majority, that just means it allows a few to 'cheat' and get the biggest payoffs.
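
A toy sketch of both points (my own illustration, not from the article): give the pot a 1.5x multiplier so cooperation actually helps the pair, and let "assortment" be the extra chance of being matched with your own type.

  def pair(mine, other, endowment=5, multiplier=1.5):
      # Keep what you hold back; the multiplied pot is split two ways.
      return (endowment - mine) + multiplier * (mine + other) / 2

  def coop_advantage(coop_share, assortment):
      # Expected cooperator payoff minus expected cheater payoff.
      q_c = assortment + (1 - assortment) * coop_share  # coop meets coop
      q_d = (1 - assortment) * coop_share               # cheat meets coop
      coop = q_c * pair(5, 5) + (1 - q_c) * pair(5, 0)
      cheat = q_d * pair(0, 5) + (1 - q_d) * pair(0, 0)
      return coop - cheat

  print(coop_advantage(0.9, 0.0))  # negative: cheaters exploit a cooperative majority
  print(coop_advantage(0.5, 0.5))  # positive: cooperators who find each other win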


The weird thing is, you could translate this insight into an ethically questionable business idea:

A website lists the wrongdoers to humiliate them. Each crime gets its own list. Pay $5 for "the jerk who cuts you off in traffic on the crowded freeway" or $1,000 for "the geek who concocts the next fifteen-minutes-of-fame computer virus" or $100,000 for "the child molester".

I hope this would not work out, but I fear it would.


How about catching corrupt officials in the act for 500 dollars?

Though I am sure that there is an unintended consequence somewhere in the idea. Sometimes all we can do is watch the system in action, and try to fix it...if it lets us.

For example, the US government is in a slow motion train accident that's taking a long time to happen. It's very hard to stop the train in time and fix the stuff that's broken.


How about an open database mapping vehicle license plates to Facebook profiles?


SuperCooperators is an interesting book exploring these ideas:

http://www.amazon.com/Supercooperators-Mathematics-Evolution...


The Economist had a good article about this same topic two weeks ago. It seems to be talking about unrelated studies.

http://www.economist.com/node/21524698




