what? no, it has nothing to do with Simpson's paradox, which is a property that comes from conditional probabilities (of which there are none in this problem; each die roll is independently distributed).
> The three dice below are said to be "intransitive" -- no one of them is stronger than the others.
Except that this is a definition of "stronger" quite different from the one in common use:
> [Die] A beats [die] B on 58% of rolls.
> B beats C on 58% of rolls.
> C beats A on 69% of rolls.
By any conventional metric, one of these dice is the strongest, and it's C. If you released something like this into the wild, you'd expect to come back later and find that C had done the best and A had done the worst.
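For reference, the pairwise rates quoted above can be verified exactly by enumerating all 36 equally likely face pairs for each matchup (a quick sketch; the helper name is mine):

```python
from itertools import product

# The three dice from the article
dice = {
    "A": [3, 3, 3, 3, 3, 6],
    "B": [2, 2, 2, 5, 5, 5],
    "C": [1, 4, 4, 4, 4, 4],
}

def win_rate(x, y):
    """Fraction of the 36 equally likely face pairs where die x beats die y."""
    return sum(a > b for a, b in product(dice[x], dice[y])) / 36

print(f"A beats B: {win_rate('A', 'B'):.0%}")  # 58% (21/36)
print(f"B beats C: {win_rate('B', 'C'):.0%}")  # 58% (21/36)
print(f"C beats A: {win_rate('C', 'A'):.0%}")  # 69% (25/36)
```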
import random

dice = [
    [3,3,3,3,3,6],
    [2,2,2,5,5,5],
    [1,4,4,4,4,4]
]
wins = [0,0,0]
for t in range(10000):
    die1 = random.randrange(3)
    face1 = random.randrange(6)
    die2 = random.randrange(3)
    face2 = random.randrange(6)
    result1 = dice[die1][face1]
    result2 = dice[die2][face2]
    if result1 > result2:
        wins[die1] += 1
    elif result2 > result1:
        wins[die2] += 1
print(wins)
This actually doesn't show much of a pattern of C doing better than B. It does show A being much weaker than either of them.
edit: The reason C and B look equally good in the code above is that B gets many fewer draws than the other dice do when paired against itself (because its faces split 3/3, whereas theirs both split 5/1). Each of those avoided draws is a win for one B and a loss for the other B, but since the code only counts wins, B appears to benefit. B wins about as often as C does, but it loses far more often. A loses about as often as B does, but it wins far less.
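This is easy to see if the simulation is extended to count losses as well as wins, keeping the same pairing scheme with self-matchups included (a sketch; the seed and variable names are mine):

```python
import random

dice = [
    [3, 3, 3, 3, 3, 6],  # A
    [2, 2, 2, 5, 5, 5],  # B
    [1, 4, 4, 4, 4, 4],  # C
]
wins = [0, 0, 0]
losses = [0, 0, 0]
random.seed(0)
for _ in range(10000):
    # Pick two dice independently at random, self-matchups allowed,
    # exactly as in the original program.
    d1 = random.randrange(3)
    d2 = random.randrange(3)
    r1 = random.choice(dice[d1])
    r2 = random.choice(dice[d2])
    if r1 > r2:
        wins[d1] += 1
        losses[d2] += 1
    elif r2 > r1:
        wins[d2] += 1
        losses[d1] += 1
print("wins:  ", wins)    # B and C roughly equal, A clearly behind
print("losses:", losses)  # A and B roughly equal, C clearly fewest
```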
If you had to pick a die from the three, and then your opponent got to pick one to go head to head with you, which one would you pick? C, I assume, since it is clearly the best by "any conventional metric". And yet you would be likely to lose, because your opponent would pick B. The same goes for any other choice you might make.
The point they are making is that the three dice don't satisfy the transitive property.
Being the strongest is never assumed to require being individually stronger than everyone else. That's not how these things are evaluated.
On a related note, the mechanic where your opponent gets foreknowledge of your strategy and they're free to make whatever changes they want, but you're not allowed to change your strategy and you don't get any knowledge of theirs, isn't really relevant to much.
Even with that mechanic, C is very clearly the strongest die. It's just that someone who chooses C first will lose as badly as someone who chooses B first. (And much less badly than someone who picks A first.) But someone who chooses C second will win by much larger margins than anyone else can.
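The first-pick/second-pick asymmetry can be made concrete by computing the second player's best response to each possible first pick (a sketch; function names are mine, probabilities by exact enumeration):

```python
from itertools import product

dice = {
    "A": [3, 3, 3, 3, 3, 6],
    "B": [2, 2, 2, 5, 5, 5],
    "C": [1, 4, 4, 4, 4, 4],
}

def p_win(x, y):
    """P(die x beats die y) over all 36 face pairs (ties count for neither)."""
    return sum(a > b for a, b in product(dice[x], dice[y])) / 36

for first in "ABC":
    # The second player picks whichever die beats `first` most often.
    best = max((d for d in "ABC" if d != first), key=lambda d: p_win(d, first))
    print(f"first pick {first}: best response is {best}, "
          f"which wins {p_win(best, first):.0%} of rolls")
```

Picking A first loses 69% of rolls to C; picking B or C first loses 58% to A or B respectively, which matches the point above: C first is no worse than B first, and much better than A first.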
I appreciate your advocacy for the devil, I do (it helps me wrap my brain around these concepts more completely), but when you make a "never" statement, it only requires one counterexample to disprove. Consider the following 3 counterexamples to your argument...
Example 1) You're in a prison block with 100 inmates and 20 guards, who are all average-sized, average-skilled humans except for one inmate, who is Mike Tyson in his prime (the heavyweight boxing champion of the world). The prison warden says "We're going to have a boxing match. The prisoners may select a fighter to represent them from the inmates, and I will select a fighter from the guards. If the prisoner fighter wins, everyone goes free. However, if the guard wins, we add 2 years to everyone's sentences."
Here is just one case where it's very important to know that Mike Tyson is statistically the "strongest" (fighter, in this case) over any other individual human candidate (guard or prisoner) in the set of candidates.
Example 2: A small zoo has only 3 types of animals: an elephant, a monkey, and a wolf. Each animal is pregnant and will have a baby this month. The zoo attendants have a $1000 pool going and you have to bet on which baby will weigh the most when born. In this case, you clearly wager on the elephant, as it is statistically "the strongest" over all other individual candidates with respect to the body mass of its offspring.
Example 3: Back to your tennis example (I play and follow tennis, not that that matters much). Hyperbolically: suppose you have 5 players who each play one another 100 times, and player #3 beats every other player 90 times out of 100, while all other matchups split 50/50. In this case, yes, player #3 is the strongest player and is favored to win over any of the other 4 players in the set in a head-to-head match.
Yes, very often we rate and characterize the "strongest" as being the candidate that is statistically more likely to beat any other individual candidates in a set.
None of those are counterexamples. You're arguing that it's possible for the strongest player to be favored to win against everyone else. I said that that isn't a requirement of being the strongest. Obviously it is true that, where one player is favored to win against every other player, that player is the strongest. That's the Condorcet criterion. But it doesn't come close to being true that, if one player is the strongest, that player is favored to win against every other player.
You said it's "never assumed" and I think in many realms it would be assumed. And if the top ranked player consistently lost to certain other players, there would be active debates about whether they are the strongest or not.
> Example 2: ... The zoo attendants have a $1000 pool going and you have to bet on which baby will weigh the most when born.
There are some games that have a single property that is clearly sortable, like weight. In this case there is clearly one winner. (Actually, here you have a distribution of weights, with a mean and a dispersion, so it may be more complicated.)
There are more complicated games. When I was young we had cards with the properties of cars, like weight, length, price, and other stuff. The game was to take the top card of your deck, pick a property, and compare it with the same property on the top card of the other player's deck. Some cards were stronger than others. I don't remember the strongest card, but I guess there was one. Anyway, that card didn't win against all cards; it was just good against most cards, and you could be unlucky if it was at the top of your deck and the other player got to select one of its bad properties.
IIRC in Starcraft II, at some point Serral was #1 and Reynor #2. In a match between them Reynor had a slightly better chance to win, but against everyone else Serral had a better chance to win than Reynor.
B is actually the best if you play all three at the same time.
Intransitive dice are still surprising and interesting to me. I wonder if there are 3 such dice which are equally strong when all are played at the same time?
That code is buggy. It also counts how often each die beats or loses to itself, and that happens more often for the 222555 die (9/36 win, 9/36 loss, 18/36 tie) than for the other two (5/36 win, 5/36 loss, 26/36 tie).
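Those self-matchup fractions can be checked by enumerating each die against itself (a quick sketch):

```python
from itertools import product

dice = {
    "A": [3, 3, 3, 3, 3, 6],
    "B": [2, 2, 2, 5, 5, 5],
    "C": [1, 4, 4, 4, 4, 4],
}

for name, d in dice.items():
    pairs = list(product(d, d))        # all 36 equally likely face pairs
    win = sum(a > b for a, b in pairs)
    loss = sum(a < b for a, b in pairs)
    tie = sum(a == b for a, b in pairs)
    print(f"{name} vs {name}: {win}/36 win, {loss}/36 loss, {tie}/36 tie")
```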
I think “strongest” refers to the Gates/Buffett bet at the beginning of the article. There is no die you can pick that will give you an advantage over someone who picks second.
Contrast with a scenario where two dice are our normal 1-6 flavor and one die is all 6’s. In that case the all-6 die is stronger because it doesn’t matter which other die it goes up against.
> I think “strongest” refers to the Gates/Buffett bet at the beginning of the article. There is no die you can pick that will give you an advantage over someone who picks second.
But there is a die you can pick that is uniformly better than any alternative. That die is C. If your opponent is playing correctly, C will give you the optimal outcome. And if your opponent isn't playing correctly, C will also give you the optimal outcome. This is not true of the other two dice.
If you pick C, and your opponent picks B, you lose 58% of the time. What is optimal about that?
> If Gates had chosen first, then whichever die he chose, Buffett would have been able to find another die that could beat it (that is, one with more than a 50% chance of winning).
It seems like you're thinking about this outside of the relevant context, which is a math problem, loosely inspired by real life games.
All your program does is pair A vs B 2/9 of the time, B vs C 2/9 of the time, and A vs C 2/9 of the time (pairing a die against itself the remaining 3/9 times).
So it approximates wins = 10000 * 2/9 * [58%+31%, 42%+58%, 42%+69%] = 20000/9 * [89%, 100%, 111%].
(Note these percentages are rounded, e.g., 21/36 to 58%.)
This is just averaging the win and loss percentages of each die against the other two, not a "conventional metric" like the expected value of a roll, which is the same for all three dice as for a regular die (3.5).
That's what the game is. Rolling a particular value doesn't get you anything; the expected value of a roll is not informative. If I gave you a die with all 6s, a die with all 3s, and a die with all 1s, and told you the rules of the game were that I give you $600 minus $100 for each point you roll, which die would be the strongest? Which die would have the highest expected value?
Note that your math is incorrect; B gets more wins from being paired against itself than either A or C do.
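The exact expected win counts under the program's pairing scheme can be computed directly, self-matchups included (a sketch using exact fractions; helper names are mine):

```python
from fractions import Fraction
from itertools import product

dice = [
    [3, 3, 3, 3, 3, 6],  # A
    [2, 2, 2, 5, 5, 5],  # B
    [1, 4, 4, 4, 4, 4],  # C
]

def p_beats(x, y):
    """Exact P(one roll of die x beats one roll of die y)."""
    return Fraction(sum(a > b for a, b in product(dice[x], dice[y])), 36)

# Each ordered (die1, die2) pair occurs with probability 1/9.  Die d gains
# a win at rate p_beats(d, o) whether it is die1 or die2, and a self-pairing
# credits d whenever the two rolls differ, i.e. at rate 2 * p_beats(d, d).
# All three cases collapse to a coefficient of 2/9 per opponent:
expected = [
    10000 * Fraction(2, 9) * sum(p_beats(d, o) for o in range(3))
    for d in range(3)
]
print([round(float(e), 1) for e in expected])  # ≈ [2284.0, 2777.8, 2777.8]
```

B and C come out exactly equal in expectation, which is why the original run showed no pattern between them; B's self-pairings (win rate 9/36 per roll-off, versus 5/36 for A and C) make up for its worse record against the other two dice.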
edit: here are results from sets of 1000 single-elimination 256-man tournaments where everyone brings a random die (in this case, draw/draw is equivalent to win/loss):
The round of 8 is much more lopsided, generally featuring >3.4 Cs and <2.9 Bs. Bs get a boost in rate of becoming overall winner because there are so many Cs at high levels. Cs have an even higher rate of being overall winner, even though they have worse performance starting from the round of 8, because their overall record is so much better.
Obviously, in reality, people wouldn't bring random dice to this event. More people would bring C dice, because C is stronger than the other two.
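A minimal sketch of such a tournament simulation, for anyone who wants to reproduce it (bracket details and tie handling are my assumptions; here ties are settled by re-rolling until someone wins):

```python
import random

dice = {"A": [3, 3, 3, 3, 3, 6],
        "B": [2, 2, 2, 5, 5, 5],
        "C": [1, 4, 4, 4, 4, 4]}

def duel(x, y):
    """One single-elimination match; ties are re-rolled until broken."""
    while True:
        rx, ry = random.choice(dice[x]), random.choice(dice[y])
        if rx != ry:
            return x if rx > ry else y

def tournament(entrants=256):
    """256 players each bring a random die; play down to one champion."""
    field = [random.choice("ABC") for _ in range(entrants)]
    while len(field) > 1:
        field = [duel(field[i], field[i + 1]) for i in range(0, len(field), 2)]
    return field[0]

random.seed(0)
champions = {"A": 0, "B": 0, "C": 0}
for _ in range(1000):
    champions[tournament()] += 1
print(champions)  # the parent comment reports C winning most often, A least
```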
Wow that figure is bad. The table compares only the situation where both dice roll the same labeled side, which isn't meaningful (you would be able to conclude that a d6 + 1 is a worse die than a d6 this way...). But at least the numbers on the bottom are right.
I’m trying to understand what the implication of the Gates-Buffett story is: is it supposed to be an exaggerated story like a mathematician joke? Or is it being implied here that Gates instantly deduced this intransitive property merely by looking at the dice?
People like apocryphal stories about successful people. Celebrities, billionaires, professors. imo the story serves as a way to connect the reader to the article. The HN crowd tends to care less about these kinds of things, which is fine, but the audience is much broader.
Or maybe he just suspected there was something funny about the game, and that the second player had a winning play. Playing second he would also buy more time to figure out what's going on exactly.
It's really not that hard to work out that it's true when given 3 dice with this property, especially if you already know that it's possible or if your suspicion level is high due to the framing.
All you need to do is look at each pair of dice and compare. Especially for some sets of dice the comparison is trivial. Like look at the ones in the article with five 3s and five 4s, you don't even need to think to figure out which one wins more.
My guess is that if this story is true, Gates was just familiar with the ruse. Having read about intransitive dice before this article, if someone proposed a game like this to me, I'd definitely assume that something was amiss and try to see if I could determine that it was the case. It does seem like an explanation is purposely left out to make it seem more mysterious than it actually is, though.
The punchline of the story is supposed to be that Gates just knows that if the world's most famous investor proposes a bet, you don't want to take the other side of it. If Buffett wants to go second, then so does Gates.
It might just imply that Gates reads Martin Gardner (who had a column in a popular periodical) or the equivalent, but I don't see why he wouldn't have been able to figure expectation values in his head?