Throwing dice gets to the truth about killing leopards (newscientist.com)
90 points by ColinWright on July 31, 2011 | 24 comments



This reminds me of another technique I learned in a PoliSci class (too many years ago). The prof had been polling people about sensitive topics like race. His survey asked respondents only yes/no questions: they were asked only to listen to each question, privately note yes or no, and then report just the total count of their yeses. By putting the sensitive question in front of only half the survey group, the difference in average counts between the two halves gives you a measure of the true rate for the sensitive question. You end up doubling the sample size to get the same number of responses, but you allow people to hide behind their other answers.
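For the curious, here's a minimal sketch (Python, with entirely made-up question lists and counts) of how that difference-in-means estimate from the item-count / "list experiment" design works:

  # Group A answers 4 innocuous questions; group B answers the same 4
  # plus the sensitive one. Each respondent reports only the total count
  # of "yes" answers, never the individual answers.
  group_a_counts = [2, 1, 3, 2, 1, 2]   # hypothetical reported counts
  group_b_counts = [3, 2, 3, 2, 2, 3]   # hypothetical reported counts

  mean_a = sum(group_a_counts) / len(group_a_counts)
  mean_b = sum(group_b_counts) / len(group_b_counts)

  # The difference in average counts estimates the fraction answering "yes"
  # to the sensitive question, without revealing any individual's answer.
  print(f"Estimated sensitive-question rate: {mean_b - mean_a:.2f}")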


But won't people soon get wise to the actual intent and clam up again? Not if the results are used carefully, Jones argues. "It's not going to be used to criminalise people," she says. Instead, the results could be used to target conservation outreach efforts and compensation for livestock lost to predators in areas where ranchers are especially likely to kill carnivores.

This is key. When you criminalize people for trying to meet their own needs, defend themselves and so on, you create very big problems. It has to be acceptable to survive, make a living and so on. There has to be a system in place for resolving conflicts of interest that adequately considers the needs of both sides.

Justice cannot be for one side alone, but must be for both. -- Eleanor Roosevelt

-------------

Finding the quote I wanted also led to these quotes and I like them:

I have always found that mercy bears richer fruits than strict justice. -- Abraham Lincoln

Justice that love gives is a surrender, justice that law gives is a punishment. -- Mohandas Gandhi

We win justice quickest by rendering justice to the other party. -- Mohandas Gandhi


I remember being taught this technique in my discrete math class last semester. At first blush, it seemed utterly ridiculous. One could still infer the result, the true signal.

It was only after some consideration from the human side that I began to appreciate the psychological appeal. I can feel in my gut the appeal of hiding behind the throw of a die, even while the mind rebels at the paper-thin plausible deniability it provides.

EDIT: I say this not because I misunderstand the technique mathematically. Indeed, this was one of the more enjoyable topics covered. One could still infer the probability an individual was telling the truth depending on the strength of the fuzzing signal.

I am merely expressing my puzzlement at this quirk of psychology.


Because you aren't asking the same person the same question over and over again, and in fact only have a single data point for each person, there is no way to "infer the result, the true signal": the "plausible deniability" this yields isn't just a psychological trick, but is actually mathematically sound; in the end you can determine things about the population as a whole, but you cannot determine who was forced to say "yes" and who volunteered a "yes".


I've gotten a sufficiently large number of upvotes from this comment that I feel compelled to follow up after having gotten a (half) night's sleep: you are asking the pollee multiple /different/ questions, and if there are correlations between the answers (which the poller suspected to be the case in this article) you actually can deduce information about each individual with increasingly high probability. If this is what the person I responded to meant by "true signal", then he is correct.


My wife has used RRT quite successfully to gauge the amount of illegal abalone fishing in California. Before that, the Fish and Game department did little more than guess (and of course, underestimate) how much illegal fishing was going on.


You can't claim to have successfully gauged it when you don't know the real numbers to begin with; there's no way to validate whether the method is working.


Wouldn't this greatly throw off the results in cases of rare behavior? For example, if only 1 in 100 people (1%) were leopard killers, but the die forces a "yes" 1 time in 6 (16.7%), how would you prevent the results from being swamped by dice-required yes answers?

I'm assuming there's a statistical solution to this - if someone could explain this or link to one I'd appreciate it.


Easy version:

If the sample is large enough, 1/6 of the answers are forced "yes", 1/6 are forced "no", and only 4/6 of the answers are real.

So to get the actual frequency of "yes", you have to remove the 1/6 of forced "yes" answers, but the total size of the real sample is only 4/6 of the original, so the formula is

  Actual_YES= (Measured_YES - (1/6)) * (6/4)

  Actual_NO= (Measured_NO - (1/6)) * (6/4)
  
(If Measured_YES+Measured_NO=1=100%, then Actual_YES+Actual_NO=1=100%)

More technical details:

You have to be careful, because the number of forced "yes" and "no" answers will not be exactly 1/6 of the total sample. For example, suppose that you ask 60 people if they are "aliens" using this method.

If you get 11 "yes" answers, it doesn't mean that (11/60 - 10/60) * 6/4 = 2.5% of the people are "aliens".

If you get 9 "yes" answers, it doesn't mean that (9/60 - 10/60) * 6/4 = -2.5% of the people are "aliens"!

And each of these sampling results happens in roughly 13% of the polls you run.
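A minimal sketch of that correction in Python (the forced-"yes"/forced-"no" probabilities of 1/6 each follow the scheme above, and the poll numbers are the ones from the example):

  # Forced-response correction: the die forces "yes" with prob. 1/6,
  # forces "no" with prob. 1/6, and lets the truth through with prob. 4/6.
  def corrected_yes_rate(measured_yes_fraction):
      # measured = 1/6 + (4/6) * actual  =>  actual = (measured - 1/6) * 6/4
      return (measured_yes_fraction - 1/6) * (6/4)

  print(corrected_yes_rate(11/60))   # 0.025  -> 2.5% "aliens"
  print(corrected_yes_rate(9/60))    # -0.025 -> an impossible negative estimate

  # Quick check of the sampling noise: with zero true "aliens" among 60
  # people, how often do the forced answers alone produce exactly 11 yeses?
  import random
  trials = 100_000
  hits = sum(sum(random.randint(1, 6) == 6 for _ in range(60)) == 11
             for _ in range(trials))
  print(hits / trials)               # roughly 0.12-0.13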


No, it wouldn't throw off the result, but no matter the technique you would need a very large sample to accurately measure a behavior that rare.


You are right. The "signal to noise" ratio will be terrible if the rare behavior is much less likely than the random event. The only "statistical" solution would be to calibrate things by making the random event similarly unlikely (e.g. one could use a 100-sided die). Whether that statistical fix destroys the psychological advantage of the whole thing, I don't know!
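To put rough numbers on that trade-off, here is a back-of-the-envelope sketch (assuming the usual binomial variance for the corrected estimate; the 0.5% target precision is just an illustrative choice):

  # Forced-response design: forced "yes" and forced "no" each with prob. q,
  # truthful answer otherwise.  How many respondents are needed to estimate
  # a rare behaviour to a given standard error?
  def sample_size_needed(prevalence, q, target_se):
      observed_yes = q + (1 - 2 * q) * prevalence            # expected "yes" rate
      var_per_person = observed_yes * (1 - observed_yes) / (1 - 2 * q) ** 2
      return var_per_person / target_se ** 2

  # A 1% behaviour, estimated to +/- 0.5% (one standard error):
  print(round(sample_size_needed(0.01, 1 / 6, 0.005)))    # 6-sided die: ~13,000 people
  print(round(sample_size_needed(0.01, 1 / 100, 0.005)))  # "100-sided die": ~800 people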


I'm concerned that some of the respondents wouldn't quite have understood the rules of the game, and that this might throw the results off significantly.


You could always ask them other questions you already know the answers to in order to measure that.


That would only, even theoretically, be a possible approach to explore if you had questions that you knew this group of people were exactly as likely to lie about as killing protected leopards. How could one know which questions those would be? One couldn't. Despite this problem, I would still like to see such calibration questions added to each round, such as "Does astrology work for you?" and "Have you ever been taken by aliens?" The results for these questions would give some useful data for comparison.


You're assuming they lie. I'm addressing the worry that they might not understand the rules.


All right. That is a very good question to bring up. Rules like those described are not as straightforward for ordinary people to understand as they may seem to the experimenter. I had to read it a couple of times to understand that the idea was to tell the truth on 2-5 and give a fixed response on 1 or 6. Surely there were participants that didn't understand, and their number among rural farmers is going to be different from the number among the Western college psychology students on whom most of these sorts of tests are developed and calibrated.

6 is an unlucky number, the number of the devil, in some cultures. Other cultures have feelings about 1 (unity), 2 (dualism), 3 (trinity), 4 (Chinese good luck, Indian sacred number) and 5 (witchcraft). When talking of small effects, the emotions experienced may vary from person to person depending on education, IQ, cultural background and environment, and this can skew results in a way that depends on the particular population tested. A person from a culture that believes 6 is an evil number and a bad omen may be very slightly more likely to change their response on a 6, seeing it as a warning.

One can't take a test like this, which is quite subtle and looks at tiny effects within noise that comes not from rocket engines but from subjective human reactions, dump it on any population, and assume results calibrated on a different population are a valid interpretation. That would have to be shown first, in cross-cultural comparison testing.


That's pretty smart. It basically anonymizes individual responses without harming the statistical outcome.


I'm disappointed that the title of this submission got changed. I deliberately chose a title that kept the original flavor, but which tried to show why this might be relevant to this specific audience.

The title I chose was something like "Getting Feedback - Throwing dice get to the truth about killing leopards" or something similar. I don't remember exactly - I'll have to go and look it up.

I understand why the mods don't want titles changed radically, but I spent significant time and effort getting something that retained the essence of the original, but which also tied it explicitly to the HN audience without editorializing or sensationalizing.

Very disappointed.

http://www.youtube.com/watch?v=84zY33QZO5o


RRT is interesting; reminds me a great deal of the dining cryptographers' protocol https://secure.wikimedia.org/wikipedia/en/wiki/Dining_crypto...


The problem, which is obvious to anyone but a researcher with strong bias, is that 19% of all people have obviously NOT killed a leopard in the last 12 months and therefore this questioning method does not work.


Thank you for yet again demonstrating that when a layman says something is 'obvious' and refutes some researchers' work, that layman is really just full of it.


Your belief that this method is scientific, valid, and validated cross-culturally and for different forms of questioning shows that it is you who are the "layman", assuming by "layman" you mean a person ignorant of science, gullible, and clueless.


The article says that they surveyed only "ranchers in north-eastern South Africa".


Excellent. Please state the sample size, population size, and citation of studies that have experimentally validated this method cross-culturally and for the specific unique cultural group of ranchers in north-eastern South Africa, who have their own distinct culture curiously quite different from college students.



