
Do you take issue with the statistics of the study? Or do you just feel 50 seems like a small number in your gut?



Something did ping my radar, though it's hard to say for sure because the study isn't published yet. What the news article says is:

But instead, the performance was largely similar, except when it came to the timing of events in the story. "The Kindle readers performed significantly worse on the plot reconstruction measure, ie, when they were asked to place 14 events in the correct order."

What I would like to know is: how many other performance measures did they test? How "significant" is "significantly worse"? If, say, they tested for 100 performance measures (unlikely, but I'm using a large number on purpose), then random chance means that there are likely to be some measures that are "significantly worse." If, on the other hand, they only tested 3 performance measures, then it's less likely to be random chance.

Basically, if you run an experiment and test for a large number of things, you can't say much about the outliers. With large enough numbers, there are bound to be outliers. However, once you've run such an experiment and seen those outliers, you can run further experiments to test whether they were random chance or whether there really is some correlation there.
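A quick simulation makes this concrete. The numbers here (100 measures, 1,000 repeated studies) are illustrative, not from the study; the point is just that testing many measures with no real effect still produces "significant" results:

```python
import random

random.seed(1)

ALPHA = 0.05
N_MEASURES = 100  # the deliberately large number from the comment above

def count_false_positives() -> int:
    """Simulate one study in which NONE of the measures has a real effect.

    Under the null hypothesis each measure's p-value is uniform on
    [0, 1], so a draw below ALPHA is a false positive."""
    return sum(random.random() < ALPHA for _ in range(N_MEASURES))

# Repeat the whole study many times and average the false-positive count.
trials = [count_false_positives() for _ in range(1_000)]
mean_fp = sum(trials) / len(trials)
print(f"average 'significant' measures per study: {mean_fp:.2f}")  # ~5
```

With alpha at 0.05 and 100 true-null tests, you expect about five spurious hits per study, which is exactly why a single standout measure proves little on its own.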


xkcd has a comic explaining the same thing.

http://xkcd.com/882/


While the xkcd comic has a lot of truth to it, it's mainly about many separate individual experiments (as well as some poorly done ones). When running large sets of correlations, standard operating procedure is to use one of several techniques to counteract this effect.
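One of the simplest such techniques is the Bonferroni correction: divide the significance threshold by the number of tests. A minimal sketch (the three p-values are made up for illustration):

```python
def bonferroni(p_values: list[float], alpha: float = 0.05) -> list[bool]:
    """Bonferroni correction: a result counts as significant only if its
    p-value clears alpha divided by the number of tests performed."""
    threshold = alpha / len(p_values)
    return [p < threshold for p in p_values]

# Three hypothetical p-values; only 0.004 clears 0.05/3 ≈ 0.0167.
print(bonferroni([0.03, 0.20, 0.004]))  # → [False, False, True]
```

Bonferroni is conservative; procedures like Holm's step-down or Benjamini-Hochberg FDR control trade some of that strictness for more power, but the idea is the same.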


Each performance measure is a different individual experiment.


The Guardian article linked actually doesn't present the statistics of the study, which hasn't been published yet. Absent further information, critiquing the sample size sounds pretty reasonable to me.


You can't criticize the sample size without knowing the effect size.


Withholding evidence isn't a defense against criticism. If you won't TELL ME your effect size, but you do tell me the sample size, I can certainly say, "I am skeptical of your conclusion, because of your sample size."


Criticizing the statistics of a study that hasn't been published yet, based on science news reporting about it, is an exercise in madness.

Criticize the science news instead.


You should instead say "I am skeptical of your conclusion, because I don't know your effect size."
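The tradeoff the last few comments are arguing about can be made concrete with the standard normal-approximation power formula for comparing two groups. Nothing below is from the study; the effect sizes are just Cohen's usual small/medium/large conventions:

```python
from statistics import NormalDist

def n_per_group(effect_size: float,
                alpha: float = 0.05,
                power: float = 0.8) -> float:
    """Approximate per-group sample size for a two-group comparison, via
    the usual normal approximation:
        n ≈ 2 * (z_{1-alpha/2} + z_{power})^2 / d^2
    where d is the standardized effect size (Cohen's d)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    return 2 * (z_alpha + z_power) ** 2 / effect_size ** 2

for d in (0.2, 0.5, 0.8):  # small, medium, large effects
    print(f"d={d}: ~{n_per_group(d):.0f} readers per group")
```

Roughly: ~25 per group suffices for a large effect, ~63 for a medium one, and nearly 400 for a small one. So whether 50 readers total is "small" really does depend entirely on the effect size.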



