
This is fantastic.

Just curious - did you run any tests for statistical significance?




Assuming a truly random sample of 5000 unique, non-overlapping viewers for each experiment, the standard error for the experiments runs from 0.3% to 0.5%.

The true value for any given sample is pretty likely (>95%) to be within +/-2 standard errors. So in this case, the difference needs to be more than roughly 0.6% to 1.0%, depending on which experiments you're comparing. In other words, these look like significant differences.

(For reference, SE ≈ sqrt(p*(1-p)/N) when N is small relative to the population size.)
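A minimal sketch of the same check in Python, if you want to run the numbers yourself. The thread doesn't give the actual conversion rates, so p_a, p_b, and n below are placeholders; this uses the standard two-proportion comparison (SE of the difference = sqrt(SE_a^2 + SE_b^2)), which is a slightly more formal version of the +/-2 SE eyeball test above:

    import math

    def standard_error(p: float, n: int) -> float:
        # SE of a sample proportion; valid when n is small relative
        # to the population, so no finite-population correction.
        return math.sqrt(p * (1 - p) / n)

    # Hypothetical rates for two experiments (not from the thread).
    n = 5000
    p_a, p_b = 0.10, 0.12

    se_a = standard_error(p_a, n)   # ~0.42%
    se_b = standard_error(p_b, n)   # ~0.46%

    # SE of the difference between two independent proportions.
    se_diff = math.sqrt(se_a**2 + se_b**2)

    diff = p_b - p_a
    z = diff / se_diff
    print(f"difference = {diff:.1%}, SE = {se_diff:.2%}, z = {z:.2f}")
    # |z| > 1.96 corresponds to significance at the 95% level.
    print("significant at 95%" if abs(z) > 1.96 else "not significant at 95%")

With these placeholder numbers the 2-point difference comes out at z ≈ 3.2, comfortably past 1.96, which matches the back-of-the-envelope conclusion above.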


Do you exclude traffic and clicks received from HN?


Heh...well, I didn't do the experiment. I'm just the messenger.



