The Probability and Statistics Cookbook (statistics.zone)
239 points by kercker on Nov 29, 2016 | 46 comments



In section 13, I see they are still teaching the Fisher/Neyman-Pearson hybrid (i.e. the null ritual). For a brief overview see [1]. To start you off: Fisher said the idea of power was nonsense[2], and Neyman-Pearson said a hypothesis is either rejected or not (there is no gradient of evidence for/against).[3]

[1] Gigerenzer, G (November 2004). "Mindless statistics". The Journal of Socio-Economics. 33 (5): 587–606. doi:10.1016/j.socec.2004.09.033

[2] 'The phrase "errors of the second kind", although apparently only a harmless piece of technical jargon, is useful as indicating the type of mental confusion in which it was coined.' -Ronald Fisher. "Statistical Methods and Scientific Induction." Journal of the Royal Statistical Society. Series B (Methodological) Vol. 17, No. 1 (1955), pp. 69-78 https://www.jstor.org/stable/2983785

[3] 'no test based upon the theory of probability can by itself provide any valuable evidence of the truth or falsehood of that hypothesis.' -Neyman, J; Pearson, E. S. (January 1, 1933). "On the Problem of the most Efficient Tests of Statistical Hypotheses". Phil. Trans. R. Soc. Lond. A. 231 (694–706): 289–337. doi:10.1098/rsta.1933.0009.


Indeed, when I took my introductory statistics courses at UC Berkeley (STAT 200A and STAT 200B), we discussed the pitfalls of blindly applying these concepts, albeit at a very high level.

Initially, I wrote this cookbook/cheatsheet in order to structure and retain the material in these courses, not to challenge them. Most of the content comes from the cited references, all of which have a very terse and mathematical presentation. It would be great to augment the current document with pointers to the literature that offer a critical discussion. As a non-statistician, I lack the historical perspective, but I always appreciate contributions from experts in the field. (The document is open-source: https://github.com/mavam/stat-cookbook)


I don't have time to contribute, but the Wikipedia page on NHST[1] used to be pretty good about refs. A lot of the stuff pointing at the controversy/history has been slowly removed over the last few years... it still isn't too bad, though. Anyone interested can also try looking through old versions. There have been thousands of papers published on that topic.

[1] https://en.wikipedia.org/wiki/Statistical_hypothesis_testing...


How would you answer the question of how big a sample has to be to measure an effect of a certain size?


I think I know what you are trying to ask, but this sounds muddled to me. Can you clarify what you mean by "measure an effect of a certain size"?


In the placebo group, value A is x%. You want to know whether in the treatment group the value is at least x + 10%. How many people do you have to test? (Without getting into the details of study design. :-)

Anyway, you don't think power analysis à la Cohen is useful?
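For concreteness, a Cohen-style calculation for that scenario might look like the sketch below. The 20%/30% figures are made-up stand-ins for x and x + 10 points, and statsmodels is just one tool that implements it:

    # Hypothetical power calculation: how many subjects per group to
    # detect a jump from 20% to 30% with 80% power at alpha = 0.05?
    from statsmodels.stats.proportion import proportion_effectsize
    from statsmodels.stats.power import NormalIndPower

    p_placebo, p_treatment = 0.20, 0.30                     # stand-ins for x and x + 10 points
    effect = proportion_effectsize(p_treatment, p_placebo)  # Cohen's h

    n = NormalIndPower().solve_power(effect_size=effect, alpha=0.05,
                                     power=0.80, alternative='larger')
    print(f"~{n:.0f} subjects per group")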


If that is what you care about for some reason, you would set x + 10% as the null hypothesis, right? Not sure what that scenario is supposed to have to do with power.

Also, this isn't really about what I think, rather I would hope people check the Fisher 1955 ref and go from there.

What I think though is this whole idea of testing vague/vagrant hypotheses (eg the example we used here) is wrong in the worst way possible. The null hypothesis should be deduced from some theory, or at least correspond to what you care about. I have shared this paper on the site many times, I think it should be standard reading in high school: http://www.fisme.science.uu.nl/staff/christianb/downloads/me...


You cited Gigerenzer, Neyman, Pearson as if they opposed the concept of power. Fisher might be in a different boat but he also claimed that smoking doesn't cause lung cancer, so he probably wasn't always right. :-)

Sample size, effect size & power are related concepts in the context of power analysis -- see also Cohen's "A power primer", which is available on the Internet. The concept of power has nothing to do with "degrees of evidence" or vague hypotheses.


>"You cited Gigerenzer, Neyman, Pearson as if they opposed the concept of power."

Sorry for the miscommunication. The point is that power is a Neyman/Pearson concept, Fisher said it didn't make sense. On the other hand a gradient of evidence is a Fisherian concept, Neyman/Pearson said that didn't make sense.

What people have been teaching as stats is a mishmash of the two that makes sense to no one who thinks these types of things through. Gigerenzer reviews this strange phenomenon and offers some entertaining commentary; it is a decent starting point.


>"The concept of power has nothing to do with "degrees of evidence" or vague hypotheses."

Yes it does. To properly assess the probability of incorrectly failing to reject a hypothesis, you need to know how likely the data would be under various rival hypotheses. This depends on the precision of the various hypotheses. This is explained by Fisher in my original ref.
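A minimal sketch of what I mean, with purely illustrative numbers: the Type II error rate of one and the same one-sided z-test changes depending on which rival hypothesis you compute it against:

    # Type II error of a one-sided z-test (H0: mean = 0, known sigma),
    # evaluated against three rival hypotheses about the true mean.
    from scipy.stats import norm

    alpha, n, sigma = 0.05, 50, 1.0
    crit = norm.ppf(1 - alpha)               # rejection cutoff on the z scale

    for rival_mean in (0.1, 0.3, 0.5):       # rival hypotheses
        shift = rival_mean * n**0.5 / sigma  # where the statistic lands under the rival
        beta = norm.cdf(crit - shift)        # P(fail to reject H0 | rival is true)
        print(f"rival mean {rival_mean}: Type II error = {beta:.2f}")

The vaguer the rival hypothesis, the less meaningful any single one of those numbers is.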


You may also want to check out figure 2 of this paper which further illustrates the relationship between a statistical hypothesis and the research hypothesis: http://rhowell.ba.ttu.edu/Meehl1.pdf


> Ten years later, I wrote at greater length along similar lines (Meehl, 1978); but, despite my having received more than 1,000 reprint requests for that article in the first year after its appearance, I cannot discern that it had more impact on research habits in soft psychology than did Morrison and Henkel.

is the author using a null value to inform this perception?


Doesn't sound like it. It sounds like Meehl is giving an order of magnitude estimate. He is saying that it is his impression that both Morrison & Henkel's paper and his own seemed to have little effect on practice.

Clearly he doesn't think it had exactly zero effect, since it affected him!


Assuming 1000 reprints somehow implies there should be a discernible 'impact on research habits' seems like an example of what Figure 2 of your reference(o) calls 'Estimating parameters from sample'.

(o) http://rhowell.ba.ttu.edu/Meehl1.pdf


You have it reversed.

"Estimating parameters from sample" (on the right) would be his observation that there was little discernible effect. Thinking that 1000 reprints of the paper would have a larger effect on practice would more correspond to "theory" (on the left), although that is a pretty vague one.


But where does the 1000 figure come from? It reads as arbitrary.


Just because the ancients said something is no reason to take it as gospel.


Sure, but what is going on is that statistics textbooks have been teaching nonsense since the 1940s, and that nonsense has been taken as gospel.

It is no accident that statistics is usually presented as arising anonymously and forming a monolithic paradigm, when in fact the exact opposite is true. Stats must be one of the most controversial areas of intellectual activity around. Those refs are to start off the curious, some of whom will manage to break free of the brainwashing and think for themselves.


Cool, but I would call this a cheatsheet. Almost the inverse of a cookbook, which I think of as a set of how-tos for tasks you want to accomplish.


Yeah. Don't suppose anybody has the equivalent of an actual "cookbook"? It would be a great resource for helping to skill up product owners on A/B and multivariate testing when they don't have a math-heavy background.
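To be concrete, by "cookbook" I mean recipes shaped roughly like this sketch (a two-proportion z-test on made-up A/B conversion counts; statsmodels is just one option), but wrapped in guidance on when each recipe applies:

    # Hypothetical A/B test: did variant B convert better than variant A?
    from statsmodels.stats.proportion import proportions_ztest

    conversions = [120, 150]   # made-up successes in variants A and B
    visitors = [2400, 2500]    # made-up sample sizes per variant

    stat, p_value = proportions_ztest(conversions, visitors)
    print(f"z = {stat:.2f}, p = {p_value:.3f}")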


"The Seventh Edition of Introduction to Statistical Quality Control provides a comprehensive treatment of the major aspects of using statistical methodology for quality control and improvement." http://a.co/5xRV2UE

Covers: Hypothesis testing (single population), Analysis of variance (multiple populations), Control charts, Design of Experiments (A/B testing and beyond).

Edit: Used in the undergraduate industrial & systems engineering program at USC when I was a student. The 7th edition has various cookbook-style walkthroughs.


This was used at NC State in the industrial & systems engineering program, too. Very good book.


Ironically, I used to title it "cheatsheet" but then switched to "cookbook" when the document grew beyond 25 pages. I don't mind going back to "cheatsheet". Feel free to submit a pull request.


Quite useful indeed! If I may, I'd love to see references for each concept.

I appreciate anyone can look "sample space" or "parametric inference" up on Google, but it'd take some time to find an authoritative source (especially for people like me who do not work with stats every day). It'd be awesome if I could see a "[1]" and a reference (or list of references), either online or offline, where the concept is defined.


I fully agree that tracing each concept to its originating source has great value.

From a presentational point of view, I wonder if it would pollute the plain/clean presentation of the formulas. Perhaps very small footnote-like citations could work, but it has to be unobtrusive.

The hardest part, however, will be coming up with the authoritative source for each concept. As it's outside my field of expertise, we would have to rely on the community to fill in these details incrementally.


I'm struggling to understand the CDF for the discrete uniform distribution (line 1 of the first page). I think the equation is missing a "+1" in the denominator, as otherwise it doesn't make sense. Although it has been a while since I took a stats class, so I may be mistaken.
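For reference, the convention I'd expect for $X \sim \mathrm{Unif}\{a, \dots, b\}$ (a standard one, though not necessarily the one the cookbook follows) is

    F(x) = \frac{\lfloor x \rfloor - a + 1}{b - a + 1}, \qquad a \le x \le b,

which does carry the +1 in the denominator, since the support has $b - a + 1$ equally likely points.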


Anyone have a recommendation on a good statistics textbook?

Everything I've tried has been absolutely horrible except for "An Introduction to Error Analysis" by John Taylor (yeah the classical mechanics guy). Unfortunately it's a bit basic...


If you want to actually think about what statistical models can tell us, as well as run them, Richard McElreath's book is amazing:

http://xcelab.net/rm/statistical-rethinking/


Discovering Statistics Using R by Andy Field. Fantastic book, very readable and at times hilarious.


I'm reading 'Lectures on Probability Theory and Mathematical Statistics' by Marco Taboga. It's a ~600 page book comprising ~75 lectures on various topics in probability and statistical inference. What I like about it is that the lectures cover key topics in condensed form, so it's basically a refresher on important topics and a reminder that they exist. You can always read a chapter and supplement the reading with Wikipedia. Additionally, there are solved practical problems at the end of every chapter, so you're not just reading about theory; you have to put it into practice. You can find the PDF online for free, since it started as a web textbook, or you can do as I did and buy a hardcopy on Amazon for like $12.


Elementary: Freedman, Pisani, Purves, "Statistics" (uses some unconventional terminology)

Intermediate: Rice, "Mathematical Statistics and Data Analysis"

Wasserman's "All of Statistics" is also a very good book except that it is too terse to be a primary text.


Intuitive Biostatistics by Motulsky

Goes over the commonly used methods in science but only explains the intuition behind each method and what you can and cannot expect from the results, drawbacks, etc. Written in a very conversational and opinionated style; I enjoyed it.


I find this to be fairly involved but a good follow-up to introductory texts: Casella, George, and Roger L. Berger. Statistical Inference. 2nd ed. Duxbury Advanced Series. Pacific Grove, CA: Duxbury/Thomson Learning, 2002.


Have you looked at "Statistics in a Nutshell"? http://shop.oreilly.com/product/0636920023074.do


I have this book, and it's not what I'd call a textbook.

It rarely explains how any of it works (you'd be hard pressed to find the formula for a probability distribution function, for instance), so it's just a one-stop collection for a lot of useful tests and the circumstances under which they should be used.

It's less of a textbook and more of a reference for someone who needs to occasionally work with statistics and can't remember offhand when the t-test is appropriate or the procedure for a chi-squared test or whatever.


My comment on another branch of this thread may help: https://news.ycombinator.com/item?id=13061431



Or, if you want the good stuff (:p):

Bayesian Data Analysis by Gelman et al.


Nice! Did anyone make a .mobi file so I can read it on my Kindle? PDF s*cks!


You can email your @kindle.com account with a PDF attachment, and it'll be delivered to your device. Google the details.


First, it will only convert it if the word "convert" is in the email subject.

Second, this is almost useless for multi-column PDFs. It only works well with very simple PDFs; anything with a minimum of complexity becomes junk.

Reading PDFs on the Kindle is a terrible experience. The text doesn't reflow, you get lost in multiple columns, and even the text itself doesn't render properly.

Since it is open, with the original content published on GitHub, I thought someone might have generated a .mobi file.


Yes, but the experience isn't great IMO.


Kindle should be able to read PDFs. If you don't mind me asking, which Kindle version are you using?


Kindle support for PDFs is really suboptimal. It doesn't zoom or reflow anything, and reading multi-column documents is quite a hassle (at least on mine, a first-generation Paperwhite).


I have the second version, and it is also terrible. You shouldn't buy Kindles to read PDFs.




