> For me, the takeaway from this affair is that there is no one-size-fits-all solution to make statistics impossible to hack. Getting rid of p-values is appropriate sometimes, but not other times. Demanding large sample sizes is appropriate sometimes, but not other times. Not trusting silly conclusions like “chocolate causes weight loss” works sometimes but not other times. At the end of the day, you have to actually know what you’re doing. Also, try to read more than one study.
For me, the takeaway is that non-trivial, independent replication of results still stands as the gold standard for experimentation. P-hacking shows up clearly when an experiment is replicated; in my experience, p-values are only a starting point for further experiments.
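To make that concrete, here's a toy sketch (my own illustration, not code from any real study): generate pure noise, report the best of many measured outcomes, and then see what happens when an independent replication re-tests only that one outcome:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, n_outcomes = 20, 18  # small study, many measured outcomes

def run_study():
    # Null world: the "treatment" does nothing, so every outcome is noise.
    treat = rng.normal(size=(n, n_outcomes))
    ctrl = rng.normal(size=(n, n_outcomes))
    return np.array([stats.ttest_ind(treat[:, k], ctrl[:, k]).pvalue
                     for k in range(n_outcomes)])

original = run_study()
best = int(np.argmin(original))
print(f"original study, best of {n_outcomes} outcomes: p = {original[best]:.3f}")

# An honest replication tests ONLY the pre-registered outcome. Under the
# null its p-value is uniform on [0, 1], so the "effect" usually evaporates.
replication = run_study()
print(f"replication of outcome {best}: p = {replication[best]:.3f}")
```

With 18 outcomes, the chance of at least one p < 0.05 is about 1 - 0.95^18 ≈ 0.60, so a "significant" original finding is more likely than not even with zero real effect, while the replication catches it almost every time.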
Take the cold-fusion debacle: when independent groups attempted to replicate the results, they were unable to do so.
Of course, the other big takeaway is that people really don't understand statistics at a basic level, even the "scientific reporters".
> For me, the takeaway is that non-trivial, independent replication of results still stands as the gold standard for experimentation.
Agreed. Too bad funding agencies rarely, if ever, give you the money to do it :(
The takeaway here is that statistics is arguably one of the most nuanced quantitative fields out there. It's really easy to shoot yourself in the foot, particularly with p-values.
I think every statistical test has its place, but my personal favorite lambasting of p-value testing is Steiger and Fouladi's 1997 paper on non-centrality interval estimation [1].
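For anyone curious what that looks like in practice, here's a minimal sketch of the idea (my own toy example with made-up numbers, not code from the paper): instead of reporting a bare p-value, you invert the noncentral t distribution to get a confidence interval on the effect size:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import nct

def noncentrality_ci(t_obs, df, alpha=0.05):
    """CI for the noncentrality parameter of an observed t statistic,
    found by inverting the noncentral t CDF (which is decreasing in nc)."""
    lo = brentq(lambda nc: nct.cdf(t_obs, df, nc) - (1 - alpha / 2), -50, 50)
    hi = brentq(lambda nc: nct.cdf(t_obs, df, nc) - alpha / 2, -50, 50)
    return lo, hi

n, t_obs = 25, 2.3  # hypothetical one-sample study: n subjects, observed t
lo, hi = noncentrality_ci(t_obs, df=n - 1)
# For a one-sample t-test, Cohen's d = nc / sqrt(n), so this is an effect-size CI.
print(f"95% CI for d: ({lo / np.sqrt(n):.2f}, {hi / np.sqrt(n):.2f})")
```

The appeal, at least as I understood it, is that a significant p-value only tells you the interval excludes zero, while the interval itself shows whether the effect could be anywhere from trivial to large.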
As an aside, Steiger was my graduate statistics professor several years ago, and probably the primary reason I know this paper even exists. If you enjoy the harsh treatment of significance testing in the paper, just imagine hearing it straight from the horse's mouth during lecture :).
Another good indicator is that, in Germany, Bild and the Huffington Post reported on it. You generally want to avoid writing a story based on those two. Everybody knows that Bild consists of sex, hate, tits, and the weather forecast, as Die Ärzte put it in a song mocking people who get their information from that paper.
Even if I had the time to read several studies and the expertise to understand them, I couldn't afford it. Out of curiosity, I just went to the Journal of Nutrition (I know nothing about it; I picked that journal randomly) and tried to download one random issue. It was $273. Going to multiple original sources isn't practical for most of us.
I doubt that the typical mainstream journalist writing a popular story on a scientific subject could justify this expense to her boss either.
It is very often possible for journalists to get papers for free. In many cases it's easy to email the author, who will happily provide a copy within a short timeframe. Some archives provide free access to journalists, e.g. the Cochrane Library.
I'm all for open access, because I think everyone should have the right to see publicly funded scientific research. But from my experience as a journalist who's written a number of pieces on scientific work, it often isn't a big problem. I never had to pay for a journal article to do my work.
I'd hope that news outlets covering any kind of science pay for access to at least some major journals as a matter of course. The subscription prices are probably much lower per issue than the price to buy an issue individually.
Most journals also show you abstracts for free, which are overviews of what each paper says. If you find a paper that you want to read in full, email the author - most scientists are very happy to know that people are interested in their work, so they'll give out copies to anyone who asks.
This isn't to defend the paywalls - I think open access is important. But I don't think the paywalls are a valid excuse for journalists.
I think the prices for most of these journals individually are enormous - they are usually sold in bundles through individually negotiated, confidential pricing agreements that run into the hundreds of thousands of dollars. That's far cheaper per journal, but the bundle probably includes a bunch of journals the institution wouldn't normally be interested in, and a ton of garbage.
That's not to say that most papers can't be found with a little bit of effort, and I can't really imagine a situation where an establishment journalist emails authors about one of their papers and doesn't get it.
If you need regular access to the scientific literature, you have (or make) an arrangement with an institution; you don't pay per article or issue. It is a real problem for unaffiliated or freelance folks, though.
Even as a freelance person it's not a huge problem for me. It can be annoying, but if I'm doing real work rather than just looking up one article on a whim, it's not a showstopper. Mostly I just use the local university library. Non-affiliates can't check out books, but anyone can walk in and get on the wifi. And once you're on their wifi, you get whatever database access they've negotiated for their IP range. Not every university library allows this, but quite a lot do.
It's a serious problem even in academia. I have had to email authors of papers or go begging on reddit.com/r/scholar more than once, even when I had very expensive university subscriptions available to me.