Hacker News
Risk of false positives in fMRI of post-mortem Atlantic salmon (2010) [pdf] (dropbox.com)
113 points by zitterbewegung on Nov 1, 2017 | 41 comments



I was doing fMRI work around the time this paper was published. It astonished me that people would simply set an uncorrected voxel-level threshold and call it a day. No FWE correction, no cluster threshold - just a p < 0.001 uncorrected threshold. It was sad that this paper needed to be published to get researchers to start paying attention to that.

I'll be honest - when the paper was published I was thinking "no shit - why do we need a paper to tell us what we all learned in stats 101 about multiple comparisons??" And then I realized just how many fMRI papers used uncorrected thresholds.

Very similar feeling when the "Voodoo Correlations" paper came out. Except I was admittedly guilty of having presented correlation coefficients from clusters that had already been identified using thresholding. So that paper really did make me take a closer look at some of my figures/conclusions.


Let us be a little bit fair to the researchers who adopted the p=.001 or p=.0001 uncorrected approaches. Their approach wasn't completely unreasoned, and was even justifiable at one time given available methods.

There were mainly two approaches to multiple-comparison correction: Bonferroni and setting an uncorrected threshold. People here might say, well, yeah, just use Bonferroni.

However, Bonferroni is really only appropriate when the comparisons are independent. Adjacent voxels (3D pixels) are highly dependent, and activity across the brain is generally correlated. This dependency makes Bonferroni correction (very) inappropriately conservative. Given the average dependence between voxels, some researchers estimated that the effective number of independent comparisons might be on the order of hundreds to a few thousand. In practice, researchers corrected with Bonferroni and either found a really strong effect or fell back to an uncorrected threshold. Some reported results both ways, and readers interpreted them accordingly: Bonferroni = reliable, uncorrected = provisional.
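The tension here is easy to see in a quick simulation (a toy sketch, not from the thread; the voxel count and thresholds are illustrative). On pure-noise data, an uncorrected p < .001 threshold still passes dozens of "active" voxels, while Bonferroni passes essentially none - which is exactly why it feels so punishingly conservative when voxels are correlated and the effective number of independent tests is far smaller than the voxel count:

```python
from statistics import NormalDist
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 50_000                      # rough whole-brain voxel count
z = rng.standard_normal(n_voxels)      # a pure-noise "brain": no real signal

nd = NormalDist()
z_uncorrected = nd.inv_cdf(1 - 0.001)            # p < .001, one-sided (~3.09)
z_bonferroni = nd.inv_cdf(1 - 0.05 / n_voxels)   # p < .05/50000 (~4.75)

fp_uncorrected = int((z > z_uncorrected).sum())  # ~50 false positives expected
fp_bonferroni = int((z > z_bonferroni).sum())    # ~0.05 expected, usually zero

print(fp_uncorrected, fp_bonferroni)
```

With independent voxels Bonferroni is doing exactly its job; the complaint in the thread is that real voxels are not independent, so the same cutoff ends up guarding against far more "tests" than actually exist.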

The contribution of the salmon study and other research papers is that they truly demonstrated that the typical uncorrected thresholds in use were insufficient to control false positives.


You're right. I definitely don't mean to sound like I was an enlightened graduate student. Nothing ever passed FWE using Bonferroni, so we almost always resorted to uncorrected p-values with cluster thresholding, with the cluster and voxel thresholds set using AlphaSim (which estimates the probability of a cluster of a given size arising in a random dataset with the same smoothness as your actual images).
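The Monte Carlo idea behind that tool can be sketched in a toy 1-D analogue (this is illustrative only, not AFNI's actual AlphaSim implementation): simulate smooth null data, threshold it at the voxel level, record the largest cluster that survives, and pick a cluster-extent cutoff that chance exceeds in only 5% of null maps:

```python
import numpy as np

rng = np.random.default_rng(1)

def max_run(mask):
    """Length of the longest run of True values (a 1-D 'cluster' size)."""
    best = cur = 0
    for v in mask:
        cur = cur + 1 if v else 0
        best = max(best, cur)
    return best

def smooth(x, width=5):
    """Crude boxcar smoothing to mimic spatial correlation between voxels."""
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="same")

n_voxels, n_sims = 5_000, 200
z_thresh = 3.09                        # voxel-level p < .001, one-sided
null_max_clusters = []
for _ in range(n_sims):
    noise = smooth(rng.standard_normal(n_voxels))
    noise /= noise.std()               # re-normalize after smoothing
    null_max_clusters.append(max_run(noise > z_thresh))

# cluster-extent threshold: a size exceeded by chance in only ~5% of null maps
cluster_thresh = int(np.percentile(null_max_clusters, 95)) + 1
print(cluster_thresh)
```

Any real cluster at least that large is then unlikely to be a pure noise artifact at the chosen smoothness - which is also why the accuracy of the smoothness estimate matters so much, as the later PNAS critique pointed out.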

If I recall correctly, all the major neuroimaging packages (AFNI, SPM, FSL) had options for cluster-size thresholding at the time, along with tools like AlphaSim to estimate cluster-level FDR (though I think that ultimately had issues with its algorithm, discovered only a few years later...).

I just remember thinking that if the salmon paper had a reasonable cluster-threshold, none of the spurious voxels would have been considered in the final analysis.

Granted, several years later, a paper came out suggesting that method would inflate false positives (http://www.pnas.org/content/113/28/7900.full).

I imagine the neuroimaging field, particularly the stats part, has changed rapidly since I left.


Sorry, I was writing to HN more than responding to you particularly. It is sometimes easy for non-scientists to underestimate scientists and think of them as fools, when in fact the problems are frequently hard.

I believe you are correct that around the time of the salmon poster there were other methods available for multiple-comparison correction. The work in the early-to-mid 2000s was much more "wild west", however.

Indeed, cluster correction may have its own issues, re your link. I think a good approach these days is to eschew whole-brain approaches in favor of theory-driven, a priori ROIs, then supplement those analyses with an exploratory whole-brain analysis.


I was unaware of the salmon paper, but I remember a bit later being very puzzled by a study about comatose/vegetative patients that included brain-dead subjects as controls (and of course healthy subjects as well).

I suppose it was meant to convince statistically naive readers that the dead salmon thing didn't apply to their methodology.


Hey all. I’m the author of the poster and subsequent paper. Happy to answer any questions that you might have.

Glad to see this on Hacker News so many years after the poster first went up!


I presume someone else has already thought to share with you Zach Weinersmith's interpretation of your results: http://www.smbc-comics.com/comic/fmri


Yeah! One of the other authors on the paper sent it my way right after it was published. Freaking amazing - I love it.


Did you actually hold up the photos for the dead fish to see? What did it feel like, holding up photos of people for a dead fish?


We were doing testing for an fMRI experiment that we planned to run with humans later. So, yes, all of the stimuli were presented to the salmon. It only felt a bit ridiculous at the time...


I would like to see a study of which regions of the brain are active in scientists forced to show photos to a dead fish.



Is JSUR a recognized academic journal? How have you been doing for citations of this work?


We were the inaugural article in what was to be an entire journal dedicated to surprising/odd results. The salmon paper went through a pretty solid peer review as part of publication in JSUR, so we felt good sending it there.

Our paper came out and got a lot of attention, which was good press for the journal. After that the JSUR founders found that they didn't have much time available and the journal folded a few years later. In the end, we were the only paper it published.


All the more special for your work to be the whole history of a journal I guess.

But it's a shame they couldn't continue, I would have loved to have something filling that niche.


Did it help? Did people understand the message you were sending and take corrective action?


Yeah, actually, we feel like it did serve as a solid illustration as to why proper statistical correction is necessary. We had a lot of emails saying what a useful tool it was for lab meetings and classes.

We did a review of the literature as part of our paper. In 2008 something like 30% of papers in major journals used uncorrected stats. In 2012 it was under 10%. The field was certainly already moving in the right direction, but I think we managed to help things along.


Thank you! This was one of my favorite papers while I was in grad school. I still talk about this all the time when I review papers.


Glad that you found it useful and/or humorous!


Is there a reason why you were scanning salmon? Are they useful as phantoms for other sequences?


My grad school advisor and I were always scanning interesting things when we had sequence testing to do. We scanned other objects like pumpkins and such as well. We scanned the salmon since we thought it would look interesting on a high resolution T1 scan. It looked really great in the end: https://www.wired.com/images_blogs/wiredscience/2009/09/fmri...


Have you written anything else that's even half as funny?


I would love to say that I have, but I fear that I may have peaked in 2010.


What did you do with the Salmon afterward?


My wife and I actually ate it for dinner that night!


Do you have any other papers that would give me karma... I mean, that are interesting?

Also, which FMRI papers have real results that you think are significant?


Josh Carp also wrote a paper on methodological misbehavior in neuroimaging a while back:

https://www.ncbi.nlm.nih.gov/pubmed/22796459


Sorry man - the karma train is probably done. We wrote a few other papers on fMRI reliability, but nothing with the popularity that the salmon paper had.

There are a huge number of fMRI papers that have significant results, statistically and in terms of impact. I have been out of the field for about five years now, so I am not up-to-date with the latest work. I am sure there is some amazing stuff going on.


"Task: The task administered to the salmon involved completing an open-ended mentalizing task. The salmon was shown a series of photographs depicting human individuals in social situations with a specified emotional valence. The salmon was asked to determine what emotion the individual in the photo must have been experiencing."



Slight spoiler, but if you liked this movie watch

Derren Brown - Pushed to the Edge. (It's very educational and fun)


'This is a true story' - first scene of Fargo.

(it's not a true story)


"A body is found in the frozen North Dakota woods. The cops say the dead Japanese woman was looking for the $1m she saw buried in the film Fargo. But the story didn't end there."

https://www.theguardian.com/culture/2003/jun/06/artsfeatures...


This is absolutely hilarious!


This is and remains one of my all time favourite scientific research papers.

Understandable, obvious, and not just relevant: it made a meaningful contribution to the field by demonstrating that the existing techniques were not sufficient to fully eliminate spurious data from the noise floor inherent in fMRI machine data.


I assume this was posted in response to the recent fMRI study from the front page: https://news.ycombinator.com/item?id=15597068


Yes you are correct :) it reminded me of that paper.


Ahh such a classic :)

tl;dr: correcting for multiple comparisons is important.


The linked Dropbox page appears to be password-protected. Here's the original poster: http://prefrontal.org/files/posters/Bennett-Salmon-2009.pdf


I think it was just asking you to register for Dropbox. If you close that popup the PDF is already open behind it.

Here's a link from my server that should just work: http://prefrontal.org/files/papers/Bennett-Salmon-2010.pdf


Huh! When I click "Cancel", the PDF disappears and then there's just a centered box saying "This file is password-protected. Bennett-Salmon-2010.pdf · 0.94 MB". However, the Download link works just fine and lets me download and read the PDF. Dropbox is weird.



