We were doing test runs for an fMRI experiment that we planned to run with humans later. So, yes, all of the stimuli were presented to the salmon. It only felt a bit ridiculous at the time...
We were the inaugural article in what was to be an entire journal dedicated to surprising/odd results. The salmon paper went through a pretty solid peer review as part of publication in JSUR (the Journal of Serendipitous and Unexpected Results), so we felt good sending it there.
Our paper came out and got a lot of attention, which was good press for the journal. After that, the JSUR founders realized they didn't have much time available, and the journal folded a few years later. In the end, ours was the only paper it published.
Yeah, actually, we feel like it did serve as a solid illustration of why proper statistical correction is necessary. We had a lot of emails saying what a useful tool it was for lab meetings and classes.
We did a review of the literature as part of our paper. In 2008 something like 30% of papers in major journals used uncorrected stats. In 2012 it was under 10%. The field was certainly already moving in the right direction, but I think we managed to help things along.
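To make the multiple-comparisons point concrete, here is a minimal sketch (not the paper's actual analysis, and with a made-up voxel count): when you run thousands of independent tests on pure noise, an uncorrected p < 0.05 threshold will flag roughly 5% of them as "active", while a family-wise correction such as Bonferroni brings that down to near zero.

```python
# Illustrative sketch: why uncorrected voxelwise tests produce false
# positives. Under the null hypothesis, p-values are uniform on [0, 1],
# so we simulate them directly with the standard library.
import random

random.seed(42)
n_voxels = 10_000  # hypothetical number of voxels, chosen for illustration

# One null-hypothesis p-value per "voxel".
p_values = [random.random() for _ in range(n_voxels)]

# Uncorrected threshold: ~5% of null voxels fire by chance alone.
uncorrected = sum(p < 0.05 for p in p_values)

# Bonferroni correction: divide the threshold by the number of tests,
# controlling the family-wise error rate.
bonferroni = sum(p < 0.05 / n_voxels for p in p_values)

print(f"Uncorrected 'active' voxels: {uncorrected}")
print(f"Bonferroni 'active' voxels:  {bonferroni}")
```

With 10,000 null tests you should see on the order of 500 uncorrected false positives and essentially none after correction, which is the same effect the salmon scan demonstrated on real scanner noise.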
My grad school advisor and I were always scanning interesting things when we had sequence testing to do; we scanned pumpkins and other objects as well. We scanned the salmon because we thought it would look interesting on a high-resolution T1 scan. It looked really great in the end:
https://www.wired.com/images_blogs/wiredscience/2009/09/fmri...
Sorry man - the karma train is probably done. We wrote a few other papers on fMRI reliability, but nothing with the popularity that the salmon paper had.
There are a huge number of fMRI papers that have significant results, statistically and in terms of impact. I have been out of the field for about five years now, so I am not up-to-date with the latest work. I am sure there is some amazing stuff going on.
Glad to see this on Hacker News so many years after the poster first went up!