> There's a fair bit of junky data and quite a few analysis methods that cannot compare data accurately between magnets, position wrt isocentre, or even processing software revisions. But your claim is a lot bolder than that.
What is the prevalence of each of those types of errors? Since AFAIK many of those errors occur in 30%+ of papers, unless you're assuming the errors are almost perfectly correlated (i.e., close to 0% independence), my claim doesn't seem especially bold...
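To make the arithmetic concrete, here's a quick sketch of how individual error rates compound under an independence assumption. The 30% prevalences are my rough estimates, not measured values:

```python
# Back-of-the-envelope: if each of four error types independently affects
# 30% of papers, what fraction of papers has at least one error?

error_prevalences = [0.30, 0.30, 0.30, 0.30]  # hypothetical per-error rates

p_clean = 1.0
for p in error_prevalences:
    p_clean *= 1.0 - p  # P(paper avoids this error), assuming independence

print(f"P(at least one error) = {1.0 - p_clean:.2f}")  # prints 0.76
```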
The methods I am familiar with (mostly volumetric measurement, registration, and automated segmentation) were either robust or had well-characterized limitations.
I take some offense to blanketing the entire field of MRI based on a couple of articles pointing out that some (admittedly a lot of) fMRI experiments are statistically unsound.
How would you estimate the prevalence of inaccurate research in the field, if not by taking the estimated prevalence of each (known) source of error and combining them, using some estimate of the independence between errors? (And of course including general sources of error that affect all scientific research and aren't specific to any particular field.)
And as far as independence goes, I haven't seen any research suggesting these errors are anything less than 100% independent; if there is research suggesting that these errors (or errors in scientific research generally) are correlated, I'd love to see it.
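To show why the independence estimate matters so much, here's a rough sketch. The prevalences are hypothetical, and linearly interpolating between the fully-correlated and fully-independent bounds is a simplification of mine, not an established estimator:

```python
from math import prod

def combined_error_rate(prevalences, independence=1.0):
    """Estimate P(paper has at least one error) for a given degree of independence.

    independence=1.0 -> errors fully independent
    independence=0.0 -> errors fully correlated (co-occur in the same papers)
    """
    fully_independent = 1.0 - prod(1.0 - p for p in prevalences)
    # If errors are fully nested, the combined rate is just the largest single prevalence.
    fully_correlated = max(prevalences)
    return fully_correlated + independence * (fully_independent - fully_correlated)

rates = [0.30, 0.30, 0.30]  # hypothetical prevalences of three error types
for w in (0.0, 0.5, 1.0):
    print(f"independence={w:.1f}: combined rate = {combined_error_rate(rates, w):.2f}")
```

Even at 50% independence the combined rate (~0.48) is well above any single error's prevalence, which is why the correlation question is the crux here.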