>"Because positive results are being published as fast as possible with little checking on them."
Why do you think this is so? It's because they think the stats are telling them the probability their theory is true: a significant p-value means the result is "real", and so on.
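To make that concrete, here's a back-of-the-envelope sketch (Python, with numbers that are entirely made-up assumptions: 10% of tested hypotheses actually true, 80% power, alpha = 0.05) of how far "significant" can be from "probably real":

```python
# Back-of-the-envelope: what fraction of "significant" results reflect real effects?
# All numbers below are illustrative assumptions, not measurements.
prior_true = 0.10   # assumed share of tested hypotheses that are actually true
power      = 0.80   # assumed chance a real effect comes out significant
alpha      = 0.05   # chance a null effect comes out significant anyway

true_positives  = prior_true * power          # real effects flagged as significant
false_positives = (1 - prior_true) * alpha    # null effects flagged as significant

p_real = true_positives / (true_positives + false_positives)
print(f"P(effect is real | p < 0.05) ~= {p_real:.2f}")  # ~0.64, not 0.95
```

Under those assumed numbers, roughly a third of the "significant" findings are noise, which is exactly why unchecked positive results pile up.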
Replicating a study is not something that is likely to get you published, and putting in twice the work to run an experiment twice is unlikely to give you twice the 'return' on your time investment (assuming the funder even supports spending twice as much).
Why, though? Presumably the replication would be the interesting part, and both teams' work would be equally valuable in showing the same result.
Innovation is overrated; maybe we should just pay scientists more to do inglorious but immensely valuable work. I was certainly driven away from the lab sciences because I had no desire to do the work that's in demand: pulling results out of your ass to get more funding.
Replications are likely to not get cited as much, which would lower a journal's impact factor, reducing the incentive for the journal to publish them. It's also worse for the scientist, who puts in the work without getting a high-"impact" paper out of it, and since scientists largely get to choose what they work on, less "fun" work tends to get done less. The funder, meanwhile, has to choose between funding new work and replicating previous work.
Something I'd be very interested in is paying independent labs to replicate studies. This removes the incentive issues surrounding the journals and researchers, leaving the funding issue.
Why would you need to cite a paper that confirms a result? If several people have confirmed it, do you need to cite them all every time?
If that were the case you'd have the opposite problem: people would be hugely incentivised to reproduce highly cited papers again and again, when what you want is to reproduce papers with low levels of confirmation.
Yes, we want to know what observations we can "hang our hats on", so that we can come up with actual theories (not vague crap like "this drug makes that disease happen less"). The more replications published the better, until it's being done in undergrad and high-school classrooms.
You can't rely on anything in the current half-assed, no-replication environment. In many areas it is literally not worth coming up with a mathematical model to explain the data because it is all so questionable.
I'm not really sure how that addresses the issues raised.
You're saying it's better overall if more replications are published. I'm talking about the problems that are stopping us from getting to that point. The current incentives do not align well with the desire to have more replications done, and simple changes could easily backfire.
> (not vague crap like "this drug makes that disease happen less")
I simply do not agree that if you think this and find some results that point in that direction, you should not publish. I see absolutely no reason to save electrons to improve some average quality of papers; I'd much rather work isn't redone repeatedly. Perhaps publishing something vague with some backing (e.g. we think X does Y, the data is at least consistent with this, and we can't think of what else makes sense) gives enough for someone else to build on and do a more rigorous investigation.
In my initial post I distinguish between two steps. The first is data. If you collected some data and can describe the methods well enough for others to replicate it then go ahead and publish it. There is no need to start theorizing about things like "the drug caused the difference" vs "there was a confounder that caused the difference".
Explaining the observations is a separate step from coming up with a reliable way to generate a pattern in the data.
No one gets particularly large amounts of glory for confirmatory research, because if it's successful then the first paper is still "the first paper to describe X".
I don't understand what studies you are referring to. Each additional paper should be improving whatever parameter estimates are being made; it makes no sense to only look at one result and ignore all the others.
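What I have in mind is something like the standard inverse-variance pooling used in fixed-effect meta-analysis; a minimal sketch with made-up estimates and standard errors:

```python
import math

# Minimal fixed-effect meta-analysis sketch: pool estimates from several papers
# by inverse-variance weighting. Numbers below are made up for illustration.
estimates  = [0.42, 0.35, 0.50]   # effect estimate reported by each paper
std_errors = [0.10, 0.15, 0.08]   # standard error of each estimate

weights = [1 / se**2 for se in std_errors]                        # precision weights
pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))                           # SE of pooled estimate

print(f"pooled estimate = {pooled:.3f} +/- {pooled_se:.3f}")
# Each extra study adds weight and shrinks the pooled standard error.
```

Every additional study tightens the pooled estimate, so ignoring all but the first result throws information away.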
While they'll all get swept up in meta-analyses, systematic reviews, etc., a huge portion of a paper's citation count can come from narrative sections, where that first paper will reap a lot of citations in "First described by...", "As established in...", etc.
The Nobel Prize goes to the people who discovered something. Not the people who confirmed that they were right.
Not to mention that if your study is irreproducible because you made a mistake in experimental design or failed to control for some variable, running your experiments twice may not necessarily catch the issue.
Having someone else run the experiments is also required to get past something we have to deal with a lot in the tech world: the "it works on my machine" result.
You may have run the experiment twice, but can anyone else? Does your setup depend on something not specified in your paper? In the tech world, how many times has the setup documentation you wrote turned out to be missing some detail when a new starter actually tries it?
No setup will ever be perfect, but we can look at reducing common points of error.
>"Because positive results are being published as fast as possible with little checking on them."
Why do you think this is so? It is because they think the stats are telling them the probability their theory is true, etc. A significant p-value means the result is "real", etc.