
I believe this practice should be adopted in most research projects. However, putting on my adversarial hat, I see one challenge: scientists could still perform their experiments beforehand, with hypothesis, procedures, and results already in hand. At submission time, they would simply _stage_ the release of that information to give the impression that no post-hoc rationalization took place, when in reality it had. Under such a system this could be business as usual for scientists, unless you could control when they are able to run their experiments, which seems impractical.

The idea is excellent, but I believe it would be difficult to implement at the publication level.




For short experiments you could game the system, but for an experiment that requires two years of work, it’s unlikely that you’d hold your results for two years to maintain the illusion.

Also, you could tie the release of funds to the publication of the expected methods.


Global. Warming. Exponential. Curve.


This is a risk, yes. I don’t necessarily think it’s a big problem.

First off, you’re describing premeditated, wilfully fraudulent behaviour. I suspect a lot of the problems are either less deliberate (you react to new information by making changes to your hypothesis or protocol without stopping to think what that does to your experiment’s reliability), crimes of opportunity (“this is so close, there’s no harm in fudging it a little bit, right?”) or a combination of both. It’s ok to enact rules that fight pervasive small issues while doing nothing about rare malicious behaviour!

Even the malicious behaviour would become much harder, though. The strategy you described only works when the data collection stage is relatively short; if it takes a long time, the fraudster's timeline starts to look suspicious. And that type of behaviour could really harm you if caught.


It would be difficult in practice to maintain the secrecy needed to pull this off.

While scientists often keep what they're working on "mostly secret" until it's published, in practice plenty of their friends and colleagues have a pretty good idea of what they're doing and would be in a position to ask hard questions if someone tried the approach you describe.


Wait, there are groups that don’t do this already?

“We thank reviewer 3 for the insightful suggestion...” (ok now copy paste the results that we cut for submission)

Ha ha only serious. Otherwise you get “helpful suggestions” like “run another few phase III trials first”


That would make it an explicit cheat. Right now, not publishing negative results, tweaking the procedure description a little, and the like lie in a gray zone and can be socially acceptable in some labs to some extent. If the rules were explicitly defined, though, breaking them would be a straightforward lie. That would make such behaviour much harder to normalize even in limited circles; people would have to go to greater lengths to hide it, and it would happen less often.


Doing that would require researchers to overtly and explicitly lie, which would act as at least some disincentive.


If caught, the dishonesty would be truly obvious and damaging to one's reputation.



