> But none of these is a valid excuse to not publish your findings in the scientific record
Are journals even going to publish most no-result studies? (I.e., a result of no significant correlation.) Journals pick and choose the best articles to peer-review and publish out of all submissions.
It feels like there ought to be some other system for tracking no-result and unpublished studies, something easy and low-friction -- not the trouble of an entire paper, but maybe something more like the length of an abstract?
Then if you think about running a study, you can run a search to see if others have already done something similar and found nothing... and reach out to them personally if you need more info.
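A minimal sketch of what such an abstract-length registry might look like, assuming nothing beyond the comment above (all field and function names here are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class NullResultEntry:
    """Abstract-length record of a study that found no significant effect."""
    title: str
    authors: list           # contact points for follow-up questions
    hypothesis: str         # what was tested
    method: str             # one-line description of the design
    outcome: str            # e.g. "no significant correlation (n = 120)"
    keywords: set = field(default_factory=set)

def search(registry, query_keywords):
    """Return entries sharing at least one keyword with the query."""
    query = {k.lower() for k in query_keywords}
    return [e for e in registry if query & {k.lower() for k in e.keywords}]

# Hypothetical example entry
registry = [
    NullResultEntry(
        title="Sleep duration and recall (hypothetical)",
        authors=["author@example.org"],
        hypothesis="More sleep improves next-day recall",
        method="within-subject survey",
        outcome="no significant correlation",
        keywords={"sleep", "memory"},
    )
]
```

Before running a new study, you would query `search(registry, ["sleep"])` and email the listed authors of any hits.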
Journals are only one part of the equation. Before you can publish you need someone who is going to collect all the data, do a comprehensive analysis, then write up the results.
That's not a trivial amount of work. If you're a professor trying to establish your lab, would you spend a few months of your post-docs' time writing up experiments that didn't work, or have them work on new ideas that might?
Indeed. I think people don't understand that precariousness doesn't just apply to the private sector. If you want to advance in your career, you have to be focused on things that aren't complete dead-ends.
> Are journals even going to publish most no-result studies? (I.e. a result of no significant correlation found.)
Yes they are. Not all journals of course, but some are very explicit that they will publish based on quality and not based on outcome, e.g. PLOS ONE. And you always have preprint servers as a backup publishing option.
There are many reasons for publication bias. That you can't find a place to publish your findings is not one of them.
The Center for Open Science actually has a web framework built around this concept, which they call pre-registration. Check out their website (cos.io) and the preregistration site (osf.io).
Speaking of which, why is there no central database of all articles with metadata? A "journal" would simply consist of the articles tagged as peer-reviewed by a particular set of people.
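As a sketch of that model (every identifier and name below is hypothetical): one central article store, with a "journal" reduced to a named group of reviewers plus the set of article IDs they have endorsed as peer-reviewed.

```python
# Central store: article ID -> metadata (one record per article, journal-agnostic).
# The DOIs below are made-up placeholders.
articles = {
    "doi:10.1234/abc": {"title": "Study A", "year": 2020},
    "doi:10.1234/def": {"title": "Study B", "year": 2021},
}

# A "journal" is just a set of reviewers and the article IDs they endorse.
journals = {
    "Journal of Hypothetical Results": {
        "reviewers": {"alice", "bob"},
        "endorsed": {"doi:10.1234/abc"},
    },
}

def issue(journal_name):
    """Materialize a journal as the metadata of its endorsed articles."""
    ids = journals[journal_name]["endorsed"]
    return {i: articles[i] for i in ids}
```

The point of the design is that publication and endorsement are decoupled: every article lives in the central store regardless of outcome, and any group of reviewers can layer their own "journal" on top by tagging.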
It depends on the country, but an approximation for US biomedical research is https://pubmed.ncbi.nlm.nih.gov/, although that only covers journal-published information.
Oops. Yes. What I meant to say was that US granting agencies often require published results to be catalogued through PubMed. Not that PubMed only catalogues US-based research.
Exactly. I could see a PI saying "well, you'll finish your PhD in 5 years if you're lucky and most of your experiments are successful. In the meantime, could I get you to spend 6 months writing up all these unsuccessful experiments? You'll get zero credit for it and it will delay your PhD. Thanks!"