After 350 years of academic journals it’s time to shake things up (theguardian.com)
91 points by Hooke on May 15, 2015 | 36 comments



"Indeed it has been said that Democ­racy is the worst form of Gov­ern­ment except for all those other forms that have been tried from time to time."

-Winston Churchill

-----------

While I broadly agree that the dissemination of academic research can and should improve, it is not clear to me what will amount to a superior (while still robust) system. Nor does the article clearly articulate a viable alternative.

The mathematical sciences have provided remarkable utility for the last 350 years in part because of the robust feedback loop of 'hypothesis-experiment-disseminate'. I believe this 'science is broken' hype is largely just that. No one throws out a codebase before a superior (and regression tested!) alternative is ready to be deployed.

My 2 cents: 1) Let's push for publicly funded research to be made publicly available, regardless of what journal it was published in.

2) Let's think about incremental changes to peer review that can improve dialogue and speed up review times. Instead of waiting 6 months for a late paper reviewer to drop a series of comments in my lap, why not a wiki or github-like editing and annotating environment where we can (anonymously) generate comments and a fruitful dialectic? Then, let's include the comments (again, anonymously) with the paper. Show how the sausage is made-- scientists have little to hide.

FWIW, I am a practicing scientist in the field of computational engineering.


It feels like we've lumped together a bunch of different use cases and metrics into the current publication system.

1) Authors need peer review and feedback to make their case stronger and improve the communication.

2) Researchers need a way of filtering the publications so they can find the gems in any field or ones very useful to their own work.

3) Then there's the need to evaluate how well particular people are doing in their research, which boils down to metrics including the prestige of the journal (immediate) and the number of citations it'll eventually accumulate (long term). I'm not aware of any distinction between citations of a work in an introduction or review and citations of a work that provided some essential part of the current method. These are pretty big differences, since the former is more strongly influenced by reputation while the latter reflects its utility.

Here's how one alternative system would work:

Start with a github-like editing and annotation system for soliciting and coordinating feedback. This also allows readers and possible evaluation systems to see the types of contributions people make.

References would get more sophisticated with categories like "background material", "thought provoking previous work", and "indispensable to this work."

Publication initially goes to general repositories like arXiv where anyone can read it. Have an amped-up Google Scholar search where you can easily bookmark works of interest, tag them, and share them with colleagues. Different HN- or reddit-like systems would spring up to bring better work to the attention of more researchers.

"Journals" have a panel of editors who select a subset of work to display under their banner. This is not mutually exclusive so particularly awesome work would be featured in Nature, Science, and more specialized journals in this alternate universe.

There would be a diversity of impact metrics including ones based purely on "my work would be impossible if not for this work X". Affiliation between authors could be factored into some metrics so greater impact is given to referenced work with more degrees of separation from the author's current and previous institutions.
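To make that concrete, here's a rough sketch of how tagged citations might feed such a metric. This is pure illustration -- the category names are the ones above, but the weights, field names, and affiliation discount are made up and don't correspond to any existing system:

    # Toy impact metric over citations tagged with a category and an
    # affiliation distance (degrees of separation between citing and
    # cited authors' institutions; 0 = same lab). All values illustrative.
    CATEGORY_WEIGHTS = {
        "background material": 0.2,
        "thought provoking previous work": 0.5,
        "indispensable to this work": 1.0,
    }

    def impact_score(citations):
        score = 0.0
        for c in citations:
            weight = CATEGORY_WEIGHTS.get(c["category"], 0.1)
            # Discount citations from close collaborators, reward distant ones.
            distance_factor = min(c.get("affiliation_distance", 0), 3) / 3
            score += weight * (0.5 + 0.5 * distance_factor)
        return score

    print(impact_score([
        {"category": "indispensable to this work", "affiliation_distance": 3},
        {"category": "background material", "affiliation_distance": 0},
    ]))  # -> 1.1

The point is just that once citations carry a category tag, "indispensable" citations from unaffiliated groups can count for much more than courtesy citations from down the hall.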


I like the concept of semantic citations, e.g. "cited for methodX", but it's hard to make a general enough system out of this that doesn't disrupt the flow while reading. And in a way, we already do it, except that it's not being done in a consistent fashion that enables datamining of it. Getting publishers to standardize some markup for this would be cool, but I don't think the outcome justifies the added effort for everyone.

As for several journals publishing the same thing: there are many problems with that idea, not the least that Nature/Science doesn't want your paper with an 11 page methods section, they want something much closer to a popsci article. So you have to write several. This is already happening today: when someone gets to publish their supercool finding in Nature they also write one or more longer, detailed (i.e. proper) papers that are sent to more specific journals.


"I like the concept of semantic citations, e.g. "cited for methodX", but it's hard to make a general enough system out of this that doesn't disrupt the flow while reading."

The different types of citations would be different markup tags and the display to the reader wouldn't be any different than the current reference notation.

Regarding the difference in ideal paper format between journals, I think there are already multiple detail levels prepared. Our Nature paper, for example, has a pretty significant supplemental information section. The standard could be three levels of detail: overview, standard, and detailed.


If the reader can't see the semantic markup, then what's the main purpose, and how does this purpose justify the additional effort required, both to implement this system in a standardized way across journals, and by authors who have to tag their citations with this semantic markup?


It's for the algorithms that process papers to produce metrics. Display systems can choose to ignore it and show them all as [1] or might use color, hover message, or whatever to show the distinction in a way that doesn't alter the standard reading experience if that's desired.


Then I think there's no way that justifies the additional effort required both from people writing papers and for journals to implement this.


1) A reasonable compromise IMO is for journals to have a very temporary lock on new publications. Public access after, say, 1 to 2 months seems to meet the vast majority of "Open Science" requirements without much change required.

2) IMO a much more stringent form of peer review is necessary. P<0.05 is only useful in very limited situations. I would argue that unless you publish your exact experimental design beforehand, you would need to drop that to P<0.01 to cover for a lot of marginal research out there. The old "do 10 studies and you can likely publish one of them" trick is harder when you need to do 50 or fudge the numbers harder.


The peer review before an article is published is just a sanity check, like Fizz-Buzz during an interview.

The more valuable check is what happens after a paper is published, when your peers read the article and discuss it and critique it and try to replicate it and publish responses to it. What makes peer-reviewed journals valuable to science is that lots of practitioners read them; the value of peer review is that it makes the journals more interesting for practitioners to read because it doesn't waste their time on obvious crap.

Publishing a paper is not a stamp of truthiness. It's a communication to the community that something is interesting. Maybe it's right, maybe it's wrong, time will tell. Truth emerges from a large body of research and publications from a large number of researchers. Any of the individual papers along the way may be missteps; it is only in hindsight that a particular paper can be picked out as seminal or definitive.

That's OK.


It's also a stamp of relevance. Like the front page of HN vs new.


I'm not sure P<.01 helps overall, though I wouldn't cry if it happened.

1) small p-values require larger n's, which will price some real studies out of happening; 2) it only makes dishonest research marginally harder and a little more obvious; 3) a stricter p-value may increase the misconceptions about how important p-values are; we already have enough B.S. flying around because "P<0.05, it's scientific now!".
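A rough illustration of point 1, assuming a two-sample t-test with a medium effect size and 80% power (these are example values I picked, not anything from the studies being discussed):

    # Tightening alpha from 0.05 to 0.01 at fixed effect size and power
    # pushes up the required sample size. Uses statsmodels' power solver.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    for alpha in (0.05, 0.01):
        n = analysis.solve_power(effect_size=0.5, alpha=alpha, power=0.8)
        print(f"alpha={alpha}: ~{n:.0f} subjects per group")

That comes out to roughly 64 subjects per group at P<0.05 versus roughly 95 at P<0.01, i.e. on the order of 50% more subjects for the stricter threshold.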

I haven't totally taken a side but I find the idea of dropping p-values altogether to be interesting as well: https://whyevolutionistrue.wordpress.com/2015/03/05/psycholo...


> 2) IMO a much more stringent form of peer review is necessary.

That disregards the cost of review. Reviewing and repeating experiments uses resources that could be spent elsewhere. On the long tail of papers that are not high impact even if correct, that's probably not a good allocation of effort.

What I think would be better is that as a paper crawls up the impact curve (probably measured by citations), then additional resources should be spent validating and repeating it.

In theory, that's how the system already works since everyone wants to build on top of or refute famous papers. In practice, I don't know if it does.


Repeating experiments is not a wasted effort. "More than half of biomedical findings cannot be reproduced – we urgently need a way to ensure that discoveries are properly checked" - Elizabeth Iorns is co-founder and CEO of Science Exchange.

Science which can't be repeated is problematic.

[1] http://www.newscientist.com/article/mg21528826.000-is-medica...


I am not suggesting people necessarily need to spend more time doing the review, just that they accept less BS.

In many cases the bar is far too low.


1 - Publicly available archival-quality publishing for all science done in the US would cost at least millions of dollars a year to administer. The idea is nice, but is just an idea without the funding to make it happen.

2 - Peer review problems could probably be addressed on a per-venue basis. E.g., in many areas of CS the max turnaround time is a few months. And there's nothing stopping you from publishing peer reviewers' comments. But I am not convinced that enforcing this universally is a good idea, for either the author or the reviewer.


> Publicly available archival-quality publishing for all science done in the US would cost at least millions a year to administer. The idea is nice, but is just an idea without the funding to make it happen.

This is what government is for and a million a year is a rounding error to the government.


Agreed. But I think this is important to point out. It's not as if scientists are lazy or greedy -- they spend the most time and get no benefit out of publications in journals in the status quo. There just aren't compelling alternatives to the traditional model in many disciplines, and most of the time it's due to want of desire rather than want of money.


Someone downvoted this? Archival-grade hosting is pretty expensive -- it means the cost of publishing a paper is the cost of serving that paper (and keeping backups, maintaining the software infrastructure, etc) for... ever. Typically both for- and non-profits place this cost in the three-to-four figures range. It's not as simple as "slap it up on arXiv" -- arXiv would break in a second if every scientist started using it as the de facto platform for every paper.


There's an interesting asymmetry in peer review. If I invite an outside scientist to review my work once it is complete, that's external peer review, and it confirms that my work is sound. If I invite the outside scientist to collaborate on the work from the beginning, so they can catch mistakes before they ruin the work, that's not peer review -- I'd have to go out for peer review at the end.

Why the focus on catching errors when they've already been committed?

Some journals (https://osf.io/8mpji/wiki/home/) have started "registered reports", where you send your methodology out for peer review before running the experiment, and the journal commits to publishing the results if the methodology is sound. This seems vastly more reasonable, though obviously it's only possible if you know exactly what you want to do in advance.

So why not switch toward reviewing work earlier, through registered reports or collaboration, and end the artificial bottleneck at publication? It would prevent wasted time on flawed experiments and remove some of the bias against negative results.

(Of course, the other problem is that peer reviewers aren't very good at detecting errors -- even serious ones. Medicine has moved towards standard checklists for common types of research, requiring papers to report every important methodological detail so review can be more thorough. But I don't think you'll improve the overall quality of peer review without changing the current "fine, I'll review it in between writing three grant proposals and grading 150 exams" voluntary review culture.)


When collaborating, people become attached to their work. This inevitably clouds their judgment when it comes time to review, especially given that financial ties are often involved.

External peer review reduces those problems.


Wow, that Registered Reports thing sounds like a fabulous idea. I think it could be an interesting solution to the problem that it's really hard to publish things that don't work. Several times I've read papers thinking "hmm, this might be worth trying", and then talking to the authors you hear "no, no, it's a crappy method, try that other thing instead".


I just submitted my first peer review for a journal and it was obvious to me within the first page who had written it. Not necessarily a condemnation of the process (I think it remains the best we have available), but now that I see behind the curtain a bit, it's striking to me how peer review is at best a semi-blind process. (Likewise, looking back at peer reviewers' comments on my own work, I can now guess who most of them were with the benefit of a few more years' experience in the field, mainly because everyone tends to cite their friends.)

The biggest current problem seems to me to be the ridiculous time lag between submission and acceptance or rejection (up to a year in many cases). If there were more of a financial incentive, I can imagine someone making a peer review platform that circumvents having to add track changes comments on Word and manage everything via emails (which is the norm in my field). The thing I'm imagining would be both a document sharing and editing platform and a CMS that allows journal editors to manage scheduling and assigning tasks, sort of like a customized version of Asana wedded with Google Docs. Anyone know if something like this already exists? I'm coming from a social sciences and humanities background so I don't know what the norm is in STEM fields.


One thing I noticed is that there's a lit review at the beginning of most research papers. That's a lot of work just to get people up to speed.

Though not a complete solution, I envision a github-like environment for the research, and links to wikis and other repos for the lit.

People could openly submit "bugs" with methodology or conflicts between studies (as a replacement or supplement for peer review). You'd be encouraged as an author to publish research that was well followed/starred, but most importantly you'd be flagged if you had a bunch of open bugs.

(Bug resolution then would have to be approved by not just the researcher, but perhaps a Stack Overflow-like "close" voting system.)
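As a toy sketch of how that might be modeled (the class, fields, and close-vote threshold are all invented for illustration, not taken from any existing tool):

    # Hypothetical "methodology bug" tracker for a published paper.
    from dataclasses import dataclass

    CLOSE_VOTES_NEEDED = 5  # analogous to a Stack Overflow close-vote quorum

    @dataclass
    class Bug:
        paper_id: str
        description: str          # e.g. "control group not randomized"
        author_accepted: bool = False
        close_votes: int = 0      # votes from other researchers to close

        def is_resolved(self) -> bool:
            # Resolution needs both the author's acknowledgement and a
            # community quorum, so authors can't silently dismiss criticism.
            return self.author_accepted and self.close_votes >= CLOSE_VOTES_NEEDED

    def open_bug_count(bugs):
        return sum(not b.is_resolved() for b in bugs)

    bugs = [Bug("arXiv:1234.5678", "effect size not reported", True, 6),
            Bug("arXiv:1234.5678", "conflicts with Smith 2012", False, 1)]
    print(open_bug_count(bugs))  # -> 1 open bug, which would flag the paper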


This is a nifty idea. The only problem I see with it is that it makes papers far less self-contained. IMO some of the best-written papers let you understand something of their impact even if you're not too familiar with the field. Taking away the lit review hurts that.

You might argue that a single collaborative lit review would do -- so readers could just print out that plus the paper. But often that lit review section is tailored to your specific goals and audience. To do it well, I think you'd end up having hundreds of such lit reviews for a given topic. In which case, why even bother?

(I didn't mean to sound so harsh, by the way! I do quite like the idea of bringing some of software's collaborative tools over to research.)


Yea that's true. And I even enjoy that self-contained aspect of papers, especially when I'm reading up on an unfamiliar field. But perhaps they could be more high-level.

Some authors go crazy. Say, for instance, explaining philosophy-of-mind concepts in an addiction paper.


+1 to github-style journal.

Also there is "The Self Journal of Science" that aims to fix peer-review and exterminate bad publishing habits --- http://sjscience.org/

And my own marginal, but open source, project --- http://papers-gamma.link


Also you have PKP, which even organises a conf about this stuff: http://pkp.sfu.ca/pkp2015


Ideally the lit review is less a review and more of an outline of the current state of knowledge and, most importantly, where the gaps are (that this paper fills!). In practice you're often correct though, the lit review is more "look at all the papers I read".


> “Authors still create journals in prose-style — do we really need to produce all that text?”

I used to think that the answer was no, but recently I've become convinced that the answer is yes, we really do need to produce all that text. First, 'all that text' is really a misnomer; scientific papers are pretty dense (cf. any letter to Nature). Second, the audience here is not a machine, it's a human, and humans are inherently biased toward stories. The best papers have a story (in the journalism sense, not the fiction sense), and they use this story to make a convincing argument. Without the story, you have data, and data is not an argument. As much as I'd like to believe that the data would be enough, we have to remember that we're not producing arguments in a vacuum. The audience comes to the table with a lot of preexisting ideas, biases and beliefs, and unwinding these is non-trivial. Think about evolution and On the Origin of Species. Looking at the data now, it seems so obvious, but even the data along with stacks of arguments was not enough to convince people for a very long time. It's work to re-write a person's mental models - all that text is needed to get the information in there.


I hope people who care about this issue are aware of the Open Science Framework [0], which is being developed by the Center for Open Science [1]. It's a great project that's helping to open up the entire process of doing science.

[0] https://osf.io

[1] https://centerforopenscience.org


I work for a company that is trying to change this - academia.edu. We're all about open access to the world's academic research. Peer review is definitely something we're working on!

We're also hiring in SF, if this interests you (jon at academia.edu)


My privacy expectations for peer review are quite high. Perhaps higher than in any other case.

However, I expect the entire world to know any time I accidentally follow a link to academia.edu. And I especially expect that academia.edu will try their damnedest to figure out who I am and sell that information to someone. Put simply, academia.edu feels like linkedin.com.

This juxtaposition is going to be a huge issue for academia.edu if they try to get into the peer review software game.

I would sooner not participate on an editorial board/program committee than use academia.edu to submit a blind review. I simply would not trust a platform associated with that site or its parent company to take privacy seriously.

I suspect this attitude is fairly common. Linkedin/Facebook/Academia.edu exist solely because privacy isn't perceived as more valuable than the services provided. But every academic I know is extremely serious about protecting the anonymity of reviewers.

If you want such a product to succeed, you'll need an extremely strong pro-privacy pitch (backed up by The Truth) along the lines of "your data is used for exactly nothing other than displaying peer review results for this particular paper to the authors and the other board/committee members".

Especially since most academics' perception of academia.edu is "that site that lets me see who reads my papers."


1) Have a good idea X. 2) Reach out for funding and technical knowledge. 3) Investors and new co-founders want "Facebook for X". 4) Good idea is now a social network embedded in X's problem space.


> We're all about open access to the world's academic research.

Is this really true? academia.edu seems to me to be a walled garden, essentially linkedin for professors. It's true that I can download one paper at a time -- after I have signed up -- but can I download all of them in bulk? Can I easily rehost all my papers elsewhere? Can I view papers without signing up? Can I download your social graph?


Sorry to be critical but whatever the solution is going to be, I hope it will not involve some version of Facebook for academia. This would make things only worse. The infrastructure of science cannot be allowed to be under the control of a company. Too much power in one place, too much conflict of interest.


I think the Internet has done that already. Peer review now represents a much larger critical constituency.



