Let’s make peer review scientific (nature.com)
279 points by return0 on July 6, 2016 | 141 comments



Shameless plug: I'm working at the MIT Media Lab on the PubPub project (http://www.pubpub.org), a free platform for totally open publishing designed to solve a lot of these problems:

One is peer review, which, as some have already mentioned, needs to be done in an open, ongoing, and interactive forum. Making peer review transparent to both parties (and the public) makes everyone more honest.

Another is the treatment of publication itself as the ultimate goal. Instead, we need to think of documents as evolving, growing bodies of knowledge and compilations of ongoing research. Every step of the scientific process is important, yet most of it is flattened and compressed and lost, like most negative results, which are ditched in search of sexy click-bait headliner results.

Another is the role of publishers as gatekeepers and arbiters of truth. We need a medium in which anyone can curate a journal, and in which submission, review, and acceptance procedures are consistent and transparent.

Another is the nature of the medium itself. It's 2016, and these dead, flat, static PDFs are functionally identical to the paper they replaced! Insert your favorite Bret Victor/Ted Nelson rant here: we need modern, digitally-native documents that are as rich as the information they contain.

Another is reproducibility. We should be able to see the code that transformed the raw dataset, tweak it, and publish our own fork, while automatically keeping the thread of attribution (a rough sketch of what that thread could look like follows at the end of this list).

The list goes on and on...
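
To make the reproducibility point concrete, here's a rough sketch (hypothetical names only; nothing here is PubPub's actual data model) of what a machine-readable attribution record could look like when someone forks the analysis behind a publication:

    // Hypothetical sketch; none of these names come from PubPub itself.
    interface AnalysisFork {
      forkedFrom: string;      // DOI or URL of the original publication
      originalAuthors: string[];
      codeRepository: string;  // where the data-transforming code lives
      datasetId: string;       // identifier of the raw dataset
      changesSummary: string;  // what was tweaked relative to the original
      forkAuthors: string[];
    }

    // Publishing a fork appends to the attribution chain rather than replacing it.
    function extendAttribution(chain: AnalysisFork[], fork: AnalysisFork): AnalysisFork[] {
      return [...chain, fork];
    }

The exact shape matters less than the principle: the chain of provenance travels with the work automatically, rather than depending on authors remembering to cite the original.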


How are you planning on addressing the fact that none of this matters in the slightest if academic career growth/incentives/reputation specifically revolves around publishing boring pdfs in established journals? What do I gain by sending my work to your platform where the first question anyone in my "target audience" will ask is "why didn't this get published in a real journal, what's wrong with it?", and I will get essentially zero resume kudos points for it? Keeping in mind that it took me at least many months to do the work, I don't exactly have an abundance of papers to throw around, and my officemate almost certainly will submit work of similar quality to a traditional journal and reap the benefits of that.

Just saying, my decision to publish in traditional journals isn't so much a decision as it is a requirement of the very career I am attempting to pursue.


Great question, and one that certainly doesn't have a straightforward or trivial answer. It's definitely more of a social challenge than a technical one - making publishing free/open won't do anything to fix incentives on its own.

My hunch is that change to this system will come from the outside. It's too risky of a career decision for a tenure-track professor to start publishing on PubPub (or any open/new system). But, there are lots of people who aren't playing that game. Lots of people who are doing science outside of academia, at a corporate R&D position, or for the sake of education, etc.

The most important step is to show that open publishing works. If we can work with these early adopters and show that conversations are richer, or results more reproducible, we can start to go to universities and grant agencies and advocate for them to require open publishing. The first day that a university hires a professor or an agency awards a grant based on a history of openly published work will be a turning point. I hope it will be similar to the first time a software dev was hired for their Github profile, rather than their CS degree.

Today, software companies hire on experience. A university degree can show that, but so can major contributions to an open-source project. I hope science can become the same. Whether you're a PhD out of a great program, or a high-school dropout who has committed her life to rigorous experimentation, your demonstrated experience should be what you're hired on, not the list of journals that have found it in their interest (many of them are for-profit) to include your work.


Perhaps offering some sort of crowdsourced funding mechanism and a reputation system would go a long way toward correcting some of these incentives?

For example, giving authors / organizations a Bitcoin address where they can receive funds from individuals / organizations who want to support their research.

Also, awarding reputation to authors based on the level of peer review their research has successfully undergone (number of peers, level of rigor, etc.), and conversely awarding reputation and funding to those who perform peer reviews. Allowing users to contribute to a peer review fund for individual articles or in general.

All that to say this is very exciting and opens up a lot of new possibilities.


> For example, giving authors / organizations a Bitcoin address where they can receive funds from individuals / organizations who want to support their research.

That's a fantastic idea. Maybe we could call this "depository" of money to conduct research something like, hmmm, what's a good word… a grant?

> Also, awarding reputation to authors based on the level of peer review their research has successfully undergone (number of peers, level of rigor, etc.), and conversely awarding reputation and funding to those who perform peer reviews.

Sounds fantastic as well! Maybe these authors could create, like, a website or curriculum vitae where they could list their accomplishments to establish their reputation. You know, they could have a section in their medium of choice that could be titled something like "selected peer-reviewed articles" where they'll list their publications along with their coauthors and the journal each appeared in. Maybe these journals could devise some kind of ranking to measure reputation. Maybe they could call it something like… amount of impact, or maybe just impact factor for short. I think this could work really well.

> Allowing users to contribute to a peer review fund for individual articles or in general.

Maybe a general fund should be created to support science! Maybe a national science fund or something, governed by a so-called national science foundation who can vote scientists, engineers, and the like onto their board to steer the allocation of funding.

I really think you're onto something very good here!


> Maybe we could call this "depository" of money to conduct research something like, hmmm, what's a good word… a grant?

Nah, that word is already in use for stagnant allocations of academic welfare to work on bullshit instead of transformative techniques (e.g. CAR T-cells, which NIH refused to fund for years). Need a new word to signify "money that is actually intended to produce results" instead of "a pension for irrelevant tenured senior PIs to pay non-English-speakers below-minimum-wage to work on topics that became irrelevant a decade ago".

> Maybe they could call it something like… amount of impact or maybe just impact factor for short. I think this could work really well.

Ah yes, impact factor is such an amazing tool. It allows "executive" "leadership" types to predict (very poorly, but who cares?) how many citations a paper might receive if it survives the months or years between submission and publication in a major journal. Trouble is, JIF is massively massaged, and the COI that Thomson Reuters has in equitably enforcing it is ridiculous.

WARNING: Non-peer-reviewed work ahead! If you're not careful, you might have to apply critical thinking to it!

http://biorxiv.org/content/early/2016/07/05/062109

> Maybe a general fund should be created to support science!

That's a great theory. Perhaps it can be as well executed as the CIHR fund (where study section has given way to "ignore everyone who doesn't suck my dick directly") or NSF (whose yearly funding is dwarfed by the R&D funding at a single company). This approach is working out very well!

You know, if I didn't know better, I might think you were the sort of researcher that fails to look at the details and just submits your most fashionable bullshit to whatever journal at which your pals happen to be editors. I might get the impression that you're the cancer which is killing grant-funded science, which prizes large labs over large numbers of R01 projects, which believes that O&A is an entitlement to take out mortgages on new buildings instead of to pay for the costs of disbursing and administering a grant. But, since the evidence isn't before me, I won't.

It would be nice if you thought a little more carefully about what you wrote. The devil is in the details.


> or NSF (whose yearly funding is dwarfed by the R&D funding at a single company)

If the worst thing you can say about the NSF is that they need more money, that makes it sound like GP has come up with a nice way to allocate the available funding towards particular research projects.

> It would be nice if you thought a little more carefully about what you wrote. The devil is in the details.

Details like how to get "crowdfunding" to put up enough money that "independent scientist" can be a full time job and not just a hobby for the odd few who somehow already have most of the needed lab facilities/equipment?


Also: I still haven't heard (from either you or the previous parent poster) how journal impact factor can possibly be justifiable as a metric for relevance.

Anyone surveying the actual citation distributions at major journals will immediately note that a metric assuming near-normality cannot possibly summarize non-normal distributions of citations. The latter describes nearly all journals, so even if JIF were not manipulable by stacking, self-citation, and negotiated exclusion of items to decrease the denominator, it would still suck.

https://quantixed.wordpress.com/2016/01/05/the-great-curve-i...
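
To make the statistics concrete, here is a toy sketch (in TypeScript, with made-up parameters rather than real citation data) of why an arithmetic mean, which is essentially what JIF reports, says little about the typical paper in a heavy-tailed citation distribution like the ones in the post above:

    // Toy simulation only: citation counts drawn from a heavy-tailed
    // (log-normal) distribution, roughly the shape seen for real journals.
    function sampleLogNormal(mu: number, sigma: number): number {
      // Box-Muller transform for a standard normal, then exponentiate.
      const u1 = 1 - Math.random();
      const u2 = Math.random();
      const z = Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
      return Math.exp(mu + sigma * z);
    }

    const citations = Array.from({ length: 10000 }, () =>
      Math.floor(sampleLogNormal(1.0, 1.5))
    );

    // "JIF-style" arithmetic mean vs. the median (the typical paper).
    const mean = citations.reduce((a, b) => a + b, 0) / citations.length;
    const median = [...citations].sort((a, b) => a - b)[citations.length / 2];
    console.log(mean, median); // the mean sits well above the median

A handful of highly cited papers drag the mean upward, so the number on the journal's masthead describes almost none of its actual articles.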

Look carefully at the details! This metric is among the most frequently emphasized by researchers who comprise study sections, and it is objectively terrible.

I'm not whining "just because" -- many of the lines in my CV end with NEJM, Nature, or Cell (no Science paper yet). I'm saying that at least one of the commonly accepted metrics for individual investigators is broken. That sort of detail corrupts the entire rest of the system.

I'm also not saying that a direct public-facing system wouldn't have huge potential problems (although it is nice to see attempts like experiment.com seemingly doing OK, and the funders realizing, hey, there are a lot of shades of gray between "utter bullshit" and "exactly the right experimental design for the question being asked").

One of the nice things about talking directly with folks at NIH, for example, is that they recognize there are serious issues with the incentives in place. If they are willing to collect the data and evaluate (publicly, e.g. in Chalk Talk postings) the findings, doesn't that suggest room for the current system to improve?


I take it you're not familiar with "crowdfunding" sources like the AACR, LLS, ASCO, or other professional societies?

As someone who is funded by several of the above, and who noted that their review processes were substantially less bullshit-intensive yet no less rigorous than NIH review (which has many benefits, efficiency not among them), I'm going to go out on a limb and suggest that it's possible.

As far as the NSF, they do a good job with what they have, but what they have is not commensurate with what we as a society could stand to spend on science. Even NCI is a far cry from that: https://pbs.twimg.com/media/CmLJzKQWkAAl372.jpg:small

Distributions are similar for various other avenues of funding, and it is quite clear that the overhead & administrative costs requested by many recipient institutions are far out of proportion to actual needs, so the impact of the funding allocations is further reduced.

Thus it appears that a direct conduit from potential patrons to researchers is, in fact, desirable. Otherwise, services like experiment.com would not exist. They're not at the level of an NIH study section (duh?) but they have consistently produced a small stream of usable results that belie their supposed irrelevance. Once upon a time, the Royal Society existed for just such matchmaking: find a rich patron and a promising young scientist and line them up. You've likely noticed that many if not most major universities and "centers of excellence" rely upon exactly this model, supplemented with NIH or NSF grants, to exist. Further modularizing the model so that an administrative hand yanking out bloated "indirects" at every turn is not mandated, or (alternatively) being more transparent with said O&A requests, might at least bring some of the bullshit under control.

The public clearly wants accountability. The masses may be asses, but if we want their money, we really ought to be transparent about what we're doing with it.


The difference between professional societies and crowdfunding is that professionals, not the crowd who donate directly, decide which projects to fund. In this sense, I do not see a great qualitative difference from government funding agencies --- if you do, please elaborate.

EDIT: And to clarify, in the societies I know, general members do not directly take part in grant decision processes. Rather, the decisions are made by a small panel, possibly together with external reviewers. This is fairly different from crowdsourcing.


It's different from crowdsourcing, but the source and sink for the funds also tend to be more closely related. Ultimately I don't really believe that major initiatives (eg P01-level grants) can be adequately reviewed by anything other than genuine peers.

But by the same token, an exploratory study requesting $30k for field work or sample processing could very well be evaluated by less skilled peers. Actually, I think I'm going to try and shop this to a friend at NIH. I'll fail, most likely, but at least I won't just be whining.

For example, pharma and big donors use the LLS review system as a "study section lite" to hand out grants larger than a typical R01. The paperwork and BS isn't really necessary at that level and just gets in the way. If something like this existed for "lark" projects, inside or outside of NIH/NSF, perhaps more diverse and potentially diversifying proposals would be worth submitting.


To some (fairly large, in the case of ASCO or ASH or AACR, perhaps smaller for LLS or AHA) degree, the dues-paying professionals in these societies are the crowd. I would say they are a middle ground between something like an experiment.com or similar at one extreme, and NIH (which has inordinate purely political input -- ask your program officer!) at the other.

We shan't discuss scams like Komen here, but genuine research foundations can exist along a continuum.


The paperwork burden for an NIH grant (relative to a society grant) is often a large scalar multiple. The accountability is often on a par with, or less than, the typical society grant. It mystifies me why this should be so.


I feel like those fields with the highest facility needs/costs would come last, if at all. There are many fields that require pretty small amounts of resources, for example: Computer Science (I did most of my research on a personal laptop, with other equipment costs <$5,000), Mathematics, Philosophy, Economics, Psychology.

All of these seem very possible to crowdfund with the ultimate goal of unhooking them from perverse incentive systems of typical universities.


At least three of the above are in fact supported by experiment.com backers, although largely as a "bridge" to more traditional scholarly outlets. That said, if you go out and get extramural funding for your work, generally that is the defining characteristic of a successful PI, so...


I wasn't saying the current system is great; there's a lot wrong with it, yes, I agree. The impact factor thing is a pretty silly metric to me as well; I agree with you there. The point I was trying to make, albeit sarcastically, was that the system the guy proposed is what we have today, just without the extra hoops to jump through. Like, I have absolutely no interest in maintaining a Bitcoin wallet or whatever, nor do I want anything to do with them. I'll take my funding in dollars or euros or something real and tangible, please.


Mostly agreed, although I did consider using an HPC allocation to mine bitcoins & hire work study students. But then it turned out that if you study interesting stuff and write the ad correctly, you'll have to beat them away with a stick. For good measure, I convinced one of our corporate patrons that they ought to pay for one of the students.

As far as extra hoops, it's not clear to me whether endless NIH paperwork bloat and ICMJE declarations are more or less onerous than crowdsourcing type stuff. I tend to think there must be a happy medium, but I could just be naive.


Oooo sarcasm. You're probably right though, the old system seems to be working out pretty well. Besides, science is all about never questioning existing institutions right?


It was sarcasm, yes, but my point was that what you proposed already exists, just without the cryptocurrency bullshit and extra hoops to jump through.

I didn't say the current system was flawless. I'm just saying your proposal is the current reality already.


I think it's a political problem. Most researchers and professors tend to be employed by the state. Lobby the state to give very high preference to open access publication. In an ideal world I'd read a sentence like this in a hiring guideline for state universities: "only research that is available to all citizens of this state shall be considered as a contribution by the candidate"

Additionally, for research project funding, I want to read a similar sentence in the proposal outline that you can typically download for the grant application: "Only list your research projects and open access publications in the Previous Related Work section" (or "high preference will be given to ...").

Basically state funded research that is not available to citizens should be heavily penalized. I think that is logically very consistent. In fact I'd love to GPL the process and make it a requirement to OA everything as soon as a single cent of money goes towards the research.


Christ, this would be brilliant. The trouble is that you have otherwise-decent people like Kirsten Gillibrand completely in the pocket of RELX. You have massive COI and crap like the PACE trial, whose investigators WILL NOT, under any circumstances, release their data, and who consider academic inquiries regarding said data (the foundation of any scientific publication) "vexatious".

In short, you're better off trying to get policy changed at NIH and other, major funders of research to push openness (Google and major tech companies already are quite good about this), and only resorting to legal challenges when the satanically evil Elsevier and friends attempt to buy off legislators.

If you were wondering whether academic institutions typically stand up for scholars (as opposed to faucets of grant money and indirects, who can behave as they please), kindly refer to the following. http://boingboing.net/2015/12/11/what-will-it-take-to-get-mi...

Academia is not scholarship. The institutions of academia are in fact opposed to scholarship if it might make their senior "leaders" (often far removed from any scholarly pursuits they might once have had) look bad. Academia responds wonderfully to incentives, though, and that's where I do believe you have nailed it.

Decision: Accept with Minor Revisions

;-)


I think the answer is closely tied to funding. The whole prestige of impact factor, national academy membership, etc. are basic heuristics that help the current funding process. Grant review committees composed of already-established scientists and administrators are more likely to fund what will show up in boring paywalled PDFs.

Given another way scientists could make a living while pursuing research, I think you'd see some stop feeling pressured into publishing in the traditional way and start experimenting with altmetrics, open access, and open post-publication peer review.


> where the first question anyone in my "target audience" will ask is "why didn't this get published in a real journal, what's wrong with it?"

One would think that this wouldn't happen simply because it is (I think) a combination of two of the most well-known logical fallacies, "argumentum ad populum" and "appeal to authority" https://yourlogicalfallacyis.com/appeal-to-authority

I am extremely dismayed by the fact that, in a career where (of ALL careers!) the evidence should ostensibly speak for itself, papers (and their information) are essentially pre-rejected or pre-accepted based on reputation


Conferences and journals are a filter. If your paper can't pass a few rounds of anonymized review (for a conference), then it probably isn't worth the reader's time.

As an example, arXiv has a lot of great papers, but it also has a lot of bad ones. I'm unlikely to go there to find a paper unless someone referred me to a specific one.


I've never had a paper rejected from a conference, and I've seen some pretty shitty ones at conferences. I think this statement is pretty field-specific.

That said, in the CS and EE fields at least, the use of 10-12 referees for a short, dense conference paper is a really good idea. It would appear that this works primarily because of... drum roll ...the social norms and incentives in place within these specific fields.


As an academic in biomedical sciences and as someone who tried to create a journal that did all the things the OP said, thank you.

The old crappy system is mandatory. You can't decide to do otherwise and still have a job.

The impetus for change needs to start at the funding-source level: NIH. They need to create a journal that you are required to publish in to keep your funding.


They have one (several, actually). The trouble is that they're called Cell, Nature, and Science; they're all for-profit; and they answer to no one.

The Wellcome Trust is walking the walk that NIH talked for the past decade. We will see how this goes. Cross fingers.


With eLife?


No, much bigger. They're going to start requiring OA.


Peer review also needs a reboot. It's epically unhelpful and slows down publishing dramatically. I think they should mandate post-publication review.


The review model in the WTC's new world order is open, as it currently is in the F1000 journals, and the Wellcome Trust pays the APC so there is no excuse for grantees not to publish. This slowly kicks away all the bullshit excuses that are presented for lack of scientific productivity so that the funder can objectively assess "what are these people doing with our money, and does that advance real, non-press-release science?" It's brilliant imho.

https://wellcome.ac.uk/news/why-were-launching-new-publishin...

Momentum is important. There's no reverse on a submarine. One of the biggest funders in the world has taken away a standard excuse for failing to openly report results, negative or otherwise. It is free to readers and free to authors funded by the Trust. And all of the reviewing process is out in the open.

This is a huge step. The only other funding body I can think of with similar weight is NIH, and (at least internally) they're starting to move in the same direction.


I'm a new professor in the US, so I'm happy for you, but it looks like we'll have to wait for the NIH to do its job.

F1000 is entirely post-publication review where reviewers' identities are known? If so, I love it!


The NIH does a pretty decent job all things considered. With an organization that size, there is considerable inertia, but newer blood is starting to fix some of the older problems.

Best of luck -- work on stuff that matters, if you can :-)


IMO, the change has to come from the grant financiers, and a case can be made that they get more bang for their buck by adding requirements such as pre-registration, open data publishing (so long as they don't want to keep the data proprietary, of course), and guaranteed publication regardless of outcome. The fruits of what they purchased can be of more use to the field in this manner, and even they themselves may find they spend less money if the 'failures' are documented in such a way that they don't inadvertently fund them repeatedly because the initial results were hidden away in a filing cabinet.

I think the same case could be made for placing more emphasis on funding replication -- instead of thinking 'we learned it, why revisit it?', you're checking if it's actually valuable information worth spending more on, or just a fluke.


It could work more like the open source software project model: you would get your respect from having several "accepted pull requests" to the literature. The difference is that your PRs would be better framed in terms of identifying your specific contribution.

As it stands now, the publishing model is the equivalent of writing your own version of the project -- with no easy metadata about your specific innovation -- and asking it to be accepted or rejected as a whole.

With that said, it's understandable why people keep doing the latter until academia has standardized on the former.


In mathematics, Perelman posted his beautiful three-part proof of the Poincaré conjecture to arXiv, and within a day there was allegedly hype (or so reputed professors report; while I did go into advanced algebraic topology later, I was 14 at the time). Professors (the same professors who'd referee) were independently reviewing Part 1 within 24 hours, and this was in 2002.

Then you have Fields Medal winner Voevodsky, who also bitches about having a paper stuck in peer-review limbo for three years.[0]

In cosmology (I am not a physicist in the slightest, but I have enough mathematical background that I can watch lower-level graduate classes without too much difficulty in a recreational capacity), pre-prints get e-mailed around daily, such that if Lenny Susskind posts something on evening-{n}, morning-{n+1} will have most of the field commenting on it.

In biological sciences (again, just from hearsay, mind you) I understand it's different (e.g. in medicine, specifically, my uncles/father/their colleagues and students are all concerned with the impact factor of their papers and/or their doctoral students' papers).

I also realize some journals have a "no preprint" policy. But a lot of heavy hitters (Nature, Elsevier, Springer[1]) allow the circulation of pre-prints now. So it's not a binary decision between one or the other in some cases. (And in the instances where you are able to post pre-prints online, if you've made a mistake, a colleague could inform you and you can issue a retraction hopefully before the $month+1's Nature Methods hits physical press.)

---

Side-bar, skip over if you don't care about proof theory, type theory, constructive logic, univalent foundations, or category theory

[0] Vlad on Univalent Foundations http://www.math.ias.edu/vladimir/files/univalent_foundations...

Interestingly enough, for you category theory fans, Vlad has recently become a huge proponent of homotopy type theory[0.b]. Everyone knows Wildberger has kooky ideas w/r/t infinitesimals, but Vlad too shares the same 'feeling[0.a]' that analysis (and to a lesser extent axiomatic set theory) just doesn't 'feel' foundationally correct. He's a huge proponent of Coq, Agda, and dependent typing instead. Princeton's IAS put out the seminal text that's worth a read. No advanced mathematics required (or even advanced CS knowledge; it's all self-contained).

[0.a] I can't really put into words what the 'feeling' is any more than describing why some people find algebraic proofs more elegant than analytic or geometric. But in the same way that there's mathematical beauty [0.a.i] well agreed upon, this 'gut feeling' too is agreed upon amongst those who fall into a certain category (pun not intended)

[0.a.i] https://en.wikipedia.org/wiki/Mathematical_beauty

[0.b] https://homotopytypetheory.org/book/ -- think of this as the best minds, in aggregate, sitting down at Princeton to write one of the best texts on proof theory, type theory, and logic. Practical Foundations for Programming Languages (Robert Harper) is a good companion text.

[1] https://en.wikipedia.org/wiki/List_of_academic_journals_by_p...


My god I love coming here and reading the travails of real scientists. The journals that have a "no preprint" policy are primarily Cell Press (link) rags and Annals of Oncology.

http://crosstalk.cell.com/blog/lets-talk-about-preprint-serv...

Note the insane amount of FUD. "We're fine with preprints, as long as we can control them!" But that's kind of the point, isn't it -- wrestling back control from journals with a massive inherent COI! Allowing the public (lay or scientific) to objectively determine what's real and what isn't, with visible salvos from both camps as needed, instead of the invisible and unaccountable hand of an editor or anonymous referees? (I sign my reviews.)

Physics doesn't seem to have suffered from preprints -- if anything, it has thrived more than expected thanks to them. I don't hear about a replicability crisis in physics, for some reason. I hear about it all the time in biomedical sciences, perhaps because the review process is so incredibly shady for the "top" journals (in terms of impact factor, a statistically indefensible and criminally manipulated metric). Papers that are outright rejected or in need of major revisions are simply yanked and resubmitted elsewhere (I am guilty of this, by the way; rejected from Nature, so we sent it to NEJM with almost no edits and they accepted it a week later). Senior authors act rationally, so if perfecting a paper will cause a delay and potentially deny them priority on a discovery, who can argue for making the edits instead of resubmitting elsewhere?

All of this goes away when preprints are the default. Priority is established by deposition, rather than publication; the discovery often (though perhaps not always) is itself enough to prompt review from rivals, even if only in self-interest; and the preprint status makes it clear that the item has not been formally reviewed. Many or most resulting referee comments are public, often submitted as related deposits. From a scholarship perspective, I cannot for the life of me see the downside, other than reticence to boldly state the truth and face retaliation from shysters. It's a shame that the latter is an issue, but given the even larger amounts of $$$ at stake in physics, it seems that biomedical researchers are mostly just cowards using this as an excuse. (JMHO)


What's wrong with PDFs? PDFs are great because it's a downloadable format and when I create one I know that what I see is what other people will see. The same can not be said for a website format where people use different browsers, different devices, and different versions of different software.

I just don't see what's wrong with using PDFs. If I want to read one of the papers on your website on my phone when I don't have Internet, how do I do it? If I just keep the page open on my phone's browser it's just going to go away at some point as my phone's OS clears the browser's cache to make room for other apps that I'm actively using. With PDFs I can just open the pdf I've downloaded.


Any standardized downloadable format (including VMs or AMIs) also fits these criteria. PDFs aren't inherently bad, but on occasion they are not the best tool for communicating results.

For example, Sean Morrison's recent paper on live-imaging stem cells in vivo benefited greatly from inclusion of movies showing the live-processed data. Their point was that live adult stem cells can be imaged, and this revealed niche biology. The only way to support this was to provide the proof, and MPEG did this nicely.

Agreed regarding permanent vs ephemeral, though.


I agree with most of your points, but disagree strenuously with this one:

It's 2016, and these dead, flat, static PDFs are functionally identical to the paper they replaced!

There's a lot of value in static PDFs and how we format documents for publication. $CURRENT_YEAR doesn't invalidate good ideas from ($CURRENT_YEAR-n).

Just in terms of technology, static PDFs are:

- Easily distributed.

- Easily archived.

- Readable forever, even as platforms change incompatibly.

- Readable on any device.

20 years from now, do you really want to have to run an old browser, or debug shoddy JS/HTML/whatever, just to read a paper from $CURRENT_YEAR-20?


> [W]e need modern, digitally-native documents that are as rich as the information they contain.

It's best to separate interactive parts from the non-interactive. When I read papers I always print them, because I can take notes, underline, draw shapes, comment, etc., all in-line with a pen that I'll buy from the closest little shop if I don't have one with me. It is more convenient than any software tool out there, because there is basically no limitation to how I'll use the pen on the (actual) paper.

But how will I be able to print a video, or a graphic that has some controls to be fiddled with?

But they can be provided separately, and if I have the media, I'll use them too.


If I might give my opinion on which features PubPub should implement next:

- adding "Math in-line $$" in Formatting menu

- enabling upload of LaTeX files

- enabling upload of PDFs, to store existing research on PubPub (combined with via.hypothes.is, one can annotate PDFs, which is in the spirit of PubPub)

- include some interesting features of SJS like grading, download as PDF, add to personal library...

I am impressed at the job you've done for a graduate project! Best wishes :-)


Shameless promotion: SageMathCloud (https://cloud.sagemath.com) does many of those things, is also open source under the same license as PubPub, and built on similar technologies to PubPub (React, CodeMirror).


Why reinvent this wheel? Why not instead (also) partner with people like Overleaf who already have many of these features?

https://www.overleaf.com


that appears to be totally proprietary


not really

https://github.com/overleaf

Some parts of it are (duh?) but overall the system is astoundingly transparent. I think they ended up charging for the sync-to-git option, but if you look at the above repos, it appears that this feature can actually be pulled on the user end rather than just pushed from Overleaf.

Overleaf integrates nicely with existing journals and with preprint servers to some extent; I've bugged them for some time to support Markdown (I don't care to use LaTeX for manuscripts which don't have much math) but as yet they have not implemented it. It may be an issue with the backend, although even that should be surmountable.


not sure how to parse all that. At the least, it's not straightforward, and my original comment is 100% true: "that appears to be totally proprietary" (emphasis added)

If a project is actually FLO, you'd think they'd make that clear to someone visiting their website.


They're a corporation. Some of their decisions are made accordingly. The most interesting bits, though (version controlled dumps of the LaTeX you're working on) do have open implementations.

Ultimately, it (like Paperpile) is a service that is built from mostly open components, integrates nicely with OA workflows, and has no real lock-in. I'm not sure what you're afraid of -- a paper will unsubmit itself? LaTeX will go closed source? Since your entire version history is there for the taking at any time, there's not much downside, and for our manuscripts & tech reports we've seen huge upside.

Ymmv!


Haha - not my project, I just work on it for now. But thanks for the great ideas!


I still don't... get what PubPub is. I definitely think that peer review should be updated and science should be using the internet better, I just have no idea what PubPub has to do with that mission. Is it just... a place where people can post things, and other people can comment, and other people can curate posts? So like any blogging platform?

Even if used as intended, it seems like the only comments would go on things that people are already seeking out; there hasn't been a "random pub" button since it got overtaken with spam. So the use of comments to "review" is already throwing up a huge barrier to discovering new science and leveling the playing field for unrecognized researchers, exacerbating the problems of the current system. The inability to anonymize comments or posts gets rid of the most promising way the internet could affect peer review. It's hard to see what about the platform promotes evolving documents as opposed to static publications. And complex, digitally native papers are already being promoted by larger journals, with better resources to implement them, and more staff to help potentially unfamiliar scientists with the process, another aspect that could make the platform a source of exclusion and bias.

I don't mean to sound too critical; I just follow both science publishing and projects coming out of the Media Lab pretty closely, and have been baffled by this one for a while. https://www.sciencematters.io/ is doing cool things with switching from static publications to a set of growing datapoints, updated in real time, http://meta.com/ (formerly sciencescape) is dipping their toes into curating papers in a really interesting way, and even http://the-artifice.com/ has a unique peer review system that I think could be implemented well in the hard sciences as well. PubPub is just like... Medium? With a little bit of GitHub, and the ethos of arXiv?


So I'm very interested in going beyond PDFs, but I can't for the life of me figure out what PubPub intends to do. I get that there's an online collaboration engine, as is offered by many places like Authorea, ShareLaTeX, Overleaf, and Fidus Writer. What does PubPub do differently here?


Plugins and views!

We're currently working on a significant architecture refactor that will enable 3rd party plugins to view arbitrary data assets inline, like a reactive spreadsheet or a D3 visualization.

We're also working on deeper, richer hyperlinks between documents that would let you reference or quote other publications in the system with arbitrary resolution (down to the section/paragraph/sentence/number).
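
To give a rough shape to both ideas (purely hypothetical names; the refactor isn't finished, so none of this is our actual API):

    // Hypothetical sketch only; names are illustrative, not PubPub's real API.
    import * as React from "react";

    // A third-party plugin declares which asset types it can render inline.
    interface AssetPlugin {
      handles: string[]; // MIME types it claims, e.g. ["text/csv"]
      render(data: ArrayBuffer, options?: Record<string, unknown>): React.ReactElement;
    }

    // A toy "reactive spreadsheet" plugin for CSV assets.
    const csvTablePlugin: AssetPlugin = {
      handles: ["text/csv"],
      render(data) {
        const rows = new TextDecoder().decode(data).trim().split("\n");
        return React.createElement(
          "table",
          null,
          rows.map((row, i) =>
            React.createElement(
              "tr",
              { key: i },
              row.split(",").map((cell, j) => React.createElement("td", { key: j }, cell))
            )
          )
        );
      },
    };

    // A cross-document reference that can resolve to arbitrary granularity.
    interface PubReference {
      pubId: string;
      section?: string;
      paragraph?: number;
      sentence?: number;
    }

The point of the second interface is that a citation stops being "see paper X" and becomes "see sentence 3 of paragraph 2 in section 4 of paper X", which readers and tools can both resolve.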


OK. Color me skeptical...


That’s fair. We think it’s important to be able to reference anything within the system with arbitrary resolution, so that’s our goal. Whether that can be implemented well or not is certainly a technical challenge, but hard problems are the fun ones, right?


I think the point being made was that already existing sites didn't change the status quo, and so you'll need something much better to achieve what they failed to.


"anyone can curate a journal"

What does the word "anyone" mean? If you literally mean anyone, then you are getting rid of the idea of "peer", in which case you seem to be excessively idealistic. Even Wikipedia eventually had to impose limits.


> Another is the nature of the medium itself. It's 2016, and these dead, flat, static PDFs are functionally identical to the paper they replaced!

Well, many of the existing publishers (at least ACS, Nature group, Wiley, Science group) are already pushing hard towards something "more" than PDF, mainly for DRM reasons. And scientists in general are resisting greatly, because PDF is such an established format, supported on all devices and platforms, can be created by the tools we use, and can be shared freely amongst peers. The barrier to adoption for a new format is extremely high.


I'm also concerned about long-term readability. Some of those documents might have at least a few hundred years of people wanting to read them ahead of them. Can any of those innovative new data formats stay around long enough, especially if they're proprietary? We might change computation platforms and hardware paradigms a few times as well; just think back to 20 years ago. A lot of measured raw data is already lost due to this; it's nearly impossible to parse anymore. Pure text can survive far more easily.


Hi! I really think what you are doing here is great and I hope you will succeed!

Is there a way to test the platform without creating too much garbage? Like a sandbox version maybe?


Everything is open-sourced, and we've tried to document as much as possible to make it easy for people to host their own instances! Alternatively, you can just create arbitrarily many drafts without publishing any of them.

https://github.com/pubpub/pubpub


Running my own instance is a bit heavy just to try the platform :). But thanks for the tips, I can create drafts and delete them without trashing your instance.

The fact that it is open source is a major positive point. I suppose you have heard of SJS [1], which in a way is very close to what PubPub is, except I do not believe in SJS because it is closed source: it is way too risky for a scientific community to adopt such a medium for a new journal or a conference when the platform is not open and there is a risk of Elsevier buying it to shut it down, or closing it even more, for instance.

[1] http://sjscience.org/


> and there is a risk of Elsevier buying it to shut it down

Yup. I'm still stung by Mendeley's sale to Elsevier.


If you don't mind I have a few questions :).

– PubPub doesn't seem to have math support, is that right? I think that should be a very high priority.

– LaTeX import is also really important (but this supposes that math is supported), is that on the roadmap?

– Related question: is there a way to work offline in my usual text editor and collaborate with my usual tools (git, svn, whatever) and then import my paper in PubPub?


Math support already there: http://i.imgur.com/tHKpNrB.png

We're keen on getting good offline support added, but (as you point out) there's still a lot to be done on the web side of things. We're pushing out some big updates that hopefully clarify the project and how to use it in a couple weeks.

We welcome ideas on what good offline support would look like, or PRs if anyone wants to take a stab at it.

Our goal is for PubPub to be a public utility for scientific communication. It'll be non-profit and open source for as long as it lives (still a grad student project at the moment...) and free for anyone to publish whatever they like. I don't think we're nearly smart enough to know exactly what that looks like, so please do feel free to contribute ideas, code, or inspiration into what a public tool for science communication should be (this comments page is already wonderful in that regard).


Ah, thanks. I tried math with single $ signs.


Of course! Importing from markdown/latex/git, export to PDF/XML/etc, and lots more are coming (not in that order). We're still in very active development, so there's a lot left ahead on our roadmap.

If anyone is serious and passionate about helping out, we're hiring!


Had I not landed a position in academia, this is really the type of project I would have loved to work on if I had to do engineering rather than research ;).

But I really hope I can contribute as a user in the future :).


JoVE, the Journal of Visualized Experiments, is also a very interesting model.


I skimmed through the site and am left with the question: where do I submit my paper?


This is a great job! Congrats!!!!


No need for shame with such a worthy plug!!!!


This issue of scientists being OK with totally non-scientific processes is all too common. When I was an academic my Department used to make an enormous fuss about the year-to-year jitter in student teaching evaluations. My colleagues (all scientists) would sit around discussing why they were heroes because their evaluation went up 10% from last year or what they had to change because it went down 10%. I used to just sit there thinking that if this sort of analysis were in a paper they were reviewing, they would have ripped the authors to shreds.

Peer review fails on all levels. It does not function as a source of quality control (everything gets published eventually) and, even worse, it rarely improves the quality of the paper being reviewed. I have published dozens of papers over the years and on only one occasion has the review process improved the paper - in most cases the reviewers' demands made the papers worse by forcing me to remove important information or include irrelevant details (citing the reviewers' publications, mostly).


Peer review is broken not because the process of review is broken but because the peers are broken. Reviewers, like all academics, have screwed-up incentives to get their names on as many papers as possible and those papers cited widely, regardless of the quality of the science involved. They will always find a way to game the system to their benefit.

I recently heard about a reviewer of a friend's paper blocking publication until the reviewer could publish his own research on the same topic first. The whole system is just too screwed up to be fixed.


Yes, this sort of bad thing happens, but more in the real high-stakes game of grant review than in publications. You can easily kill the grant of one of your competitors by just giving it a mediocre review (without any basis, of course). I would see this happen all the time to my colleagues, where they would get a spiteful review on a grant that would kill it and there was nothing they could do.


At the NIH, at least, I know that enormous effort is put into ensuring that there are no conflicts of interest between the reviewers and those being reviewed.

I'm not particularly surprised to hear that some places do not put in that effort - it is a slow, painstaking process - but there are places where this is less of a problem.


I am in Australia where the pool of potential grant reviewers is pretty small for most areas. When you have a grant success rate of 15% it is all too easy to nobble your competitors via the review process.


Why is science viewed as a competition there?


For exactly the same reasons it is everywhere in the world: because there isn't enough funding to go around - and the grant application process is explicitly designed to be competitive.


It doesn't seem to be everywhere though. I know in the US, research is often collaborative and a single grant may fund many researchers. Those researchers do not seem to be competing internally for funding (though, I have seen conflict as to whose name should appear first on a publication!). I presume the same is likely true in Australia. Why is some research able to be collaborative while other must (apparently) be competitive?


Note also that collaborations compete with other collaborations for program grants, and R01s (the measure of a "real" PI) are inherently not collaborative. They are by their very nature competitive. And they are the metric by which principal investigators are judged.


Many real PIs support collaborating researchers and students though.


Right, but students are written into the grant. Ultimately, nobody gives a shit which post doc or student you put on a project; the assumption (often flawed) is that they're all the same as far as the modular budget is concerned.

If they were special, the reasoning goes, they'd have their own F32 or T32 to work on the project.


You are quite mistaken, imho. The pay line for NIH grants is around 15% and for NSF grants it's more like 10%.


It is competitive everywhere; it's just that in Australia you are playing in a much smaller pool. There just is not enough money to fund all good grant applications.


Because there's a finite pool of money.


I'm only shocked that this is the first time you'd heard of adversarial delaying. When that fails, sometimes PIs will call up their editorially inclined pals and suggest a "bundle" issue to sneak in their lab's work. There are many ways to avoid ceding priority for discovery.

If preprint deposition is the default, this all goes away. The people who lose from such a shift are editors of major journals, publishers of journals, and senior PIs who know how to rig the game. A cynical mind might propose that these are the major forces resisting such a shift.


> and even worse it rarely improves the quality of the paper being reviewed.

That might be your experience, but I think it has significantly improved the quality of a large fraction of the papers I have been an author or co-author on. I have encountered one bad reviewer in the past, and in retrospect perhaps I should have taken more notice of his report. As a reviewer, I've only had one negative incident, which was when my comments were overridden by the editor. My experience might be specific to my subject, as we only have one referee per paper.


I am curious as to what improvements the reviewers made to your papers. What areas did they improve?

In the case of the one paper of mine that was improved, one of the reviewers noticed a connection between our data and a paper published 10 years before in a different field that made our work much more interesting. Of course, after incorporating this, the editor rejected the paper despite all three reviewers saying it should be published.

The editor of the next journal I sent the paper to was happy, especially as the paper has since picked up over 1000 citations, but such are the vagaries of peer review.


The suggestions have included alternative explanations for the results, useful checks on the data analysis procedures, sensible ways to restructure a paper, useful plots which could be included, places where the text was unclear, indicating where our claims were too strong, how to strengthen the paper by comparisons with theory.

One referee's questions were useful to reveal that an undergraduate's paper (of which I was a co-author) had some fairly serious flaws which required that it be withdrawn. Fortunately this has been the only "rejection" I've suffered so far, other than papers speculatively submitted to Nature or Science which were later published in mainstream journals. In astronomy and astrophysics we are fortunate to have only 3-4 major journals with similar impact factors.


I wish that was the way the biological sciences worked.

My personal favourite peer review process story was from one of my first papers, where I wrote up the whole thing and my supervisor added his name and sent it off. This was in the pre-electronic days when you physically posted off the manuscript. Several months went by and we heard nothing. After around six months of silence my supervisor decided to ring the editor and find out what was happening (proactive he was not). The editor told him, "Oh, didn't you know that paper was published a couple of months ago" - not a single change made or reviewer's comment to deal with. I wish all my subsequent papers had gone like that :)


I think that example is a bit unfair; teaching really isn't scientific when you get to the core of it. You cannot expect professors to be able to conduct double-blind studies with student grades, and so a more overarching, non-scientific discussion is probably more amenable to actual improvement in performance.


You can actually conduct scientific studies on teaching and grading, but that is not the point I was making. What concerned me is how a department full of scientists wasted their time discussing the importance of results which were statistically meaningless. The scatter was just noise yet we wasted hours (days) of our time discussing the supposed importance of this noise.


Buried in the middle of this (wonderful) article is the heart of the problem -- academia has foolishly placed its metrics into the hands of editors and publishers, who have corrupted the living hell out of it. Ctrl-F "Cochrane" and witness the exchange between an editor, who benefited from the status quo, and a scholar, who did not and does not.

Academia != scholarship, and has not been for some time. There is no longer a good reason for traditional for-profit journals to exist. (Before someone says it: SpringerNature likes to pretend that editorial independence is possible, but they'll have no choice save to fire their "news" guys if their board asks for this).

Please recall that the entire point of the World Wide Web was to share physics papers. The arXiv exists because of physicists (who quickly noticed that HTML wasn't a good substitute for LaTeX when writing math-heavy papers). The problem is not technological. It's social. And until the incentives are fixed (the Cochrane Collaboration in Britain has gone a long, long way to address this, and now the Wellcome Trust is going even further), nothing of any real import will change. In the USA, NIH could make a lot of positive changes (and in exchanges with mid- to senior executive level directors, I honestly believe they're trying to do so). But it will take time. Academia moves with glacial speed, when it moves at all.


Perhaps it's a question of incentives. What exactly do you get out of reading a quite complex piece of work and giving your opinion?

The closest I can come, as a non-academic, is perhaps reading other people's code.

It's bloody hard. You need to concentrate, and it's not like reading a newspaper article at all. Even small errors are not easy to spot, and it's even hard to know whether the code is structured in the way the comments say.

It somewhat makes sense to do the exercise if I have to be using the codebase. If I'm just commenting and giving a thumbs up/down, it's quite easy to reduce effort, come up with some generic comments, and see how the other reviewers do. Which is a recipe for letting errors through.


Reviews shouldn't be opinion, but facts. Those facts can then be disseminated to larger audiences. The incentive is to ensure that only factual information is claimed in publication so that everyone else can be as informed as possible, thereby (hopefully) allowing us to all make better decisions (and, perhaps, this could even accelerate the above described process).


Every time you use the word "should" you beg the question.

Why, exactly, are incentives aligned to make it so? Or is the problem that the incentives are not in fact set up to encourage the desired outcome?

Anything else is wishful thinking. People seldom do what they know to be right; they do what is convenient, then repent. It follows that one must make the right thing the convenient or rewarding thing if one wishes to see humans doing it.

By and large, people are somewhat predictable. This can be used for good as well as the more familiar evil uses.


Reviews should be facts by definition of a review (as I use it), so the question is not begged.

The incentives are 'aligned' that way because it is the only way to benefit. Publishing into an echo chamber (or void) is not science, by definition.


You've been out of academia for a while, I take it?

Publons (mostly uncited) are the coin of the realm. I don't usually bother submitting a methods paper unless I know a few citations are in the tank, but some people publish just to get their metrics up. A lot of people, really. The more I think about it the luckier I feel not to have to continually publish jackoff papers. If I wasn't so lucky in terms of collaborators... Well, I guess I'd GTFO of academia, really.

Lots of folks have to "show productivity" with bullshit papers that don't matter and no one reads. Needless to say, reviewing such papers can be a rather dull affair.


I'm inclined to think that an open review process using a web system with a space for open discussion/commenting could be a good system. This wasn't possible before the web.

The current system is publish-or-not, but it is now possible to publish everything and use a rating system. This removes the risk of plagiarism and priority disputes.


Physicists have access to this if they combine the arXiv with Scirate.

https://scirate.com/

Very few choose to, and very little discussion actually happens. There is something of a chicken-or-egg problem, but the site is sufficiently established in one community (quantum info) that if folks wanted to discuss papers they could. So I take it as weak evidence that they just don't have the incentives to do so.

If a central discussion forum were anointed as the One True Forum, I weakly predict physicists would spend time defending themselves from "drive-by criticism" (i.e., criticism that is vague and takes little thought, but consumes large resources to defend against) but little constructive would get done.


arXiv is not open. At least not as open as it should be, in my opinion. But I fully agree on the chicken-or-egg problem: the main difficulty is bootstrapping such a web service.

I don't think that "anointing" a forum/web service would be a good strategy. We need to keep open the possibility of a new and better system taking over.

My feeling is that two fundamental requirements must be satisfied. 1. Anybody should be able to submit a paper. 2. Anybody must have an opportunity to become a recognized expert in a field. By anybody I mean that academic or career credentials must not come into play.

This obviously calls for a rating system on the articles and on the "experts" too, with recognized "experts" carrying more weight in the rating than random members. That makes it different from Reddit and Hacker News, though the idea is similar. Members should also be able to filter papers by rating and be notified only when a paper reaches a desired ranking, which is another difference from Hacker News and Reddit.
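To make that concrete, here is a minimal sketch of the kind of weighted rating I have in mind (all the names and numbers are hypothetical, just for illustration):

    # Hypothetical weighted-rating sketch: recognized experts' votes carry
    # more weight than those of random members, and readers are notified
    # only once a paper crosses their chosen threshold.
    def paper_score(votes, reputation):
        # votes: {member: +1 or -1}; reputation: {member: weight}, default 1.0
        total = sum(v * reputation.get(m, 1.0) for m, v in votes.items())
        weight = sum(reputation.get(m, 1.0) for m in votes)
        return total / weight if weight else 0.0

    def should_notify(votes, reputation, threshold):
        return paper_score(votes, reputation) >= threshold

    # Two experts (weight 5) outweigh three casual downvotes:
    votes = {"alice": 1, "bob": 1, "x": -1, "y": -1, "z": -1}
    reputation = {"alice": 5.0, "bob": 5.0}
    print(paper_score(votes, reputation))         # ~0.54
    print(should_notify(votes, reputation, 0.3))  # True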

I would be very interested in participating in a project of this kind.


One could start from a model that mirrors how most people already work within the systems this new one is meant to replace.

You should be able to select peers (though maybe you should be given some as well?) and "experts" to follow. You would only see their activity when you want to, but you could also participate in the larger network. As a practical matter of implementation, I think allowing groups to run segregated networks (i.e., their own "node") which can later be joined to each other makes sense for getting actual adoption and for reducing development, infrastructure, and support costs.

I think there are likely many problems with such a crude model, but that is where we find ourselves it seems. Waiting to think up something better might be prudent, but that might be difficult without trying something out.


You just described LinkedIn. A website people only update every few months/years when they're looking for a new job.


The content focus is different, at least.

In addition to collaborative document authoring (which I don't believe LinkedIn provides), data, simulations, and analysis code could be shared and built-on.



Isn't 'the scientific paper' a kind of dated concept? I'd love to see the whole scientific process from start to finish in a 'notebook' form. Why summarize findings in a paper at the end of it all, when all steps (including conception, data gathering, successes, missteps, comments, and corrections) could all be viewed as a whole? It would seem to be more in line with the intent of science.

I'm not a scientist, obviously. And, I realize that practical science has as much to do with reputation, fame, and the climbing of academic hierarchies as it does with research. But, it would be so cool to see science more as an evolving, collaborative process.


Because a summary is very useful.

Imagine you are reading about 5 years of day-in-day-out work by a team of 1500 physicists (eg. CERN, LHC, etc...).

Would you rather read the entire as-it-happened stream of research, going back 7500 person-years - or would you prefer to read a summary, in the form of a paper?

I agree that ideally the stream should be publicly available as well, for those who want to consult it - but having regular summaries, written at intervals, is very useful.


I disagree. Something similar exists on Wikipedia and it's still open to biases and agendas; it's just a matter of who is more persistent and which side the admins take.


But you don't need the 'gatekeeper'. Imagine a 'trustability' factor that people could cryptographically sign under their own identity. You could build a 'trust web', where papers link to other papers by an author, and that author's trust rating could go up based on the citer's evidence.

There's clearly tech waiting to be built here, which might or might not be a startup.
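A very rough sketch of what that could look like (everything here - the class names, the scoring rule - is made up just to make the idea concrete; it assumes the Python `cryptography` package for Ed25519 signatures):

    # Hypothetical sketch: authors sign their citations, and "trust"
    # accrues to an author as verified citations to their papers pile up.
    from collections import defaultdict
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    class Author:
        def __init__(self, name):
            self.name = name
            self._key = Ed25519PrivateKey.generate()
            self.public_key = self._key.public_key()

        def sign_citation(self, cited_paper_id):
            # Binds this author's identity to the claim "I cite this paper".
            return self._key.sign(cited_paper_id.encode())

    def trust_scores(citations):
        # citations: iterable of (citing_author, cited_author, paper_id, signature)
        scores = defaultdict(float)
        for citer, cited, paper_id, sig in citations:
            citer.public_key.verify(sig, paper_id.encode())  # raises on forgery
            # Naive rule (pure assumption): a verified citation gives the cited
            # author a base credit plus a fraction of the citer's own standing.
            scores[cited.name] += 1.0 + 0.1 * scores[citer.name]
        return scores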


There's already a "trustability" factor: it's called a name. When I search for a new paper and find something interesting, I look at the first author, the name of the PI, and the name of the university or laboratory associated with this paper.

If I'm familiar with the work of the first author, then I already have an idea of how much I can trust his/her work. If I don't know this person, then I look at the coauthors and especially the PI. If the PI is a heavy hitter in my field, then there's my trustability factor right there. If I don't know the PI but I know the laboratory or university they're affiliated with, then there's my third line of a trustability metric.


You don't consider the journal? You're a better man (or woman) than I am, if so. Otherwise agreed, reluctantly. This implicit trust is frequently manipulated to relax standards by experienced PIs, and in biomedical work that is a source of numerous problems.

In an ideal world, work from an unknown author would receive the same scrutiny as that from an established lab, and vice versa. Obviously this isn't the case, but the further we drift from it, the more likely we become to see sloppy results wasting everyone's time.

It can't just be about the narrative. That simply isn't science. That's storytelling, and confusing the two has caused a great deal of harm to science in the public eye.


I'd be lying if I said the journal didn't matter, strictly speaking, but in general, as long as it's a "real" journal (as opposed to one of those obviously fake journals I keep getting spammed by), I don't really care.

Don't get me wrong though, just because the paper has a heavy hitter as the PI or even as first author, doesn't mean I treat the content with less scrutiny. It's more that I know my time is extremely limited so I'm more willing to spend more time dissecting a paper from an established first author or PI than an unknown person (unknown also meaning that it's not a paper someone recommended to me or a paper someone cited in another work).

It's more like, I'll _make_ time to read papers with certain authors on them because I know that they do good work. I'll also _make_ time to read something if it's published in Nature et al., for example. But a random paper from a random author in a random group? It's not that I don't trust this person's work, it's more that I'm unfamiliar with it and there are other things I could spend my time on. I'll download the paper, throw it on my iPad, etc., and I'll get around to it eventually.


The problem is not about proving your identity; it is about having the facilities, funds, and connections for publishing papers, which the gatekeepers seem to have. Cryptographic signing solves a different problem - and it is the least concerning one.


Yeah, it also means minorities can be silenced via the majority. While this is good for protecting against incorrect/poorly researched work, it can also prevent novel ideas from gaining traction. Though, this problem exists in the system we have now.


A fully open system can have a ranking on articles and reviewers (voters). Wikipedia has no ranking on articles and no rankings of contributors/reviewers (at least initially). So these are different systems.

I'm convinced that random tourists' votes can't have the same weight as those of recognized experts in the field, who know the subject well and whose judgments are valued in both form and content. I'm convinced of this because it is the fundamental principle of referee review.


Had the same kind of idea - apparently "Facebook for Scientists" is a thing.

http://www.sciencemag.org/careers/2014/08/researchgate-faceb...

https://www.researchgate.net/


researchgate is a fucking cancer. Dark patterns rule there


I've got a simpler solution - a scientific paper should not be considered valid until another team replicates the findings. That would quickly get rid of all of the fake results, plus it would weed out all of the research that nobody really cares about. Also require that all data is shared, and that until your results are verified or replicated by a team from another university/country, you don't get any more funds.


That's not going to work for Astronomy, or results of multi-year studies.


Long-term studies or not, it would be useful to have an up-to-date system that clearly indicated whether the results of some particular study had been replicated, how many times (if at all), and whether there were any conflicting results between studies.
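Even a very simple, public record per study would go a long way - something like this sketch (field names entirely hypothetical):

    # Hypothetical record for tracking the replication status of a study.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ReplicationAttempt:
        team: str
        outcome: str  # "replicated", "failed", or "inconclusive"
        notes: str = ""

    @dataclass
    class StudyRecord:
        doi: str
        attempts: List[ReplicationAttempt] = field(default_factory=list)

        def status(self):
            outcomes = {a.outcome for a in self.attempts}
            if not outcomes:
                return "never replicated"
            if "replicated" in outcomes and "failed" in outcomes:
                return "conflicting results"
            if "replicated" in outcomes:
                return "replicated"
            if "failed" in outcomes:
                return "failed to replicate"
            return "inconclusive"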


That is what Cochrane does in the UK. And indeed it is great.


It will, if you require that the data is shared and can be analyzed by another group. In some cases it will slow down progress initially, but I think in the long term we would get much better results.


You're already moving your goalposts. Your first post said "replicates the findings". If I simply post my results and someone else looks at the data and agrees with me, are they really replicating my finding, or did they, oh what's the word… peer review my work and agree?


I really dislike the idea that "Your code runs on your data" is a form of replication.


It's a minimum standard and one that many published works cannot meet. Universal enforcement of this minimum would itself be a big step forward.


It is a minimum standard, and a useful one. It mostly annoys me when certain groups of advocates seem to treat it as an end in and of itself, and as evidence that something is "repeatable" and that this will end the problems in obtaining scientific evidence.

I come from a field where something isn't considered reproduced until it's also found in an entirely different study on a different population.


Oh, same here. Two of my current manuscripts have now been replicated in 3-5 independent clinical cohorts. The thing is, if the field won't even recognize that "ability to produce the same outputs given your inputs" is critical, good fucking luck with things like "actually replicates in a separate population".

Neuroscience is particularly horrendous about this.


You can fabricate data nearly as easily as you can fabricate results. It is easier to detect fabrication from raw data than from final results, but fraud adapts to the methods used to detect it. If you create a new way to detect fraud, fraud will change to avoid that method. Detecting fraud is an arms race.


At some point it becomes easier to just run the damned experiment. Similar to buying a safe good enough that cracking it isn't worth the trouble for what's inside.

This would be a good first step. "Photoshop for biologists" (as a workshop) is very much a thing.


What will you do for a study of a national birth cohort?


Clone them?


I personally prefer reproducibility over peer review.


I'd be loath to choose. That said, reproducibility is the part that's obviously broken in many cases. Peer review is broken in more obfuscated ways.


I personally prefer not choosing between the two.


Peer review is mixed up with a reductionist structure. Each scientific area is specialized, and articles are reviewed by people in those specialized fields. Often they find something that is not entirely within their field of knowledge, because they are trying to "science" something new.

So they can produce theories that look like a solution from within their own field but are not valid in another field, and the peers will not see this problem. Another problem is that engineers who try to apply this science run into practical problems, but because they are seen as "of lesser knowledge", their practical criticism is often rejected.

The best example I know is the "theory" that magnetic fields can bump into each other, producing energy. In this model the fields are made of flux lines, which can bump into other flux lines. The flux lines will then break and reorganize, producing energy.

Yet, as I write this, you may already think that this is bullshit, because flux lines are imaginary lines used to describe a magnetic field.

Now, with this in mind, look at magnetic reconnection. This is a theory made by scientists specialized in astronomy, not in electromagnetism.

I believe this problem exists in every (specialized) area of science.


We could allow everything to be published and then use a continual monitoring approach. After all, the limiting resource these days is not paper but reviewers' time, so let's distribute that. What is wrong with a model like the one HN uses for open review and comment on work? Even a voting system to help identify the most 'important' works.

Edit: Haha great minds chmike


Hypothesis: peer-reviewed articles aren't rigorous

Prediction: a flawed article will pass peer-review

Testing: https://svpow.com/2013/10/03/john-bohannons-peer-review-stin...

Analysis: Peer review is not rigorous


Isn't there a standard just below rigour, though, that is still worth checking for - something like 'no obvious errors within the current paradigm'?


I'll preface by saying I'm only a sysadmin with a love of science, but I learned the most about peer review and scientific publishing while working in biotech at a genetics company. At some point I realized I actually had to have some understanding of the science to properly admin it, and began reading lots of papers, and I slowly started to realize just how bad many papers are.

There are a few key issues:

1. Scientists who "collaborate" with other scientists but do a small fraction of actual work get their names on papers as number fodder. Anytime I hear "I've been published in over 1000 journal articles" now I generally become more skeptical.

2. Lack of reproducibility. Not only in the methods and the documentation of the methods, but also the fact that most things just simply aren't even tested by a third party.

3. Publishing. Publishers have for far too long locked up information the public deserves to know, which is bad enough, but then they do a bad job of it and allow bad science in. This is the largest part of the problem, IMHO, because they created the situation that semi-forces scientists into questionable paper-writing tactics.


There's a peer review scandal currently underway involving a top economics journal.

https://gborjas.org/2016/06/30/a-rant-on-peer-review/


www.Examine.com does a great job analyzing nutrition/supplements studies.



