1.) You have a lot of nice tech, but think long and hard about the social dynamics and how you can get from A->B. By far, the tech will be the easy part. Convincing established scientists and up-and-coming scientists who need publications in quality journals is much, much harder. As another comment says, try finding a niche first and build out. Particularly some less established fields.
2.) You mention hiring engineers, designers, devops, QE. This points to solving a social problem with technology. Consider marketing, or even social scientists who can guide you in understanding the communities you are attempting to influence. Become embedded in those communities, build trust, attend (and maybe sponsor) conferences, and then branch out.
3.) Sort of a pet peeve of mine (among many): You talk about making the studies accessible to the lay public (in terms of understandability). I feel this is a bad trend. Scientists need a way to talk to each other in their own language, without needing to lay out all the context every time. Academic publishing is that avenue. I have published many articles that will not be understandable to anyone without at least 2 years of graduate school in the field. They aren't groundbreaking, but can still be important. Science is very incremental in many areas. Although I 100% agree that it should be open and free.
For the lay public, there are always books, blogs, and pop-sci news that put it into context.
Anyway, I wish you success (truly, we need improvement over the status quo). It's easy for people to criticize, but think of all the things we don't criticize (which means we approve of them!)
On your point 3.
I don't disagree; I just observe that most scientific disciplines enforce journal writing styles that are more about conforming to the norms of the discipline than about any other benefit.
Stuffy science can't answer this: why would a YouTube video with a DOI not be an acceptable form of publishing? (Ignore the archiving issue for this discussion.)
You don't have to tone down content complexity; you only see that in sci-comms on social media because they focus on engagement rather than just using the platforms for their network effect.
> Why would a youtube video with a DOI not be an acceptable form of publishing
I can answer that, at least for myself. For some material video might work, but text has a much higher level of precision and information density than video. When writing an article, I will often edit sections many times until each has the right amount of information, references, and flow. That is much harder in video, and scientists don't know much about video editing as it is...
Also, when learning something, it typically takes multiple articles. I regularly have 5 articles open on my desk (yes, printed out), where I'm cross referencing equations, methods, etc, with one another. That is much harder to do on a computer, let alone with video.
Again, maybe for some fields it makes sense (maybe?). But it's hard to beat ease of use of text.
(Also, there's nothing wrong with conforming to a particular style. It's about consistency across authors, which helps the reader. Software projects have style guides, don't they? As well as traditional media outlets. Why shouldn't scientific publications?)
> Also, when learning something, it typically takes multiple articles
This is where I think video would be most helpful, actually. To relate it to software, when I'm learning a new language or a new tool, a video helps me navigate and learn my unknowns. But when I'm already an expert on a topic, video is intolerable because it takes too long to get to the point and often doesn't have the information I'm looking for.
Likewise for science, a video recording for materials and methods would probably be incredibly helpful for replication purposes. Also, review papers might lend themselves better to video format because their purpose is to summarize existing research in a way that's digestible to someone relatively new to the field. It doesn't fit all purposes, but it would make the science more accessible.
About videos: I would argue they have an overlapping niche with (either in-person or virtual) conferences. In the optics community, SPIE and OSA do publish recordings of some (mainly hybrid) conferences. Those videos are indeed useful for learning about quasi-real-time works in progress from fellow colleagues (in slightly more depth than the PDF abstract), and they save you from traveling.
Please not video. Video is a horrible way to communicate anything non-trivial: it's slow & difficult to navigate around; it's difficult to extract from (eg. I can't just copy a reference); it's difficult to skim; and it's very time-consuming to edit/revise.
Sure, it's not easy to write good documents, but that's half the point - good documents take effort and time to create. The world is full of very low-value papers. If the author isn't willing to devote the time and effort to write well, I'm unlikely to devote my time to reading something that is probably trash!
Many of video's advantages are actually disadvantages. Video is easy to produce, so the web is filled with low-value video. Video is easy for low-literacy people to produce, but scientific literacy is less about language and more about thought (I'm perfectly happy to read an idiosyncratic or ungrammatical paper, if it is clear and the contents are valuable).
Scientific writing should be all about communicating with the reader. So unless a video is better for me, the reader, stick with text. Please.
A reference would be the video referring to another paper (or video). That's difficult to extract from the video unless the publisher is going to fully annotate it - and if that's the case, why not just publish the annotations and images? Much faster to read!
There's already a peer-reviewed video-oriented journal: the Journal of Visualized Experiments http://jove.com/
The downside is of course it's not open-access by default, and it can be expensive even if you don't pay for open-access since they'll send a freelance videographer if you don't have equipment/time/confidence to make the video yourself.
On the career / publishing perspective: I think you're overlooking the fact that an upload to such a peer-review platform would usually be considered non-archival, so the paper could still be published in a prestigious journal. It's basically a pre-print server with a comment function (which already exists in some form, btw: researchgate.net), and most journals have accepted that they can't stop people from making their papers available through such channels.
The bigger threat to this model is probably places like arXiv/SSRN adding similar functionality, but since this seems to be a non-profit project, it would probably mean mission accomplished.
Someone cribbing our approach would be mission partially accomplished.
The idea is to displace the journal system and advance open science and academic publishing. If it accomplishes that goal, mission accomplished. I'll be a little sad if I didn't get to participate, but the whole world of software development awaits and there are other neat projects to work on. If they crib our approach, but execute poorly and don't succeed in opening publishing, that's not mission accomplished, but there's no reason we can't move forward and execute better in that scenario. If they crib the approach and show why it doesn't work, that's also mission accomplished. It shows that this isn't the right solution to the problem.
Or maybe we succeed - it remains to be seen. Any of those outcomes - cribbed and failed, cribbed and succeeded, or we fail/succeed in a way that clearly shows whether the concept works - we would consider mission accomplished.
I don’t disagree that 3 seems valuable, but at the end of the day scientists are just people and should not shield themselves from having to validate and communicate their work, given its impact on normies.
Consider that there are still plenty of ways for scientists to communicate undistracted; a site such as this does not mean other forums are obsoleted. Private channels will still exist.
1) Yep. This I know. The tech is the easy part, by far. Any suggestions on which fields might be most open to this approach, and where to approach them?
2) Good point. At the moment, there is a lot of technical work to do, so that's where my head is. But if we get big enough that we can afford it, community moderators - who can help guide the community towards a collaborative approach and gently nudge back some of the egos I've already been told (many times over) are a problem - were already on the list. Social scientists are a great suggestion that I had not thought of.
3) This may be an area where we disagree - at least partially. I understand the need to talk to peers in a mutual language, and I'm not arguing against that. My thought is that this platform could help provide space for translators. Papers and responses would be in the technical language of the field, but I've thought about including a (heavily moderated) peanut gallery below for translators to help the lay public understand the work.
One of the issues that has been front and center in the pandemic is that a lot of bad vaccine science was easily accessible to the public, and when it was done intentionally by snake oil salesmen, intentionally made easily understandable. But the good stuff was locked up behind paywalls or hard to understand. There were translators on social media who did their best to answer the concerns but there weren't enough of them. There are a lot of people not currently in academia who have enough of a foundation to tell snake oil from the real deal and who could help translate, with access. Right now, they don't have access. So you wind up with PDFs of "1000 papers showing why vaccines are harmful" getting shared around and no easy way for people adjacent to the communities where they are being shared to counter them.
Additionally, science is supposed to be an open process - we're supposed to trust it not because of appeals to authority, but because, in theory, we can replicate the reasoning and experiments that got us to our current understanding. Now, of course, replicating the experiments is mostly impossible for the lay person at this point, but someone who's interested and has the time should be able to follow along. We can't say, in the same breath, "You should trust science, because it's open and you can replicate it" and "Oh, you can't understand this." We need to create space for scientists to talk to each other, and for translators to help the public understand and follow the conversation in real time.
We would all benefit from a greater understanding of all aspects of science, and from the increased trust that comes with understanding.
I find the idea that papers are judged by a simple up-vote/down-vote terrifying.
Popular academics will post their new paper on Twitter, which will immediately lead to a round of upvotes from their friends and followers. People who are currently out of favour will be downvote-bombed.
I see no suggested way of stopping this -- and I hope no one thinks the answer is that academics are "above that".
I'd suggest changing from "people may leave a comment if they wish when they downvote" to "any vote requires at least 200 words of justification". In general I think it's important reviewing requires thought and justification.
This is helpful feedback, and requiring responses with votes was already something we were pondering. And I'm increasingly leaning in that direction.
One thing that wasn't covered in the blog post was the intention for downvotes to cost reputation, same as on SE right now. That should help disincentivize brigading to some degree.
The field system may also help counter that tendency to some degree. By choosing which fields you publish in, you can choose the subset of people able to review and vote.
But there is also a degree to which we will try to counter this with culture. The cultures of internet communities can be extremely sticky and resistant to change, both good and bad. Just think about Reddit vs StackExchange: roughly similar systems (though with very different goals and guidelines), very different cultures. If we can establish a strong culture on the platform from the get-go - a culture of low-ego collaboration and seeking good science above all else - that can potentially help counter some of the worse tendencies of wider academia as more and more of it moves onto the platform. At least in the early days, before we can afford active moderation. In the long run, active moderation, and tools for community moderation, can help as well.
I don't get the positive reactions. This sounds like: let's do to scientific publishing what social media have done to society. That is, make it worse. For a fistful of dollars.
- Academics advance based on their publication score. They will avoid your platform if it doesn't help them get grants or tenure. If they've got it, they will avoid it even more.
- Only a few people will put time in reviewing and handing out points, where unfortunately reviewing means: three seconds of scanning and gut feeling. You won't be able to keep even the most blatantly fraudulent papers out.
- There will be a few good articles that score well, and you'll defend your platform by pointing that out, but below that there will be a mass of highly rated papers that don't deserve it. They'll be there because a group with an agenda put them there, or because groups actively keep papers of "competitors" down.
- As said by others: the "average citizen" isn't part of this. Never has been, and doesn't need to. Open Access is for fellow scientists and very specialized citizens.
- "With over 10,000 academic journals, it’s impossible for the lay public to track which journals are reputable and which are not." That's not true. Most researchers only look in a small number of journals and can distinguish wheat from chaff. But you won't have 10k journals: you're aiming at millions of reviewers.
- Why use voting by god-knows-whom if you can use the same "trust" mechanism that's used today: citations? Because you'll only become a record keeper. There won't be a need to fund your system.
"The ultimate effect of pay-to-play is that the traditional peer review and refereeing process has broken down. If a dishonest researcher gets rejected from a reputable journal, they can take their paper to a pay-to-play journal and have it published there. With over 10,000 academic journals, it’s impossible for the lay public to track which journals are reputable and which are not. As far as the public is concerned, a published paper is a valid paper."
Ok, but I take some issue with this.
Most scientific papers are not aimed at the general public, but at experts working in the specific area that the paper covers. Also, often these papers are not even understandable by a PhD outside the area, let alone a lay person.
Which journals and authors are reputable are known to experts in the area. "Pay to play" papers published in obscure journals will not get much attention or be taken seriously.
Agree. When I was in graduate school the "bad journals" were well known. The pay to play journals are very cleverly disguised, but it's extremely hard to accidentally publish to or cite from one of these journals.
There are many, many problems with the almost cult-like atmosphere of reviewers in good journals but pay-to-play isn't one of them.
So here's the experience driving that observation: I'm adjacent to a number of anti-vax communities and during the pandemic (and before) there were PDFs of "1000 papers showing why vaccines are dangerous" being passed around. Honestly, snake oil salesmen don't even have to go through pay to play, they can just self publish and the effect is the same. If it looks real enough, a lot of people don't have the background to tell the difference. They look at the abstract and the conclusions and go from there.
If we're going to beat that kind of misinformation, the real stuff needs to be open and available so that translators can help people understand it. There were people in academia doing this on social media during the pandemic, but we need many, many more of them. There are a lot of people outside of academia who have enough of a background that they could help - if only they had access.
Edit to add: Also, I've heard from some folks in academia that they've been bitten by open access journals that started out well intentioned and turned into pay to play with bad reputations. And that in some fields, it is hard to track which ones are good and which ones are going bad.
Yes, but the point of publishing is not to inform the public or media - the point is to communicate scientific results to others working in one's field.
Expecting it to be well explained to the general public is like asking all patches posted to LKML to include a lay-language explanation of exactly what the patch changes, without using specialized CS terms.
You are correct that they shouldn't be intended for the public to take seriously. I think one issue is that the news media frequently spins these articles, so the public ends up needing to decipher the accuracy.
Hey HN, I've had this idea for an open scientific web platform for years. It seemed like such an obvious thing to try. I kept waiting for someone else to build it, but to my knowledge no one else has. I finally saved up enough to take some time away from work and build a proof of concept. I'm looking for feedback on the PoC from scientists, scholars, researchers and academics to try to figure out if there is merit to this idea. I would deeply appreciate any and all feedback, and help getting this request in front of more of the target audience.
Consider finding an academic niche in which to really make this product sing; a community that is receptive to the idea. If the PoC can be shown to work in one community, neighboring fields can then point at it and say, "see, this thing is working over in community X. Journal Y is going to take three months to get this reviewed, and this paper is kind of a throwaway -- let's try X and see how it goes."
arXiv started in a narrow niche that absolutely needed it, perhaps the same will happen here.
Long-term, I'm not sure how this system will pay for itself. Keep costs low and dig in for the long haul. Academia can have very low switching costs (a professor can try your service on a whim), but has enough inertia that it can take decades to turn the ship (deans need to know a prospective hire's impact factor in top-tier journals).
Also, as a referee -- pay referees what their time is worth. For real. You'll get real good referees if you do.
The other side of it is reputation. Publishing in a questionable journal (even if it's actually a good journal) can be a black mark on your career. You not only have to get over the hump of at least breaking even but also convince tenured professors stuck in their ways that your journal will gain and keep a positive reputation.
This task is insurmountable for the most part, at least with the current way academia works. Current journals have been around for decades, sometimes even reaching the 100-year mark. Back in ye olde days it was easy for anyone to publish anything (even non-academics). These days these journals are so deeply entrenched in the minds of academics that it would be like convincing 10% of Fortune 500 companies to switch from Excel to your new cool software. Even if it's better, they won't, because of this mental block.
Could the solution be "don't actually tie it to the journal"? Have the journal treat itself more as an implementation detail than as an end in itself; you didn't publish a paper in the Journal of Lasers, you just published a paper about lasers.
This is still a mental shift for academics to get over, but I wonder if it's a more feasible one.
I'm not saying to _not_ try it. It's just going to be a dumpster-fire-money-pit that has an extremely high probability of either bankrupting you or producing nothing of significant value. This isn't a shot at OP. The best way to force change here isn't yet another journal, but rather fix the access issues of current journals, convince them it's a good idea to be less exclusive (ever noticed certain initials tend to be published more than others?), and moreover stop the university-journal death-bond that currently is in the market.
> Consider finding an academic niche in which to really make this product sing; a community that is receptive to the idea. If the PoC can be shown to work in one community, neighboring fields can then point at it and say, "see, this thing is working over in community X. Journal Y is going to take three months to get this reviewed, and this paper is kind of a throwaway -- let's try X and see how it goes."
> arXiv started in a narrow niche that absolutely needed it, perhaps the same will happen here.
That's very similar to my thinking - I'm actually hoping to start with the file drawer problem. When we're ready for open beta, put out a call for file drawered papers. It's also been suggested that we start as pre-print, while being clear about the ultimate intentions to become a journal, and go from there. Maybe even reach out to the existing pre-print servers to partner with them in some way.
> Long-term, I'm not sure how this system will pay for itself. Keep costs low and dig in for the long haul. Academia can have very low switching costs (a professor can try your service on a whim), but has enough inertia that it can take decades to turn the ship (deans need to know a prospective hire's impact factor in top-tier journals).
Noted. The thought is that it could be funded through donations, grants, and eventually by the institutions if it gains enough traction. But yeah, it's becoming clear that grants at the beginning may be the only way. We'll see how much we get in terms of donations once we hit open beta.
> Also, as a referee -- pay referees what their time is worth. For real. You'll get real good referees if you do.
...well... the idea is actually to crowdsource refereeing and review. And for people to do it because, well, it needs doing. Everyone wants good reviews and everyone knows refereeing needs doing, so we ask everyone to share the job equally and try to make it as easy and low-friction as possible.
The pre-publish review system is purely editorial review, aimed at helping authors prepare for publishing. The refereeing system is post-publish, it happens through the voting.
> Here other scholars who have enough reputation in any of the fields you tagged the paper with can see the draft and offer feedback on it.
Speaking for myself, I don't trawl through preprint servers looking for papers to review.
As it is, editors have a hard time getting people to review, and even after agreeing to review, to get the review back in reasonable time.
> This system gives scholars an enormous amount of control over who they solicit feedback from.
How does it handle peer review rings, where authors collude with reviewers to get bogus positive reviews? This is already an issue now.
> It gives reviewers an incentive to give solid, constructive review feedback - and rewards good reviewers for their efforts with recognition of their contributions.
It offers reviewers an incentive to write a review that the paper author likes. I expect if reviewer writes "this paper duplicates work by Penzias & Wilson (1965) and offers nothing new - reject" then the author will reject it. While "Fix the typo on page 4, otherwise, go ahead and publish" will be accepted - even if the paper really is a waste of time.
It's a lot easier to get surface-level proofreading responses than substantive domain-specific responses.
> Down voters will be strongly encouraged to post a response explaining their downvote.
Why aren't up voters strongly encouraged to post a response explaining their upvote? Why should upvotes not be equally visible?
What's the negative consequence should someone not provide a response?
> It is not a pre-print server. It is an attempt to replace the journal system with something open to its core, scholar lead, and collectively managed by the scholar community.
Any reason to not develop this as an overlay journal on top of existing pre-print servers? Thus reducing a lot of the costs?
Given PubPeer, and PeerJ, and eLife, and overlay journals, and Publons, and all of the other attempts at breaking the current system, why would someone choose this one?
Hmm... well Peerage of Science was an overlay on top of the journals. It was basically a system the journals would pay for to mediate their own review. Maybe similar tech-wise, but very different goals.
> Speaking for myself, I don't trawl through preprint servers looking for papers to review.
> As it is, editors have a hard time getting people to review, and even after agreeing to review, to get the review back in reasonable time.
Could we change that culture? The hope would be that we could build a culture of people helping each other review, for the simple reason that they want that help themselves when it comes time for them to publish. With the reputation being more of a bonus.
Keep in mind, review here is purely editorial feedback. It is not refereeing - that happens post publish through the voting system.
> How does it handle peer review rings, where authors collude with reviewers to get bogus positive reviews? This is already an issue now.
See above - review is not refereeing. It could certainly be an issue in the refereeing (the votes), though. The hope is that the overall weight of the community would override those rings. And in the long term we might even be able to identify them through analysis of the database. If we see the same group of people always upvoting each other's downvoted papers, we can flag that and take corrective action.
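To make that concrete, here's a minimal sketch of what that database analysis might look like. Everything here - the vote record layout, the names, and the thresholds - is a hypothetical illustration, not anything decided:

    from collections import defaultdict

    # Hypothetical vote records: (voter, author, paper, value),
    # where value is +1 for an upvote and -1 for a downvote.
    votes = [
        ("alice", "bob", "p1", +1),
        ("bob", "alice", "p2", +1),
        ("carol", "bob", "p1", -1),
        ("carol", "alice", "p2", -1),
        ("dave", "bob", "p1", -1),
        ("dave", "alice", "p2", -1),
    ]

    def paper_scores(votes):
        # Net score per paper.
        scores = defaultdict(int)
        for _, _, paper, value in votes:
            scores[paper] += value
        return scores

    def suspicious_pairs(votes, min_mutual=3):
        # Flag pairs of users who repeatedly upvote each other's
        # net-downvoted papers. The threshold is a placeholder.
        scores = paper_scores(votes)
        upvotes_on_bad = defaultdict(int)  # (voter, author) -> count
        for voter, author, paper, value in votes:
            if value == +1 and scores[paper] < 0:
                upvotes_on_bad[(voter, author)] += 1
        flagged = []
        for (a, b), n_ab in upvotes_on_bad.items():
            n_ba = upvotes_on_bad.get((b, a), 0)
            if a < b and n_ab >= min_mutual and n_ba >= min_mutual:
                flagged.append((a, b, n_ab, n_ba))
        return flagged

    print(suspicious_pairs(votes, min_mutual=1))  # [('alice', 'bob', 1, 1)]

A real version would obviously need statistical baselines rather than hard thresholds, but the point is that the raw vote graph makes this kind of pattern visible.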
> It offers reviewers an incentive to write a review that the paper author likes. I expect if reviewer writes "this paper duplicates work by Penzias & Wilson (1965) and offers nothing new - reject" then the author will reject it. While "Fix the typo on page 4, otherwise, go ahead and publish" will be accepted - even if the paper really is a waste of time.
> It's a lot easier to get surface-level proofreading responses than substantive domain-specific responses.
That may turn out to be true, but the thought is that the post-publish refereeing system will help incentivize people to take critical feedback seriously at the review step. After all, your goal is to publish something that will be well received. If you get good constructive critical feedback and ignore it, you risk being downvoted to oblivion and losing reputation.
> Why aren't up voters strongly encouraged to post a response explaining their upvote? Why should upvotes not be equally visible?
> What's the negative consequence should someone not provide a response?
It's been asked before and it's definitely on the table. It's something I'd love more feedback on - should responses be required for all votes? Just down votes? Or (as I'm defaulting to) should we follow the StackExchange model of just encouraging responses on down votes?
Seems maybe you would argue in favor of requiring responses on all votes? What's the reasoning?
> Any reason to not develop this as an overlay journal on top of existing pre-print servers? Thus reducing a lot of the costs?
We don't want it to be an overlay, because we want to ultimately replace the system. But it could make a lot of sense to work with the existing pre-print servers. I see no reason why we couldn't start as a pre-print and gradually evolve towards being the final publish, as long as we're clear about the end goal so that we don't just become "yet another pre-print".
> Given PubPeer, and PeerJ, and eLife, and overlay journals, and Publons, and all of the other attempts at breaking the current system, why would someone choose this one?
- PubPeer is just a comment overlay; you can't read the papers there (as I understand it).
- PeerJ is still using the manual model, hence overhead and fees to publish ($1,200) rather than diamond open access. This creates equity issues in access to publishing (a lot of people cannot afford those fees).
- eLife, similarly, charges $3,000 per article. These fees create massive equity issues in access to publishing.
- Publons I wasn't previously familiar with, but it seems to be about just highlighting review work done for the journals, and not anything to do with open access.
None of the attempts to break the current system (that I've found) seem to really be able to replace the journals by doing (almost) everything the journals do. At least, not without just being a journal and facing the overhead problems. The one thing this project doesn't do that the journals do is curate reviews/refereeing, but the idea is that crowdsourcing could actually work better in the long run - though it's definitely going to bring its own set of problems that will need to be dealt with.
But thanks! I hadn't heard of Peerage of Science, and Publons sounds vaguely familiar but I wasn't really familiar with it. I'm constantly learning more about what other efforts have come before and currently exist (there are a lot of them). And thinking through all the questions and counter points is always very helpful!
The problem is doing a high quality review of a 30 page maths paper takes several days. In your system an author could then just reply with "nope", and my review disappears. Alternatively, my "downvote" after I spent two days finding mistakes can be cancelled out by one upvote from their office-mate.
Well, the idea is to attach responses to downvotes. So you spend several days finding mistakes, you write it up at the review stage, they say "nope" and reject your review. You don't lose any reputation. When they publish, you downvote and post your review as a response (you can just copy and paste - you're right, right now it disappears, but we can make sure reviewers retain access to review comments - that's easy). Sure the office mate can upvote, but other voters will see your critical review, see the mistakes you outline and can affirm you.
I guess what it comes down to is this: do you believe the average actor in science/academia is a good actor? I.e., are they honestly trying to be unbiased and advance good science/good work above all else? Or do you believe the average actor is a bad actor (or that the incentive structure is so strong that even well-intentioned actors are corrupted into being mediocre actors)?
If the average actor is a good actor, the system works and work will be rated/rewarded correctly. If the average actor is a bad actor, or the incentive structure is too strong, then this system would absolutely fail in the sort of worst case scenarios that have been described here.
> do you believe the average actor in science/academia is a good actor?
I think you've over-simplified your model. I'll build on your analogy.
There are many roles. As you've argued, the current system is fundamentally broken, which means most if not all of the roles are bad roles.
What happens if a good actor is stuck in a bad role?
My simplistic view of modern scientific publication is it started to reduce knowledge hoarding by coupling prestige to publication. ("I'm sorry Mr. Newton, but as Herr Leibniz published first, he gets the credit with creating the calculus of infinitesimals.")
The loop is now tightly coupled, leaving us with "publish or perish", and with coarse and unreliable publication metrics used to determine career advancement.
As a result, many people deliberately game the system, with methods like "least publishable unit" and "salami slicing". How is the author order determined? Who gets to be on the author list?
Are those the method of good actors? Or bad actors?
To be clear, my goal is not simply to point out that there is a grey area, but rather that there are competing goals. Your use of "good actor" and "bad actor" oversimplifies the issue by projecting it onto the single axis of "good science."
I recently reviewed a paper. It was 20 or so pages long. Nothing about it was all that new - it was about the level of a senior undergrad project. Most of it was a tutorial/walk-through about how to use the software. Had it been the late 1990s, it would have been cutting edge.
My review was something like "this should be a 1-2 page software announcement. Nothing seems wrong, but it isn't worth the time for anyone to read all these details in a journal paper, and it isn't worth my time to review it."
Is the full 20-page paper good science? Bad science? Is it the result of a bad actor? Or an uninformed actor? Should my review be acted upon or ignored? If published, would my post-publication review make a difference?
I might put that in the middle category - the category that might warrant a quick review with just that criticism and maybe a post-publish response, but no voting.
"Bad" here is used to mean dishonest, very poorly done methods, or conclusions way out of step with the actual empiracle results. "Good" is used to mean simply "this was well and honestly done work, with conclusions that may be reasonably drawn from the results". Nothing more. There are massive gradations with in "Good" and a large area between "Good" and "Bad". The intention is for "Good" papers to be upvoted, "Bad" papers downvoted, and everything in between to not be voted on, but to rather receive responses where warranted.
The reasoning behind this is that in a lot of ways we've gotten too into the first discovery pieces of this and forgotten that science works in the aggregate, it must be a collaborative enterprise. Any empirical experiment or theoretical work could be biased or flawed in any number of ways, and in ways which we can't necessarily catch in review. That's why replication is so important, but in so many fields it's not happening now due to ideas about what represents a "substantive contribution to the field". Hence, the Replication Crisis.
So the goal of using such a simple system and metric, in a lot of ways, is to take us back to brass tacks. This is about building humanity's collective knowledge base - together. All else is secondary. So a "Good" actor in that context is someone who does the thing that contributes to humanity's collective knowledge about the world, including but not limited to:
- replicating previous work
- getting a null result
- getting inconclusive results
- helping a peer polish their work so that it's best communicated
- helping a peer catch mistakes or spot unexplored avenues for further study
In a lot of ways, the prestige seeking is part of the problem, because it works against collaboration.
The nice thing about a reputation system where votes are granted for "Good" work in this context, is that it's not zero sum. And indeed, it does a much better job of incentivizing a lot of non-glamorous grind work, because there's a lot of that, and if it's well done, then it merits votes. It probably won't get as many votes in a single go as the ground breaking stuff, but it's a valid path to building reputation.
This system definitely does incentivize "Least Publishable Unit", but... I'm not necessarily convinced that's a bad thing. In software engineering, there are a ton of benefits to breaking up a large knowledge base or problem into its smallest digestible chunks. If all of those "Least Publishable Units" are well organized in a single database, that might actually be a benefit rather than a cost.
By the way, I'm still chewing on your other unanswered comments. You're raising a lot of good points and giving me a lot to think about and I really, truly appreciate it.
> "Good" is used to mean simply "this was well and honestly done work, with conclusions that may be reasonably drawn from the results"
Many high school science fair projects would fall into that category.
Would that be allowed? Or will there be some gating system?
Will you require ORCID?
Will you allow anonymous users? If so, what prevents nyms from being fly-by nattering nabobs of negativism? SO-style permissions based on rep? Being vouched for by other users?
> valid path to building reputation.
I don't know what to do with my SO or HN rep. As others have pointed out, unless this rep helps/hinders career advancement, I don't think people will care.
While I suspect you're still chewing on my comment that if it does help/hinder career advancement, then many will try to game it.
> I really, truly appreciate it
Thank you for that comment. I don't want to come across like I'm trying to rain on your parade.
> Many high school science fair projects would fall into that category.
> Would that be allowed? Or will there be some gating system?
I think that's up to the community to decide. There have been high school science projects that got published before and were considered significant contributions.
The idea is that anyone can register - though they will be asked to use their real name. To be determined how to verify that.
> Will you require ORCID?
> Will you allow anonymous users? If so, what prevents nyms from being fly-by nattering nabobs of negativism? SO-style permissions based on rep? Being vouched for by other users?
We will integrate with ORCID and provide space to link one's account, but my understanding is that ORCID adoption is still limited (growing, but limited), so we don't want to require it.
Yes, it's an SO style reputation permission system - only tied to the fields you publish in. So when you publish a paper, you gain reputation in the fields you tag it with. That then grants you permission to review and referee papers tagged with the fields you have reputation in.
The current thought is to aim to set the threshold so that a graduate student who's published a few times can review, but the refereeing (voting threshold) would be set closer to late postdoc or early professorship. Possibly higher, to prevent labs from attempting to game the reputation system.
What those numbers are is yet to be determined. The current plan is to initialize the reputation system using citations - 1 citation = 1 upvote, essentially. It's not perfect, but it's the closest analog. That would allow researchers with established records to help seed the community and carry over the reputation they've already earned. This won't work without that (because no one will have enough reputation to vote). We've got some analysis to do to figure out where the permissions thresholds should lie (and it might turn out that they need to be in a different place in different fields - which complicates the system somewhat, but isn't unmanageable).
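For illustration only, here's a minimal sketch of that field-scoped reputation idea. The threshold numbers and the one-point-per-citation seeding are placeholders, not decided values:

    # Placeholder thresholds - the actual values are yet to be determined,
    # and may well differ per field.
    REVIEW_THRESHOLD = 100     # roughly "grad student with a few papers"
    REFEREE_THRESHOLD = 1000   # roughly "late postdoc / early professorship"

    class Scholar:
        def __init__(self, name):
            self.name = name
            self.reputation = {}  # field -> reputation points

        def seed_from_citations(self, citations_by_field):
            # Initialize reputation as 1 citation = 1 upvote's worth of points.
            for field, citations in citations_by_field.items():
                self.reputation[field] = self.reputation.get(field, 0) + citations

        def can_review(self, field):
            return self.reputation.get(field, 0) >= REVIEW_THRESHOLD

        def can_referee(self, field):
            return self.reputation.get(field, 0) >= REFEREE_THRESHOLD

    s = Scholar("Dr. Example")
    s.seed_from_citations({"optics": 1200, "machine-learning": 40})
    print(s.can_review("optics"), s.can_referee("optics"))  # True True
    print(s.can_review("machine-learning"))                 # False

The key design point is that reputation, and therefore review and voting permission, is scoped to fields rather than global.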
> I don't know what to do with my SO or HN rep. As others have pointed out, unless this rep helps/hinders career advancement, I don't think people will care.
Yeah, at this point my SO rep seems kind of pointless too. That said, I'm pretty sure I got one of my first software jobs on the back of my SO rep a decade ago. They kind of squandered that aspect of the site by mismanaging the job board/CV aspect of it. In spite of that, SO still works. It's still filtering content appropriately, and it's still a very supportive community where you can go and get the help you need. It's still one of the best sources of technical information on the internet. So even though the reputation is little more than internet points, it still seems to be doing its job. Which, to be fair, was always first and foremost to be a mediator for the permissions system.
Where Peer Review is concerned there are kind of two modes of operation to think about (...okay, maybe four).
There's the "Trying to get traction, no one cares about the reputation yet." At that point, the reputation can't do much in terms of incentives (beyond what any internet point endorphin system does). But it's still the system that facilitates matching papers with the appropriate reviewers and referees, it's necessary. It's what allows this to be crowdsourced - that plus the evolutionary field system.
Then there's the "Established, the institutions have started to take the reputation seriously and count it as another citation metric." At that point, it can start to provide incentives. And that's where we can start using it to help encourage good science and counter some of the negative incentives currently built into the system - things like offering reputation bonuses for replications.
The middle ground might be "Just starting to gain traction, where it's not meaningless, but not super meaningful."
I suppose there's a fourth option "Succeeds beyond my wildest dreams" where Peer Review becomes the standard place to publish and we manage to get the entire literature into the database. At that point we can do all kinds of cool things - automating literature reviews, detecting and flagging P-hacking, incentivizing replications.
Somewhere in this growth curve, we hopefully gain enough funding to hire some folks to help detect and counter the bad actors. But as long as they are the minority, the system should be self correcting to a large degree.
My thought for field evolution is that anyone can propose a new field, but they have to propose a parent, and some percentage of the users in that field with referee reputation have to approve it (a quarter? a tenth? with a minimum base number?). That's to prevent small groups with agendas from creating echo chambers.
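As a rough sketch of that rule, with the fraction and the minimum left as the undecided placeholders mentioned above:

    def field_proposal_approved(approvers, parent_field_referees,
                                fraction=0.10, minimum=20):
        # A new child field is created only if at least `fraction` of the
        # parent field's referee-level users approve it, with an absolute
        # floor of `minimum` approvals. Both numbers are placeholders.
        needed = max(minimum, int(fraction * len(parent_field_referees)))
        valid_approvals = set(approvers) & set(parent_field_referees)
        return len(valid_approvals) >= needed

    # e.g. a parent field with 500 referees needs max(20, 50) = 50 approvals
    print(field_proposal_approved(approvers=range(60),
                                  parent_field_referees=range(500)))  # True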
There's the counter point of adherents of popular ideas can sometimes stifle valid challenges to those ideas at first. But I think ideas tend to be popular (or should be) because they have a certain weight of evidence and theory behind them. And it should be challenging to overthrow those at first. As long as the average actor is a good actor - ie interested first and foremost in the truth and good science and willing to affirm good science that goes against their biases or supports findings they may not like - then this will work out in the long run.
Yes, the part I'm still chewing on is that first stage. Getting traction is the hard part. If it's big enough that people are trying to game it, then hopefully we have enough funding to work to counter that gaming in the areas where the system can't self-correct. There's a degree to which trying to foresee what those ways will be now is a little bit of premature optimization. The gamers will come up with things we can't think of now, and things that seem like they would be obvious problems might turn out to be much smaller issues. Assuming the underlying assumption - that the average actor is a good actor as I defined it earlier - is true. If it isn't, then yeah, this system is fucked and won't work.
> I think that's up to the community to decide. There have been high school science projects that got published before and were considered significant contributions.
I wrote "many science fair projects" to mean many in each science fair, not the rare few that produce novel and publishable results in the scientific literature.
Follow the instructions from a book about science fair projects. The result will be "well and honestly done work, with conclusions that may be reasonably drawn from the results" - exactly as you described. How many publications of "I synthesized aspirin from salicylic acid and acetic anhydride" do you want?
> up to the community to decide
If this becomes a craze among high school science fair entrants, then the community will be high school science fair entrants. It will then be up to them to decide if tenured professors are allowed.
More to the point, I've long found "community" to be a difficult word to understand. How is it different from "users"? Are there users who aren't part of the community? Why aren't there multiple communities? We know there are multiple types of users (eg, HCI's "persona development".)
With no sense of what you want your community / user base to be, you can't tell if you've ended up with what you wanted.
> my understanding is that ORCID adoption is still limited (growing, but limited)
Last fall ORCID tweeted that many/most of the Nobel Prize winners had an ORCID. https://twitter.com/ORCID_Org/status/1445448209782894592 . One of the comments correctly pointed out "surprisingly a lot of researchers have their ORCIDs only as void placeholders required by their institutions to have."
That includes me.
> The current plan is to initialize the reputation system using citations - 1 citation = 1 upvote, essentially. It's not perfect, but it's the closest analog.
There are a lot of E. Smiths in the world. (This is what ORCID is trying to resolve.) Oh, and my name is $NOBEL_PRIZE_WINNER and email address is minecreeper666@washington-hs.state.us.
(Email addresses change as people move between institution. Someone's current email address might not be on any of their papers.)
Do preprints count as publications?
I tried to get around the publication system by having blog posts which were essentially preprints/publications. That failed - there's a disdain for the grey literature.
> But it's still the system that facilitates matching papers with the appropriate reviewers and referees, it's necessary.
Which is why it seems like you could start as an overlay system over existing preprint servers. They've already resolved issues about user ID, audience, etc., which you could use as a starting point.
> That's to prevent small groups with agendas from creating echo chambers.
Real-world example: what PZ Myers refers to as the "panspermia mafia" - "They use their connections to promote a small family of fellow travelers."
> that the average actor is a good actor as I defined it early
Again, I urge you to consider that "good actor" is overly simplistic. If you truly think the system is broken, how are so many good actors not able to change it?
I would say academics are just people, and in many cases they want to be employed and successful.
At the moment, while the system is very bad in many ways, it used to work (in my opinion, I haven't done proper research into this) because:
* People who run "good" journals want their journal to be considered good, so they publish good papers.
* People want to get into "good" journals, so they want to write good papers that will be accepted. The existence of these papers is then used as evidence for promotions and grants.
* Academics want to read "good" journals, so they get their University to pay for a subscription to get physical copies in the library.
Previously this cycle held reasonably well -- it's been broken by the reduced cost of online journals, and journals wanting higher payments.
I think a good future system needs to (somehow) recreate this "virtuous cycle" -- making reviews as important as journals could be a good idea, I could imagine following a reviewer I thought was excellent, and a group of reviewers could create their own "virtual journal".
I would personally trust "Person X who I know thinks this paper is good" much more than "100 people thought this paper was good, 40 thought it was bad, so +60".
The way I'm thinking it would work is that journals can effectively create a team of reviewers on the platform and when you submit a paper, you could request review from their team.
Right now I'm thinking you could choose to request open review from everyone in your fields, review from one or more journals, or both at the same time. The journal editors then give their feedback through the review system, but it's still up to the author to choose to publish. If you successfully satisfy the journal, you get their stamp of approval on your paper when you publish. If not, you can still publish at will, but you risk their review team downvoting you, and you don't get the stamp of approval.
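Sketching that submission flow (all of the names and fields here are illustrative, not a committed design; the "Journal of Lasers" is the hypothetical journal from earlier in the thread):

    from dataclasses import dataclass, field

    @dataclass
    class Submission:
        title: str
        fields: list                 # e.g. ["optics", "metamaterials"]
        open_review: bool = False    # request review from everyone in those fields
        journals_requested: list = field(default_factory=list)
        journal_stamps: set = field(default_factory=set)  # approvals earned in review
        published: bool = False

        def publish(self):
            # Publishing is always the author's call. Journal stamps are optional
            # endorsements; publishing without satisfying a requested journal
            # risks downvotes from that journal's review team.
            self.published = True

    paper = Submission("A diamond open-access experiment",
                       fields=["scholarly-communication"],
                       open_review=True,
                       journals_requested=["Journal of Lasers"])
    paper.journal_stamps.add("Journal of Lasers")  # granted if their editors are satisfied
    paper.publish()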
> I would personally trust "Person X who I know thinks this paper is good" much more than "100 people thought this paper was good, 40 thought it was bad, so +60".
I can definitely understand where that hesitation would come from - especially with experience in places like Reddit, where the crowd is not given guidelines and voting systems decay to the lowest common denominator.
My understanding of the research on the topic is that it actually supports the idea that the judgement of the crowd will often be better than the judgement of any individual -- when it's the right crowd, given the right guidelines. That's the model behind StackExchange and it's worked very well there. And that's the theory behind this (proposed) platform. The reputation and field model is geared towards identifying and creating groups of "the right people" for each topic/discipline/field and allowing people to submit their work to those groups.
It could create a similar virtuous cycle:
* People want to support good science and get good review feedback, so they give good review feedback.
* People want to support and highlight good science, so they upvote good science, downvote dishonest science, and post responses with critical feedback on mediocre science.
* People want their work to be upvoted and to receive positive responses, so they take review feedback seriously.
That's the theory anyway - the jury seems to be very much out on whether or not that can work with academia.
I hate being so blunt, because I love the ideas you're proposing and have advocated for many similar things, both when I was still pursuing an academic career and now from outside of it (albeit in a scientific community where there is a very blurred line except for institutional affiliations).
But you're not addressing the fundamental problem: there's such an over-saturation of PhDs and so few tenure-track positions in so many fields that there is one and only one objective for an early-career researcher: publish, publish, publish. That's how you build the portfolio to get called back for campus interviews for professorships; that's how you build your network and slide into funded grants before building your own, with connections to the program office or review committee that will help you get said grant funded. Although there are many worthwhile activities for an early career researcher to engage in (and, thankfully, a slowly growing recognition of the value of these activities, such as open source development and data curation), only a small subset of them feed into the ultimate goal: a tenured professorship.
The critical shortage of reviewers these days, at least in my field, is partly due to social dynamics since the pandemic, but also very critically due to the perceived lack of value that serving as a reviewer provides while you're in the heat of the rat race. It doesn't sound like you're a practicing research scientist (meaning: you work in a job where producing research and publishing on it is a primary metric against which your success is measured), but you should really talk to some across a diversity of fields. It takes a _lot_ of effort to provide a useful review. Effort that could instead be funneled into one's own research, for greater impact on your career. And if you're the author of a good paper, you don't need the other reviewers to "help" you with the process. It can be tedious and annoying sometimes, but hell - what difference does it make if _one_ of your papers is getting held up? You've got 6 others ready to go by the end of the year, right?
The problem is that it's not a "culture" issue. It's the fundamental practice of the modern scientific enterprise. The incentive structures are highly warped - a problem that many of us run into time and time again when we try to push through open science and reproducible science initiatives (and I'd definitely classify your effort in that group). When the fundamental practice and institutions change, the culture will change.
If you'd be willing to bend and constrain your idea to more of a "Reddit of peer-reviewed literature" and focus on expert intersectional communities like ScienceBlogs, you might have something really novel here. There's a ton of value in the open-sourced curation mechanism. I've been involved heavily with /r/science in the past, and while the ecosystem of sub-reddits surrounding it definitely does a good job of "curating" science, it's not really the same thing as what you're proposing. It's a definite niche that could have a lot of value.
But don't expect a lot of scientists to buy in when you're not offering them anything in return. And again, to make this very, very clear: if your project doesn't help an academic get tenure, I don't see why they'd participate.
I don't see how this system can do that. I stopped caring about Stack Overflow up- and down-votes long ago, I've grown tired of reading stories about the background politics and voting for Wikipedia, I can easily see how the proposed system can be gamed, and I don't see how it can defend against systemic gender, racial, or other social biases.
> for the simple reason that they want that help themselves when it comes time to for them to publish
Which leads to collusion. "You scratch my back, I'll scratch yours."
My reviews are anonymous. I want to make it harder to retaliate against me, and make it harder to use what authority I've gained to influence things, and I trust the editors to mediate.
> review is not refereeing
Shrug. Okay. The question is, why do people want to get points? If it's good to get points, then there will be collusion to get those points.
> the post-publish refereeing system will help incentivize people to take critical feedback seriously
I have not found that system to be all that effective.
> Seems maybe you would argue in favor of requiring responses on all votes?
That is not my argument. I don't know one way or the other. I observed that "transparency" is described as being important for negative votes, but with no reason given why it's not also important for positive ones.
> We don't want it to be an overlay, because we want to ultimately replace the system.
The advocacy objects to a system with "high fees", "paywalls", "pay-to-play", etc.
Preprint servers don't have fees, paywall, or pay-to-play.
Thus, there is no reason to suspect you want to replace preprint servers, nor reason for why it's important to do so.
> None of the attempts to break the current system
How much do you know about eLife and Michael Eisen?
How hard is it to get a waiver for the $3000 fee? "To ensure that eLife’s publication fee is not a barrier to publication we therefore offer a simple way for authors to apply for a fee waiver." https://reviewer.elifesciences.org/author-guide/fees#elife-p...
It's also hard to understand what you mean by "the current system", since I didn't realize pre-print servers were part of it. eLife requires preprint publication to bioRxiv or medRxiv.
> there are a lot of them
My question isn't "how does someone well-informed in all the different options choose yours?" but rather "given that most people aren't well-informed about the different options, why would they think to publish using your project?"
If you ignore the web3/crypto aspect, they’ve got a decent start on what seems like a reasonably similar platform. They do all their dev and community building out in the open, so you can join their Discord and listen in on their community calls, etc. Might at least give you some ideas on what to target or avoid.
I believe all scientific publishing should be open access, but that doesn't address the fundamental problem with scientific publishing - there is just too much of it!
Due to the well known pressures of academia, and of CV writers everywhere, we live in a world where many people (apparently) work on hundreds or thousands of papers; in a world of "minimum publishable work"; in a world where every Masters and PhD is expected to have published. And the vast majority of those papers are almost pointless, low quality, and/or almost content free! (I admit my own publications into that majority.)
There is no way that any human is genuinely across all the literature of their field. And the situation is getting worse and worse, rapidly.
I don't know how we'd do it, but perhaps we could move towards a situation where we have (say) "papers" and "notes" - where "papers" are very high quality and published only when something big and important comes up, and where "notes" are used to record everything else. Someone new to the field, or looking for anything important, need be familiar with (only) a few hundred papers; and all those "new technique", "minor variation", "new species", "confirmation", and "we did stuff" publications would be "notes". Of course, this would depend on a truly scathing response when anyone puts forward low quality "papers".
PS. It was once the case that publications divided along similar lines, as journal papers versus conference papers. That no longer seems to be so much the case.
There's an issue with this "one review fits all" approach.
Scientific communities are often splintered into sub-communities with different views and allegiances.
The scientific community is not a set, it's a graph with different connectedness and weights.
An imho better system would:
* use public key crypto to give everybody a pseudonym under which they can create, sign and publish articles
* a publication graph that is used both as a reference system and a review system;
- a reference is a link from one article to another if it reuses and builds upon it (standing on the shoulder of giants)
- a review is a link from one article to another that passes judgement on or reproduces that article (peer review)
* a sentiment analysis based system that turns the review links into peer-review quality metrics
* a way to delegate judgement from yourself to others that you trust in your community
* a page-rank like algorithm that computes trust relative to your private key and the trust you have assigned or delegated
* and most importantly: a UI that shows you a paper's ranking and relevance based on YOUR personal trust graph and community (a rough sketch of how the last few pieces could combine follows below)
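As a purely illustrative sketch of how the trust-delegation, page-rank, and review-link pieces might fit together (all names, weights, and the reduction of sentiment to +1/-1 are my own assumptions, not a spec):

    # Minimal sketch: rank a paper relative to one reader's trust graph.
    # Nodes are pseudonymous keys; edges carry trust the reader has
    # assigned directly or delegated through others. Made-up data.
    def personalized_trust(edges, seed, damping=0.85, iters=50):
        """Personalized PageRank: a random walk that teleports back to `seed`."""
        nodes = {n for a, b, _ in edges for n in (a, b)}
        out = {n: [] for n in nodes}
        for src, dst, w in edges:
            out[src].append((dst, w))
        rank = {n: (1.0 if n == seed else 0.0) for n in nodes}
        for _ in range(iters):
            nxt = {n: (1.0 - damping) * (1.0 if n == seed else 0.0) for n in nodes}
            for src, links in out.items():
                total = sum(w for _, w in links)
                for dst, w in links:
                    nxt[dst] += damping * rank[src] * (w / total)
            rank = nxt
        return rank

    # "me" trusts "alice" directly; "alice" delegates some trust onward.
    trust_edges = [("me", "alice", 1.0), ("alice", "bob", 0.5), ("alice", "carol", 0.5)]
    scores = personalized_trust(trust_edges, seed="me")

    # A paper's relevance for *this* reader: trust-weighted sum of its review
    # links, with each review's sentiment already reduced to +1 or -1.
    reviews = [("bob", "paper-42", +1), ("carol", "paper-42", -1), ("mallory", "paper-42", +1)]
    relevance = sum(scores.get(who, 0.0) * s for who, paper, s in reviews if paper == "paper-42")
    print(relevance)  # "mallory" is outside my trust graph, so that vote counts for nothing

The point of seeding the walk at your own key is that the same paper gets a different score for readers in different sub-communities, which is exactly the "graph, not set" property described above.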
As an observation, I think "If you're not paying for it, you're the product" applies here.
In traditional publishing, the people writing the papers are the product. Journals compete to get the best papers, and those that got the best papers in the immediate past tend to get the best paper submissions in the future.
In open-access publishing, the readers are the product. Having good papers is only one way to build your product base. On the other side, there are a number of ways to market your product (pay-to-play, for example).
While I like open-access in principle, I also like the filtering from the traditional pubs to help me narrow a reading list.
How would you apply that observation to Wikipedia? That's the business model we're aiming to take - crowdsourced, crowdfunded, non-profit, and community governed.
And the reason for that is precisely because of the mis-aligned incentives in both the pay to access and pay to publish models.
The filtering here would be accomplished by the tagging system and the voting system.
> How would you apply that observation to Wikipedia?
I would notice that the Wikipedia reader base is a multiple of the peer-reviewed scientific-paper reader base. While I don't know what the multiple is, I would guess the range is 1-2 orders of magnitude.
Wikipedia claims its average donation is $15 [1], so a rough guess will imply $100-$1000 average donation for a sci pub model.
That's on the income side. On the spending side: I would guess the major difference in costs would be the scientific editors, which exist in the traditional science publication model.
There are probably large initial marketing costs associated with
1. finding the right target authors and convincing them to send a valuable contribution to the new site,
2. finding serious editors to manage peer review and separate the wheat from the chaff. (edit: I assume this is still part of the process. I think about something like Einstein's special relativity paper, and I'd have to guess it would have been widely downvoted as an internet crank trying to publish crazy ideas -- he did not have a reputation yet.)
I see this as potentially expensive: The level of the first papers will determine how likely it is to succeed. Said differently: If the new site starts by publishing lame work, it's not like someone is going to want to publish there afterward.
In any case: I think it is a tough and worthwhile problem to solve, and I wish you good luck.
The problem is Wikipedia is explicitly rejecting what you want to accept -- original research.
It takes minutes to check that someone's reference says what they claim it says. It takes days to check whether a new piece of independent research is correct.
Having left academia after my PhD, but still doing research in the private sector, I hate the scientific publishing industry wholeheartedly. It is something of the past that is definitely keeping us, as a society, from making progress.
There are more and more initiatives to create new tools around the literature (Semantic Scholar, ResearchRabbit and stuff like that) but fewer aimed at moving the publishing industry as a whole.
(I'm not affiliated with the previous organizations, though I think Semantic Scholar has an API for peer review matching that might be useful in your case).
I don't think scientific publishing is perfect, but I strongly disagree that it's somehow holding back society.
The number one thing that is holding back society, in my opinion, is that the vast majority don't think science is important or useful, and, even worse, think that their opinions are somehow the same as scientific knowledge. Even convincing people in industry to base decisions on data can be hard, let alone convincing them to use something approximating the scientific method.
In short, if people wanted to read science papers but couldn't, I might agree with you, but people don't even want to.
> The platform splits pre-publish peer review from post-publish refereeing. Pre-publish review then becomes completely about helping authors polish their work and decide if their articles are ready to publish. Refereeing happens post-publish, and in a way which is easily understandable to the lay reader, helping the general public sort solid studies from shakey ones.
The author doesn't seem to understand the "referee" role of (associate) editors and reviewers. The core idea is about gatekeeping of a journal title's self-set publication limit in favour of articles that represent a substantive advancement of a field or concept.
However this itself is a problem for many fields of research, since it's a core factor underlying the failure to engage in replication studies and to check if results are valid. Moreover this would tend to encourage researchers with small but useful results to shelve them and NOT publish, since it will create a negative feedback mechanism for such contributions:
> Up votes increase the paper’s score and grant the authors reputation in the tagged fields. Down votes decrease the paper’s score and the author’s reputation in the tagged fields.
Part of the problem is that, like any site which uses an upvote or downvote function, the button will be used to express things other than the intended meaning. "Does not contribute" quickly becomes "I don't like X" (even if it is methodologically valid).
My experience of papers that went to top tier journals is that the peer reviews were detailed, interesting and clearly from people who knew what they were talking about. They were also somewhat harsh and demanding.
It is almost certainly the case that these reviewers would not expend the same effort on a platform like this. They review papers in Nature etc. because they feel that they are participating in something important, at the cutting edge of science. They also, somewhere along the line, feel that doing such reviews will improve their chances of publishing there. Additionally, Nature will offer some reviewers the chance to write an editorial about the article they reviewed. These are strong incentives that successfully coerce busy people. How will you replicate this?
What can happen with these open peer review platforms is that they attract 'hyperreviewers' of dubious quality. This can be seen with publons. Look at the top ranked peer reviewer there: https://publons.com/wos-op/researcher/1217819/kaveh-ostad-al..., who apparently reviewed 1352 articles in 2021. Perhaps this person is actually carefully reviewing 5 articles every working day of the year... then again perhaps not.
So, please know that I am reading all the critical comments very carefully and taking them seriously. I haven't responded to all of them, partly because I don't have good answers to all the criticisms and I need to sit with them for a few days. I have also gotten a lot of positive feedback and encouragement. I'd say it's 50/50 right now. Maybe 60/40 at best. I've got a lot of thinking to do.
I think what it comes down to is this: Do you believe the average actor in academia/science is a good actor? In other words, are they honestly trying hard to collaboratively advance good work and good science and check their own biases?
Or do you believe the average actor is a bad actor (OR that the bad incentive structure is so strong that it corrupts even the well intentioned into mediocre actors at best.)
If the former is true (average actor is a good actor), then this system could function the way it is supposed to and would be self regulating to a large degree. And there are many things we could build into it that would help that.
If the latter is true, then the critics are right and this system will absolutely fail. Catastrophically.
I really don't want to believe the average academic actor is a bad actor, so I'm going to forge ahead for now. But I'm also going to be seeking a lot more feedback for sure. I'm going to read every single critical comment, note it, spend time thinking about the ways in which this could go bad that have been detailed here (some of which I was aware of and thinking about, and some of which are novel ideas). And if I see this heading in those directions, and I can't turn it back on track, I will absolutely pull the plug rather than keep pushing a platform doing harm.
Reviewing is not the cause of the economic problems in academic publishing. The economic problems arise from the cost of copy editing and production editing. That's where the money goes. See https://f1000research.com/articles/10-20
I think the idea is that almost all of the costs like management, lobbying, paywalls, etc. are to be done away with, which that article estimates is at least 50% of the cost. Getting rid of a profit margin by being a public non-profit org slashes another 30%, and so presumably a system like this may only cost a few hundred dollars an article on average, well below the thousands of dollars typically charged for top journals. At that point it's easy for costs to be handled by libraries and institutions contributing to operating costs at much less than (the author of the OP claims <1% but I'd guess more like ~20%) the cost of existing journal access and publishing fees paid by institutions today.
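As a back-of-envelope illustration (the $3000 starting point is my own assumption, roughly the eLife fee mentioned upthread, not a figure from that article): $3000 × (1 − 0.5 − 0.3) ≈ $600 per article, i.e. a few hundred dollars rather than thousands, before any further savings.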
Awesome effort. I really really hope this makes at least a minor dent in the scientific publishing deadlock.
My 2 cents:
Please be open to suggestions.
Please involve more people, who you feel comfortable working with.
You have solved multiple independent problems of decentralized trust, voting etc. Please try to be agnostic to a specific solution.
Will do my best! That's part of the reasoning behind early ask for feedback and my eagerness for it :) Trying to get people involved as early in the process as possible to get more perspectives.
Awesome idea! Only half-joking: something like the HN ranking algorithm could be great for that, maybe you can reach out to them if they can help (you might even try applying for YC).
Some feedback: I think the division by disciplines needs to be stronger than tags - researchers are so specialized that most research even within the same discipline is often unintelligible for those specialized in a different sub-area. They need to know roughly what to expect when they go to the platform, you don't want to show physics papers to sociologists. Think of the preprint servers, they're also constrained to some disciplines (arXiv for mathematical disciplines, bioRxiv and medRxiv, SSRN for economics/social sciences...).
In the bigger picture, I think moving towards such a model would be a good thing also for scientific debate in the public eyes. Going from the binary 'It's published' to 'It seems most experts in this area tend to agree with this paper's conclusions, with some important caveats' would give a much better picture of the state of our knowledge at the cutting edge, and perhaps also address the bias against negative results better than journals who pledged to do so can.
I love what you’re trying to accomplish here! I encourage you to experiment aggressively and try to focus on testing the biggest risks, rather than prioritizing the easiest things to make progress on but whose success or failure won’t impact your overall odds of achieving your goal.
One thing which I've noticed in your response to other comments is that your plan for controlling bad behavior by participants in your system often boils down to either trusting that a strong culture means no one will behave poorly, or that voting and crowd consensus is an effective system for both motivating good behavior and punishing bad behavior. This probably works fine as long as (1) the community is very small such that everyone actually knows everyone else, or (2) there isn't much actually at stake. You reference Stack Overflow and Wikipedia - these are examples where not much is at stake. If accumulating points and reputation in either of these systems determined who gets tenure and the culmination of lifelong career ambitions, then I wonder if those systems would be as robust to bad actors. A white-hot risk I think you should focus your experiments on is that peer-reviewed academic science is not a friendly collegial system but an aggressive and high-stakes game where people have a very, very big incentive to game the system. The current system has tons of problems, but it does function in the face of participants who would like to cheat if they could. The big influential editors and reviewers act like a leviathan in the Hobbesian sense, which causes many problems but also solves many.
I encourage you to think about how you could solve some of these potential issues that the current system doesn’t have so much, particularly around preventing various forms of bad behavior which are easily caught and punished in a more hierarchical system like the current one, but which a more crowd-sourced system might fall prey to. Things like collusion rings to upvote each others’ papers, using lay popularity or scientific fame to overwhelm legitimate complaints about issues with a paper, even bot nets or paying to win by farming upvotes - all things which current journals and peer review are basically immune to. People will try this with their careers and livelihoods on the line in a way Wikipedia editors or Stack Overflow users just wouldn’t care to.
These issues can be overcome, but for what it’s worth I think this is the hardest part of what you’re trying to accomplish, and more important for you to focus on than the tech stack or getting a first few papers published on your setup. It will be hard to simulate, but I think you should focus on how to test the behavior of a scientist who is desperate and has their back against the wall in the publish or perish game, and how you can design to reward good behavior and prevent bad behavior in the worst case, not just in the best case or even the average case.
Very cool effort. If you pull this off the positive externalities could be huge.
I'm also working in this area, and your work is impressive!
I find your thinking to be very clear, and thank you very much for taking the time to articulate this space so clearly. Your idea is diligently thought-through, with admirable execution.
I see a limitation in your design — it relies on a central agent to design and ultimately adjudicate its reputation system. That is not safe for science! But worse, scientists will be hesitant to buy into and rely on a system that depends on such centralized leadership.
I'd encourage you to compare those aspects of your design with the decentralized "Subjective Reputation System" that we are building: https://peeryview.org/about
Peery View's ratings are Subjective: each user votes on not just publications, but also other users, to express who he trusts, and can even define his own filter function to generate his own news feed of high-ranked publications. And as each user votes, his votes are shared with everyone else who votes him up, and so on, which creates a trust network that can cover the entire web.
Peery View is a protocol rather than a platform. Every user can post his votes and publications on any web server he chooses. The Braid Protocol extensions to HTTP make it trivial for his votes and feeds to synchronize with other users' votes on their servers. This means we don't need a central server, we don't need a 501c3, and we don't need to set the culture. The culture will evolve itself: every user has an incentive to have a good feed, and thus an incentive to vote on good posts and good user behavior; he also has an incentive to share those votes, because doing so will earn him reputation in the scientific community around him, as other people vote him up to improve their own feeds.
In this way, the system has built-in resistance to being gamed. If any user can systematically detect a bias or manipulation (e.g. a Sybil attack), that user can express compensating votes that undo the bias and penalize the manipulation, and he gains reputation in the network as other users vote him up to get improved feeds. Thus emerges a distributed incentive for the community to root out untruths. As the community deliberates the best ways to find truth, they articulate the scientific principles that they find work best. This is a very healthy, and much-needed, dialogue for us to be having these days, with the crisis in scientific culture that we are all experiencing.
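To make the "subjective" part concrete, here is a purely illustrative client-side sketch of how such a feed might be computed (the data shapes, names, and decay weights are invented for this example; the real Peery View protocol, Braid sync, and filter-function API may look nothing like this):

    # Illustrative only: a subjective feed computed from votes each user has
    # published. In the real protocol these would live on each user's own
    # server and sync over Braid; here they sit in one in-memory dict.
    published_votes = {
        "me":    {"users": {"alice": +1}, "papers": {}},
        "alice": {"users": {"bob": +1}, "papers": {"paper-7": +1}},
        "bob":   {"users": {}, "papers": {"paper-7": +1, "paper-9": -1}},
        "spam":  {"users": {}, "papers": {"paper-9": +1}},
    }

    def my_feed(me, votes, depth=2, decay=0.5):
        """Walk outward from `me` through up-voted users, discounting trust by
        distance, then score papers by the trust-weighted votes encountered."""
        trust = {me: 1.0}
        frontier = [me]
        for _ in range(depth):
            nxt = []
            for u in frontier:
                for v, sign in votes.get(u, {}).get("users", {}).items():
                    if sign > 0 and v not in trust:
                        trust[v] = trust[u] * decay
                        nxt.append(v)
            frontier = nxt
        scores = {}
        for u, w in trust.items():
            for paper, sign in votes.get(u, {}).get("papers", {}).items():
                scores[paper] = scores.get(paper, 0.0) + w * sign
        return sorted(scores.items(), key=lambda kv: -kv[1])

    print(my_feed("me", published_votes))
    # "spam" never appears in my trust walk, so its vote on paper-9 is ignored.

The filter function here is deliberately the simplest possible one; the interesting property is that each reader can substitute his own, and a vote only affects the feeds of people who (directly or transitively) trust the voter.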