I'm also working in this area, and your work is impressive!
Your thinking is very clear, and thank you for taking the time to articulate this space so carefully. The idea is diligently thought through, and the execution is admirable.
I do see a limitation in your design: it relies on a central agent to design and ultimately adjudicate its reputation system. That is not safe for science! Worse still, scientists will be hesitant to buy into and rely on a system that depends on such centralized leadership.
I'd encourage you to compare those aspects of your design with the decentralized "Subjective Reputation System" that we are building: https://peeryview.org/about
Peery View's ratings are Subjective: each user votes not just on publications but also on other users, to express whom he trusts, and can even define his own filter function to generate his own news feed of high-ranked publications. As each user votes, his votes are shared with everyone who votes him up, and so on, which creates a trust network that can cover the entire web.
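To make that concrete, here is a minimal sketch of a subjective, trust-weighted feed. The data shapes, the hop-decay trust propagation, and every name below are illustrative assumptions for this comment, not Peery View's actual schema or ranking algorithm:

```python
# Sketch of a subjective feed: hypothetical data, not Peery View's real format.
from collections import defaultdict

# Who trusts whom, with a weight in [0, 1].
trust_votes = {
    "alice": {"bob": 1.0, "carol": 0.5},
    "bob":   {"carol": 1.0},
}

# Each user's votes on publications: +1 / -1.
pub_votes = {
    "bob":   {"paper-42": +1, "paper-17": -1},
    "carol": {"paper-42": +1},
}

def trust_weights(viewer, depth=2, decay=0.5):
    """Propagate trust outward from `viewer`, decaying at each hop."""
    weights = defaultdict(float)
    frontier = {viewer: 1.0}
    for _ in range(depth):
        next_frontier = {}
        for user, w in frontier.items():
            for peer, t in trust_votes.get(user, {}).items():
                contribution = w * t * decay
                if contribution > weights[peer]:
                    weights[peer] = contribution
                    next_frontier[peer] = contribution
        frontier = next_frontier
    return weights

def personal_feed(viewer):
    """Rank publications by trust-weighted votes: the viewer's own filter function."""
    weights = trust_weights(viewer)
    scores = defaultdict(float)
    for user, votes in pub_votes.items():
        for pub, vote in votes.items():
            scores[pub] += weights.get(user, 0.0) * vote
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(personal_feed("alice"))  # [('paper-42', 0.75), ('paper-17', -0.5)]
```

The point is that there is no global score anywhere: "alice" and a user with a different trust graph will rank the same publications differently.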
Peery View is a protocol, not a platform. Every user can post his votes and publications on any web server he chooses, and the Braid Protocol extensions to HTTP make it trivial for his votes and feeds to synchronize with other users' votes on their servers. This means we don't need a central server, we don't need a 501(c)(3), and we don't need to set the culture. The culture will evolve on its own: every user has an incentive to have a good feed, and therefore an incentive to vote on good posts and good user behavior; he also has an incentive to share those votes, because doing so earns him reputation in the scientific community around him, as other people vote him up to improve their own feeds.
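To illustrate the "protocol, not platform" point, here is a rough sketch of reading votes as ordinary web resources. The URLs and JSON layout are hypothetical, and a real client would use Braid's HTTP extensions to subscribe to changes rather than polling like this:

```python
# Sketch: each user's votes live on whatever server they chose; a client just
# fetches and merges them. Hypothetical URLs and format, not Peery View's spec.
import json
import urllib.request

def fetch_votes(url):
    """Fetch a user's published votes from their own server."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def merge_feeds(vote_urls):
    """Pull votes from several independent servers into one local view."""
    merged = {}
    for url in vote_urls:
        try:
            merged[url] = fetch_votes(url)
        except OSError:
            # One peer's server being down doesn't break anyone else's feed.
            merged[url] = None
    return merged

feeds = merge_feeds([
    "https://example.org/~bob/votes.json",     # hypothetical
    "https://papers.example.net/carol/votes",  # hypothetical
])
```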
In this way, the system has built-in resistance to being gamed. If any user can systematically detect bias or manipulation (e.g. a Sybil attack), he can publish compensating votes that undo the bias and penalize the manipulation, and he earns reputation for doing so as other users vote him up to improve their own feeds. Thus a distributed incentive emerges for the community to root out untruths. As the community deliberates over the best ways to find truth, it articulates the scientific principles that work best. This is a very healthy and much-needed dialogue to be having right now, given the crisis in scientific culture we are all experiencing.
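Here is a toy illustration of compensating votes; the account names, weights, and scoring rule are made up for the example and are not Peery View's mechanism:

```python
# Sketch: Dana spots a suspected Sybil cluster and publishes negative trust
# votes against it; anyone whose filter trusts Dana inherits the penalty.
suspected_sybils = {"sock-1", "sock-2", "sock-3"}

# Dana's compensating votes: negative trust for each suspect account.
dana_votes = {account: -1.0 for account in suspected_sybils}

# Publications and who upvoted them.
pub_upvoters = {
    "hyped-paper": {"sock-1", "sock-2", "sock-3", "erin"},
    "solid-paper": {"erin", "frank"},
}

def compensated_score(pub, my_trust_in_dana=1.0):
    """Score a publication, letting Dana's distrust cancel the Sybil boost."""
    score = 0.0
    for voter in pub_upvoters[pub]:
        penalty = dana_votes.get(voter, 0.0) * my_trust_in_dana
        score += 1.0 + penalty  # each upvote counts 1, cancelled if distrusted
    return score

print(compensated_score("hyped-paper"))  # 1.0: the three Sybil votes net to zero
print(compensated_score("solid-paper"))  # 2.0: unaffected
```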