The existing model asks volunteer academics to rate papers on three principal axes: novelty, importance, and rigor. The journals don't do much here except find these academics and send them a handful of papers to look at. Further, novelty and importance seem to overshadow everything else in reviews, and both are extremely subjective and can entrench existing biases in a subfield. Whether these metrics are "reasonable" is a bit contentious.
I agree with you that a system of open publishing plus some post-publication evaluation would be nice. But you don't really need the journals for this. You certainly don't need journals making billions in profit to do it.