In my CS area, many results aren't interesting enough to be worth replicating, even in high-quality journals. Often the results aren't even very relevant to what is otherwise a design paper; they're just there to check some boxes. The fact that research results aren't replicated is just symptomatic of a much larger dysfunction.
It has been a long time since I’ve read a paper where I actually wanted to use whatever they were selling, let alone reproduce whatever results they were claiming. We don’t even have “novel, exciting, fancy” work!
When something comes out of a paper that really changes the game (like say MapReduce), it gets replicated a lot.
Good point; a lot of work really falls below the threshold where replication would still make sense.
However, I do think there is a lot of incremental, piecemeal work that over the years could add up to a decent step forward. It's just that currently, judging from the code behind quite a few published papers, the claims often can't be trusted enough to actually build on them. In my view, some sub-subfields in CS sustain themselves by avoiding the most pertinent questions, because the answers would reveal that the entire sub-subfield has been superseded, or was never that promising to begin with. Unfortunately, that kind of noise generation is more profitable than a single paper saying "nope", even though the latter is enormously more valuable in terms of knowledge generated.