But "not giving beta blockers" is also a decision.
In that context, there is no "we can freeze time for the rest of the world while scientists add new studies"; you have to choose.
Imagine a parallel universe where beta blockers are a good solution and where Poldermans' study was fraudulent and said that beta blockers are bad. According to you, doctors should not have trusted Poldermans' study and, therefore, should have continued to give beta blockers. So, in this parallel universe, your definition of "not trusting the study" means doing exactly the opposite of what "not trusting the study" means in the first universe.
Then there are also parallel universes where Poldermans' study was not fraudulent. What about there? Is adopting the opposite of the conclusion the correct thing to do while waiting for new studies? Or are we rather saying "well, I know there is not much there, but the probability that the study is crap is lower than the probability that it is not, so let's follow its conclusions in the meanwhile; it's the best bet we have"?
I think the way to make a good decision in all these possible scenarios (since you don't know which one a specific study might be in) is to make sure you have a good understanding of the likelihood that any single study is in one of these scenarios (e.g., fraudulent).
Right now it would appear that the perception and the reality of the likelihood that a study is trustworthy are in disagreement.
Hopefully new incentives coupled with public statistics can help fix this.
I'm not sure you can say whether the trust is overestimated or not based on the fact that "yes/no" decisions are close to 100% in favor of trusting each study.
Let's imagine you distrust a lot: you believe that a study has a 45% chance of being fraudulent.
You have 100 studies S1, S2, S3, ..., each saying that a process is better than another.
You take study S1. It concludes you should do G1 and avoid B1. There is a 55% chance that doing G1 will save lives and that doing B1 will kill. Conclusion: let's do G1; doing B1 would be stupid: a 55% chance of killing instead of 45%.
You take study S2. It concludes you should do G2 and avoid B2. There is a 55% chance that doing G2 will save lives and that doing B2 will kill. Conclusion: let's do G2; doing B2 would be stupid: a 55% chance of killing instead of 45%.
...
So, in the end, you will follow the conclusions of 100% of the studies, even though you only trust 55% of them. You end up making bad choices 45% of the time and good choices 55% of the time.
Now, let's say you want the fraction of conclusions you follow to match your level of trust, i.e., you want to follow only 55% of the conclusions. Which ones do you follow? If you pick 55% of them at random, say studies 2, 3, 6, 7, 8, ..., what is the probability that those are the correct studies? If you run a quick simulation (a sketch is below), you will see that you end up making bad choices about 49% of the time and good choices about 51% of the time.
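A minimal sketch of that simulation, assuming each study is independently correct with probability 55% and that "discarding" a study means acting against its conclusion (both assumptions are mine, for illustration):

    import random

    # Assumptions for this sketch: each study is independently correct with
    # probability 0.55 (a 45% chance it is wrong or fraudulent), and discarding
    # a study means acting against its conclusion.
    TRUST = 0.55
    N_STUDIES = 100
    N_TRIALS = 10_000

    def simulate(follow_fraction):
        good = 0
        for _ in range(N_TRIALS):
            for _ in range(N_STUDIES):
                study_is_correct = random.random() < TRUST
                followed = random.random() < follow_fraction
                # The choice is good if you follow a correct study
                # or go against an incorrect one.
                if followed == study_is_correct:
                    good += 1
        return good / (N_TRIALS * N_STUDIES)

    print(f"follow every study:          {simulate(1.0):.1%} good choices")   # ~55%
    print(f"follow 55% picked at random: {simulate(0.55):.1%} good choices")  # ~51%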
So, accepting that you will sometimes follow a bad study is still better than randomly discarding studies "just so I remove some studies that may be bad".
You may say that you will discard the studies that look bad rather than pick them at random, but the trick is that you cannot tell which studies look bad. Poldermans' study looked convincing at first, and it was only a few years later that problems were found, and they were found by people who had the means to properly investigate (they had access to insider information that you will not have). Possibly there were good studies that looked worse than Poldermans'.
edit: also, a large fraction of fraudulent studies is done to hide inconclusive results, which means that a fraudulent study concluding that G is better than B does not mean that B is better than G; it just means the authors have no idea which one is better. So you also cannot pretend that a fishy study implies that the opposite conclusion is proven.
I believe you are saying that knowing (or believing) the probability that a group of papers is fraudulent (e.g., a high fraud belief of, say, 45%) does not help you make a better decision, because the group probability does not meaningfully inform the probability for any single paper, since a paper's findings are close to being all or none. Is that correct?
If so, I see your point.
I still think that in a more diverse set of scenarios, where an intervention has more than a single binary outcome, knowing the group probability can still be informative. For example, if an intervention in a possibly fraudulent study shows a large upside, but other studies known not to be fraudulent show a severe but unlikely downside, then it may not be a good idea to do the intervention unless there is a good reason to (e.g., all known safer interventions have been tried, the known risk can be mitigated, or the patient is at end of life and agrees, ...). A rough expected-value sketch of that trade-off is below.
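Something like the following, where every number (fraud probability, size of the upside, probability and cost of the downside) is invented purely for illustration:

    # All numbers here are invented for illustration; none come from real studies.
    p_fraud = 0.45          # believed probability that the upside study is fraudulent
    upside_benefit = 2.0    # benefit (arbitrary utility units) if that study is genuine
    p_downside = 0.05       # severe but unlikely downside reported by trusted studies
    downside_cost = 30.0    # cost of that downside, in the same units

    # The upside only counts if the study is genuine; the downside applies regardless.
    ev_intervene = (1 - p_fraud) * upside_benefit - p_downside * downside_cost
    ev_hold_off = 0.0

    print(f"expected value of intervening: {ev_intervene:+.2f}")
    print("intervene" if ev_intervene > ev_hold_off else "hold off")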
But I am beginning to wonder how useful a signal a known fraud percentage would be, given how long it takes for fraud to be discovered and then disclosed.
I still think something can be done here with public statistics and perhaps reputation, but I'll have to think about it some more. Certainly, if an author or institution were more at risk from discovered fraud, or if fraud discovery were more likely, they would do a better job of policing it or of not committing it in the first place. Other incentives (as mentioned in the article) are at play as well.
For example, a good basis would be statistics: how many past studies have been reproduced and confirmed to have proposed the correct conclusion? If you take 1000 past studies that were later reproduced by an independent team and count how many of them were disproved, and if this number is <500, then you know that the probability that a study is crap is lower than the probability that it is not. A small sketch of that bookkeeping is below.
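For instance (the counts here are invented; the only point is the comparison against 50%):

    # Invented numbers for illustration: out of 1000 past studies that were later
    # reproduced by independent teams, count how many were disproved.
    reproduced = 1000
    disproved = 180  # hypothetical count

    p_crap = disproved / reproduced
    print(f"estimated probability that a study is crap: {p_crap:.0%}")

    # If fewer than half were disproved, following a new study's conclusion is,
    # on average, a better bet than doing the opposite.
    if disproved < reproduced / 2:
        print("following the study's conclusion is the better bet")
    else:
        print("the base rate is too bad to follow studies blindly")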
There are other bases too. For example, the fact that not everyone likes having deaths on their conscience. Or the fact that only a fraction of situations are suitable for committing fraud ("normal" studies often involve several independent institutes) and that fraud is very risky, especially in a sector like medicine (for example, if someone comes up with an alternative to beta blockers, they will want to test whether it performs better, and will notice that beta blockers don't perform as expected).