This is actually a nice summary of how a clinical trial is more than just "did it have a positive or negative result?".
Every single trial introduces some bias through the way it's designed. Good trials try to minimize bias, or at least confine it to sources whose effects are well understood.
The article does a nice job of digging into the nuances of the trial design and how it may (or may not) influence the results.
That's why the "reference wars" you see on HN are so pointless. It's easy to just find a paper that supports your position. But trials are of varying quality.
What you see in this article is what normally happens with most major trials - results get discussed, challenged, discussed some more. After a few months doctors finally settle on the main takeaways. Sometimes it takes years.
Yeah, I thought the discussion of why you can't just directly compare people who accepted the colonoscopy invitation with the control group was really interesting, and non-obvious (at least to people like me who don't design studies). It reminds me of the fundamental problem all surveys have: the bias introduced by being limited to "people who agree to take surveys".
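To make that selection effect concrete, here's a toy simulation (all numbers are made up, nothing from the actual trial): people who accept a screening invitation tend to be healthier on average, so comparing acceptors directly against the control arm looks like a benefit even when screening does nothing, while comparing everyone invited against the control arm (the intention-to-treat comparison) doesn't have that problem.

```python
# Hypothetical toy model: screening has ZERO true effect, but a latent
# "health-consciousness" factor drives both acceptance and mortality.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

def simulate_arm(invited: bool):
    # Latent health factor, same distribution in both arms.
    health = rng.normal(size=n)
    # Healthier people are more likely to accept the invitation...
    if invited:
        accepted = rng.random(n) < 1 / (1 + np.exp(-(health - 0.3)))
    else:
        accepted = np.zeros(n, dtype=bool)
    # ...and also less likely to die, independent of screening.
    p_death = 0.01 * np.exp(-0.5 * health)
    died = rng.random(n) < p_death
    return accepted, died

accepted_inv, died_inv = simulate_arm(invited=True)
_, died_ctl = simulate_arm(invited=False)

print(f"control mortality:        {died_ctl.mean():.4%}")
print(f"invited mortality (ITT):  {died_inv.mean():.4%}")               # ~= control
print(f"acceptors-only mortality: {died_inv[accepted_inv].mean():.4%}") # looks 'better'
```

In this toy model the acceptors-only number comes out lower than the control arm purely because of who self-selects into screening, which is exactly why the naive comparison is misleading.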