
> One crucial question he studied: Should you give patients a beta blocker, which lowers blood pressure, before certain heart surgeries? Poldermans’s research said yes. European medical guidelines (and to a lesser extent US guidelines) recommended it accordingly.

What the guy did was clearly wrong but it’s a slightly tenuous causal chain between that and 800,000 deaths. Questions may be asked, for example, about whether the medical guidelines should have been based on studies that seemingly had a single point of failure (this one corrupt guy).

There’s an extremely toxic (and ironically very anti-scientific) culture of “study says it so it’s true” that permeates medical and scientific fields and the reporting thereof. Caveats and weaknesses in the primary research get ignored in favor of abstracts and headlines, with each layer of indirection discarding more of the nuance and adding more weight of certainty to a result that should in truth remain tentative.

Prosecuting one type of bad actor might not make a lot of difference and might distract from the much larger systemic issues facing our current model of scientific enquiry.




> There’s an extremely toxic (and ironically very anti-scientific) culture of “study says it so it’s true” that permeates medical and scientific fields

I have never witnessed this in real life. Every actual PhD and MD I've ever interacted with is cautious about over-reliance on a single study, and will view a surprising result with extreme skepticism if the study has any flaws.

> and the reporting thereof

Sure. 99% of journalists don't know any science beyond the C they got in high school science class, and they're rewarded for views and engagement, not accuracy, so they'll hype up any study, especially ones that are provocative or engaging for general audiences.

There is a huge, huge, huge difference between the editors at Nature and the talking heads at CNN. Or between research scientists and Twitter commenters.


> I have never witnessed this in real life.

It is extremely common in the practice of citations. What you see written in a paper is:

“Intervention X can help patients with condition Y (Smith, 2012)”

But when you actually read Smith the result is couched in limitations, or maybe only holds under some circumstances, or maybe uses a weak methodology that would have been fine for Smith but isn’t appropriate for the current paper.

There just isn’t room in that sentence to reflect the full complexity, and the simplified version is all too easy to slip through peer review. Sometimes papers form chains and networks of oversimplification with each citation compounding unwarranted certainty.


This is the root of the problem.

Take the whole "saturated fat is unhealthy" thing.

Here's what happened:

Study finds that unsaturated fat is healthier than saturated fat, but all fat is associated with lower mortality vs carbs.

Repeated as "unsaturated fat is healther than saturated fat".

Repeated as "saturated fat is unhealthy".

This conclusion isn't supported by the research: compared to carbs(!), saturated fat was associated with lower mortality.

A same-calorie high-fat diet is healthier than one based on carbs. But we are often taught otherwise.

The (wrong) "saturated fat is bad" consensus was reached like a game of telephone.


Any recommendations for course correction?

Perhaps paper linting/scoring that penalizes assertive statements merely linked to references, versus direct quotes taken from those references?
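
As a rough illustration, here is a minimal sketch of what such a linter might look like. Nothing here is an existing tool: the citation regex, the hedge-word list, and the ok/flag verdicts are all made-up assumptions.

    import re

    # Hypothetical citation linter: flag sentences that assert a claim
    # backed only by a parenthetical reference, with no hedging and no
    # direct quotation. Regex and hedge list are illustrative guesses.
    CITATION = re.compile(r"\([A-Z][A-Za-z]+(?: et al\.)?,? \d{4}\)")
    HEDGES = {"may", "might", "suggests", "appears", "preliminary",
              "in some", "under certain", "limited evidence"}

    def lint(text):
        """Return (verdict, sentence) pairs for each citing sentence."""
        findings = []
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            if not CITATION.search(sentence):
                continue
            quoted = '"' in sentence or '\u201c' in sentence
            hedged = any(h in sentence.lower() for h in HEDGES)
            verdict = "ok" if (quoted or hedged) else "assertive citation"
            findings.append((verdict, sentence.strip()))
        return findings

    sample = ("Intervention X helps patients with condition Y (Smith, 2012). "
              "Intervention X may help some patients with Y (Smith, 2012).")
    for verdict, sentence in lint(sample):
        print(f"{verdict}: {sentence}")

Run on the sample, the first sentence gets flagged as an assertive citation and the hedged one passes; a real scorer would obviously need far more than keyword matching.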


Everything becomes a meme.

I’ve come to appreciate that we communicate memetically; humans now seem to me more a social-intelligence species than an intelligent one.

We don’t seem to win by fighting against this characteristic, so I’m getting more curious about how to adapt to/with it.


Basically, saturated fat is held hostage by any high-carb (fried and sugary) diet?


This. There’s no way to evaluate citations at scale. Further, once a medical doctor “learns” a false fact, it’s hard to unlearn, and journals rarely publish contrarian material.


It's true that this kind of nuance usually doesn't make it into papers for every work they cite (otherwise they would be 10x as long), but in my experience researchers take every study with a grain of salt in real life. Someone whose only interaction with science is reading research papers would never know this and could come away with the impression that many questions are much more settled than they actually are (although there are also opinion and review papers that attempt to assess the actual state of the evidence at a given point in time).


Sufficient reproducibility should be required, but the "cautiousness" often remains even when reproducibility IS there. Examples of this more emotional resistance from MDs and PhDs that come to mind include:

• Helicobacter pylori and Peptic Ulcers

• The Epley Maneuver

• Handwashing for Infection Control


You are right that the establishment failed in its duty of due skepticism for one bad actor to get this far, but wrong that prosecuting one type of bad actor doesn't make a difference: part of the establishment's error has been its failure to deter bad actors with prominent examples of prosecution.


First, I'm not sure the 800,000 deaths are due to the medical guidelines themselves. Even if the guidelines had said "we don't know, doctors still have to choose themselves," practitioners would have looked it up, found Poldermans's study, and reasoned: "No time to dig into the details; I have to choose. I have 0 studies saying it's bad and 1 saying it's good, so the best choice is to assume it's good." So Poldermans's study would still have caused a lot of harm.

I'm not sure what these guidelines are. Is there a proper governing body behind them, or are they just a "state of the practice"? Do the guidelines create the usage, or do they summarize common practice, a bit like a dictionary that ends up adding a definition because a word is already used a certain way, with the guidelines merely recording that "more and more practitioners consider this practice the best one"?

A second reflection your interesting point made me think of: "not making a decision" is also a decision.

When a practitioner needs to make a choice, they have to make one, and "waiting for more studies" is also a choice. In this context the reasoning becomes: I have one study that says it's good and zero studies that say it's bad; the one study may be incorrect, but if the probability that it is correct is >50%, then the scientifically best choice is still to do what the study says.

In other words: how many deaths would there have been if someone had waited for more data instead of following a study _that turns out to be correct_?

In the end, when you have one study, all you do is bet on the probability that the study is unreliable (due to fraud or error) AND that its conclusion is incorrect; Poldermans could have dishonestly faked his results to say the procedure is good while, in reality, the procedure really is good. If the probability that the conclusion is correct is still >50%, following it remains scientifically better than not following it.
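
To make that bet concrete, here is a minimal expected-value sketch; the probabilities and harm figures are illustrative assumptions, not data from the Poldermans case.

    # Expected net harm (e.g. deaths per 10k patients) from following a
    # single study, relative to not intervening. All numbers are made up.
    def expected_harm(p_correct, harm_if_wrong, benefit_if_right):
        return (1 - p_correct) * harm_if_wrong - p_correct * benefit_if_right

    # With symmetric stakes, p_correct > 0.5 is exactly the break-even
    # point described above; negative means following the study wins.
    print(expected_harm(0.6, harm_if_wrong=50, benefit_if_right=50))  # -10.0
    print(expected_harm(0.4, harm_if_wrong=50, benefit_if_right=50))  # 10.0

Note that with asymmetric stakes the break-even probability shifts: "over 50%" is only the right threshold when the harm of a wrong study and the benefit of a right one are comparable in size.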


> There’s an extremely toxic (and ironically very anti-scientific) culture of “study says it so it’s true” that permeates medical and scientific fields and the reporting thereof.

Source? Where’s the proof of this? Some online blogpost is not peer-reviewed evidence. We need to back up our claims with science.


Outside of a national cancer center, liability management is priority 1. Thinking creates liability.



