The staggering death toll of scientific lies (vox.com)



> One crucial question he studied: Should you give patients a beta blocker, which lowers blood pressure, before certain heart surgeries? Poldermans’s research said yes. European medical guidelines (and to a lesser extent US guidelines) recommended it accordingly.

What the guy did was clearly wrong but it’s a slightly tenuous causal chain between that and 800,000 deaths. Questions may be asked, for example, about whether the medical guidelines should have been based on studies that seemingly had a single point of failure (this one corrupt guy).

There’s an extremely toxic (and ironically very anti-scientific) culture of “study says it so it’s true” that permeates medical and scientific fields and the reporting thereof. Caveats and weaknesses in the primary research get ignored in favor of abstracts and headlines, with each layer of indirection discarding more of the nuance and adding more weight of certainty to a result that should in truth remain tentative.

Prosecuting one type of bad actor might not make a lot of difference and might distract from the much larger systemic issues facing our current model of scientific enquiry.


> There’s an extremely toxic (and ironically very anti-scientific) culture of “study says it so it’s true” that permeates medical and scientific fields

I have never witnessed this in real life. Every actual PhD and MD I've ever interacted with is cautious about over-reliance on a single study, and will view a surprising result with extreme skepticism if the study has any flaws.

> and the reporting thereof

Sure. 99% of journalists don't know any science beyond the C they got in high school science, and they're rewarded for views and engagement, not accuracy, so they'll hype up any study, especially ones that are provocative or engaging for general audiences.

There is a huge, huge, huge difference between the editors at Nature and the talking heads at CNN. Or between research scientists and twitter commenters.


> I have never witnessed this in real life.

It is extremely common in the practice of citations. What you see written in a paper is:

“Intervention X can help patients with condition Y (Smith, 2012)”

But when you actually read Smith the result is couched in limitations, or maybe only holds under some circumstances, or maybe uses a weak methodology that would have been fine for Smith but isn’t appropriate for the current paper.

There just isn’t room in that sentence to reflect the full complexity, and the simplified version is all too easy to slip through peer review. Sometimes papers form chains and networks of oversimplification with each citation compounding unwarranted certainty.


This is the root of the problem.

Take the whole "saturated fat is unhealthy" thing.

Here's what happened:

Study finds that unsaturated fat is healthier than saturated fat, but all fat is associated with lower mortality vs carbs.

Repeated as "unsaturated fat is healthier than saturated fat".

Repeated as "saturated fat is unhealthy".

That conclusion isn't supported by any research when the comparison is against carbs(!).

A same-calorie high-fat diet is healthier than one based on carbs. But we are often taught otherwise.

The (wrong) "saturated fat is bad" consensus was reached like a game of telephone.


Any recommendations for course correction?

Perhaps paper linting/scoring that penalizes assertive statements tied to references, versus direct quotes from those references?
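As a toy sketch of what such a linter might flag (the regexes and word lists here are invented for illustration, not a validated method):

    import re

    # Toy citation "linter": flag assertive claims backed only by a bare
    # parenthetical citation; direct quotes and hedged wording pass.
    CITATION = re.compile(r"\([A-Z][A-Za-z]+,\s*\d{4}\)")
    ASSERTIVE = re.compile(r"\b(can help|shows|demonstrates|proves|is effective)\b", re.I)
    HEDGED = re.compile(r"\b(may|might|suggests|in some|under|limited)\b", re.I)

    def lint_sentence(sentence: str) -> str:
        if not CITATION.search(sentence):
            return "no citation"
        if '"' in sentence:  # direct quote from the cited source
            return "ok (quoted)"
        if ASSERTIVE.search(sentence) and not HEDGED.search(sentence):
            return "flag: assertive claim with bare citation"
        return "ok"

    print(lint_sentence('Intervention X can help patients with condition Y (Smith, 2012)'))
    # -> flag: assertive claim with bare citation
    print(lint_sentence('Intervention X may help some patients with condition Y (Smith, 2012)'))
    # -> ok

A real scorer would need actual NLP rather than regexes, but the shape of the rule is the same: reward quoting and hedging, penalize bare assertive citations.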


Everything becomes a meme.

I’ve come to appreciate that we communicate memetically; humans now seem to me more a social-intelligence species than an intelligent one.

We don’t seem to win by fighting against this characteristic, so I’m getting more curious about how to adapt to/with it.


Basically, saturated fats are the hostage of any high-carb (fried and sugary) diet?


This. There’s no way to evaluate citations at scale. Further, once a medical doctor “learns” a false fact, it’s hard to unlearn it, and journals rarely publish contrarian material.


It's true that this kind of nuance usually doesn't make it into papers for every study they cite (otherwise they would be 10x as long), but in my experience researchers take every study with a grain of salt in real life. I guess someone whose only interaction with science is reading research papers would never know this and would get the impression that many questions are much more settled than they actually are (although there are also opinion and review papers that attempt to assess the actual state of the evidence at a given point in time).


Sufficient reproducibility should be required, but the "cautiousness" often remains even when reproducibility IS there. Examples of this more emotional resistance from MDs and PhDs that come to mind include:

• Helicobacter pylori and Peptic Ulcers

• The Epley Maneuver

• Handwashing for Infection Control


You are right that the establishment has failed in its duty of due skepticism for one bad actor to get this far, but wrong that prosecuting one type of bad actor wouldn't make a difference, because part of the establishment's error has been a failure to deter bad actors with prominent examples of prosecution.


First, I'm not sure that the 800,000 deaths are due to the medical guidelines themselves. If the guidelines had said "we don't know, doctors still have to choose for themselves", then practitioners would have looked it up themselves, found Poldermans's study, and said "ok, no time to look into the details, I have to choose; I have 0 studies that say it's bad and 1 study that says it's good; I guess the best choice is to assume it's good". So it feels like Poldermans's study would still have created a lot of problems.

I'm not sure what these guidelines are. Is it a proper organization, or just a "state of the practice"? Do the guidelines create the usage, or do they summarize common practice? A bit like a dictionary ending up adding a definition because a word is used a certain way, with the guidelines just recording that "more and more practitioners consider this practice the best one".

A second reflection your interesting point made me think of: "not making a decision" is also a "decision".

When a practitioner needs to make a choice, they have to make a choice, and "waiting for more studies" is also a choice. In this context, they have to choose: I have one study that says it's good, I have 0 studies that say it's bad; the one study that says it's good may be incorrect, but if the probability that it is correct is >50%, then the scientifically best choice is still to do what the study says.

In other words: how many deaths would there have been if someone had waited for more data instead of following a study _that turns out to be correct_?

At the end, when you have one study, all you do is bet on the probability that the study is incorrect (due to fraud or due to error) AND that the conclusion is incorrect (Poldermans could have been dishonest and faked his results to say this procedure is good while, in reality, the procedure really is good). If this joint probability is still <50%, choosing to follow the conclusion is still scientifically better than not following it.
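To make the bet explicit (a sketch, writing p for the probability that the study's conclusion is correct):

    P(good outcome | follow the study) = p
    P(good outcome | oppose the study) = 1 - p
    following is the better bet  <=>  p > 1/2

which is just the >50% threshold described above.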


> There’s an extremely toxic (and ironically very anti-scientific) culture of “study says it so it’s true” that permeates medical and scientific fields and the reporting thereof.

Source? Where’s the proof of this? Some online blogpost is not peer-reviewed evidence. We need to back up our claims with science.


Outside of a national cancer center, liability management is priority 1. Thinking creates liability.


Independent replication is the cornerstone of Karl Popper's formulation of the scientific method. If we really held to the philosophical basis for how science is supposed to work, we'd be careful to consider results that have only been demonstrated by one laboratory tentative at best, and be cautious about making policy decisions based on them until there has been truly independent replication.

Physicists seem to be really good about this, and many other aspects of implementing the scientific method too.

I wish the other sciences would get on board. It would eliminate almost all the chronic problems that plague biological and social sciences: falsification, p-hacking, failing to notice honest methodological mistakes, outright fraud, etc.

I fear that the problem is, we can't get there from here for social reasons. The people at the top of these fields - the ones who drive culture in academic institutions, set publication standards for journals, influence where grant money is allocated, etc. - all got there by using sloppy methods and getting lucky. I think that, on some level, many of them know it, and know that fixing the rotten core of their field inevitably involves subjecting their own work - and, by extension, reputations - to a level of scrutiny that it is unlikely to survive.


But "not giving beta blockers" is also a decision.

In that context, there is no "we can freeze time for the rest of the world while scientists add new studies", you have to choose.

Imagine a parallel universe where beta blockers are a good solution and where Poldermans' study was fraudulent and said that beta blockers are bad. According to you, doctors should not have trusted Poldermans' study and, therefore, should have continued to give beta blockers. So, in this parallel universe, your definition of "not trusting the study" = "doing exactly the opposite of what not trusting the study means in the first universe".

Then there are also parallel universes where Poldermans' study was not fraudulent. What about those? Is adopting the opposite of the conclusion the correct thing to do while waiting for new studies? Or do we rather say: "well, I know there is not much there, but the probability that the study is crap is lower than the probability that it is not, so in the meanwhile let's follow its conclusions; it's the best bet for now"?


You make good points.

I think the way to make a good decision across all these possible scenarios (since you don't know which one a specific study is in) is to make sure you have a good understanding of the likelihood that any single study is in each of them (e.g., fraudulent). Right now, it would appear that the perception and the reality of how likely a study is to be trustworthy are in disagreement.

Hopefully new incentives coupled with public statistics can help fix this.


I'm not sure you can say whether the trust is overestimated or not based on the fact that yes/no decisions are close to 100% in favor of trusting each study.

Let's imagine you distrust studies a lot: you believe any given study has a 45% chance of being fraudulent.

You have 100 studies S1, S2, S3, ..., each saying that a process is better than another.

You take study S1. It concludes you should do G1 and avoid B1. You have a 55% chance that doing G1 will save lives and that doing B1 will kill. Conclusion: let's do G1; doing B1 would be stupid: a 55% chance of killing instead of 45%.

You take study S2. It concludes you should do G2 and avoid B2. You have a 55% chance that doing G2 will save lives and that doing B2 will kill. Conclusion: let's do G2; doing B2 would be stupid: a 55% chance of killing instead of 45%.

...

So, at the end, you will follow the conclusions of 100% of the studies, even though you only trust 55% of them. You end up making the wrong choice 45% of the time, and the good choice 55% of the time.

Now, let's say you want the ratio of conclusions you follow to match your level of trust: you follow 55% of them. Which ones do you follow? If you pick 55% at random, say studies 2, 3, 6, 7, 8, ..., what is the probability that those are the correct ones? If you run a quick simulation, you will see that you end up making the wrong choice 49% of the time, and the good choice 51% of the time.
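A minimal sketch of that quick simulation (assuming, as above, that each study is right with 55% probability, and that "discarding" a study means doing the opposite of its conclusion):

    import random

    N = 100_000          # simulated studies
    P_CORRECT = 0.55     # assumed chance any one study's conclusion is right

    always, partial = 0, 0
    for _ in range(N):
        correct = random.random() < P_CORRECT
        # Strategy 1: follow every study's conclusion.
        always += correct
        # Strategy 2: follow a random 55% of studies; "discard" the rest
        # by doing the opposite of what they conclude.
        if random.random() < 0.55:
            partial += correct
        else:
            partial += not correct

    print(f"follow all:        {always / N:.1%} good choices")   # ~55%
    print(f"follow random 55%: {partial / N:.1%} good choices")  # ~51%

If "discarding" instead meant flipping a coin between the two options, the second strategy would land around 52.75%: still worse than simply following every study.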

So accepting that you will sometimes follow a bad study is still better than randomly discarding studies "just so I remove some that may be bad".

You may say that you would discard the studies that look bad rather than pick them randomly, but the trick is that you cannot tell which studies look bad. Poldermans's study looked convincing at first, and it was only a few years later that problems were found, and they were found by people with the means to properly investigate (they had access to insider information that you will not have). Possibly there were good studies that looked worse than Poldermans's.

edit: also, a large fraction of fraudulent studies are done to hide "non-conclusive" results, which means that a fraudulent study concluding that G is better than B does not mean that B is better than G; it may just mean they had no idea which one is best. So you also cannot pretend that a fishy study implies the opposite conclusion is proven.


I believe you are saying that knowing (or believing) the probability that a group of papers is fraudulent (e.g., a high fraud belief of, say, 45%) does not help you make a better decision, because the group probability does not meaningfully inform any single paper's probability, since a paper's findings are close to being all or none. Is that correct?

If so, I see your point.

I still think that in a more diverse set of scenarios, where an intervention has more than a single, binary outcome, knowing the group probability can still be informative. For example, if an intervention in a possibly fraudulent study shows a large upside, but other studies known not to be fraudulent show a severe though unlikely downside, then it may not be a good idea to do the intervention unless there is a good reason to (e.g., all known safer interventions have been tried, the known risk can be mitigated, the patient is at end of life and agrees, …).

But I am beginning to wonder how useful a signal a known fraud percentage would be, given how long it takes for fraud to be discovered and then disclosed.

I still think something can be done here with public statistics and perhaps reputation, but I'll have to think about it some more. Certainly, if an author or institution were more at risk from discovered fraud, or if fraud discovery were more likely, they would do a better job of policing it or simply not commit it. Other incentives (as mentioned in the article) are at play as well.


>but the probability that the study is crap is lower than the probability that it is not

BASED ON WHAT!!!!?!?!?!


Calm down.

For example, a good basis would be statistics: how many past studies were later reproduced and confirmed to have proposed the correct conclusion? Take 1000 past studies that were later reproduced by an independent team and count the number that were disproved; if this number is <500, then you know that the probability that a study is crap is lower than the probability that it is not.
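In code form, this estimate is just a frequency count (the numbers are placeholders, not real data):

    # Base rate of "the study was wrong", estimated from replication records.
    replicated = 1000   # past studies later re-run by independent teams
    disproved = 350     # how many of those failed to confirm the original
    p_study_wrong = disproved / replicated
    print(p_study_wrong < 0.5)  # True -> following a study beats opposing it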

There are other bases too. For example, the fact that not everyone wants deaths on their conscience. Or the fact that only a fraction of situations are suitable for committing fraud ("normal" studies often involve several independent institutes) and that fraud is very risky, especially in a sector like medicine (for example, if someone comes up with an alternative to beta blockers, they will want to test whether it performs better, and will notice that beta blockers don't perform as expected).


>Physicists seem to be really good about this

Consider the possibility that what you're observing is not so much different cultures, but different levels of resources being provided. Experimental physics receives vast amounts of funding from scientific agencies. Much of this goes to support expensive equipment like particle accelerators and neutrino detectors. However the funding of this infrastructure doesn't occur in a vacuum: the operating budget for these systems also requires the resources to perform careful data collection and analysis. You cannot justify running a quick low-budget experiment with the LHC, simply because it costs too much.

It's worth pointing out that most of the boogeyman results people hold up in the "replication crisis" literature are in the fields of the social sciences, where funding is vastly less available. Here you get weaker techniques, fewer resources for replication, and many people who are struggling just to fund any small studies at all. What you immediately diagnose as malicious behavior is much more adequately explained by lack of resources.

It is very human to think that the problem is bad humans. But in the real world, the biggest factor is almost always resources. Everything else is downstream of that.


The social sciences also have a much more limited ability to perform controlled experiments because of ethical reasons, which cannot be resolved with more money.

So many of the studies published in these fields are observational rather than controlled experiments. Physics (and chemistry, to a lesser extent biology, and medicine lesser still) has the advantage of exploring the parts of the universe where controlled experiments are possible.

I don't think it's all about resources, and I think the social sciences are less resourced in part because grant-making bodies know that these fields suffer from these intrinsic limitations.


It kind of depends on where you are in physics. It might be easy for a materials scientist to do a controlled experiment, but it's approximately impossible for an astrophysicist. You can't exactly randomly assign celestial bodies to treatment and non-treatment groups; all you can do is observe.


It's not even that. Physicists can generate quadrillions of elementary particles per nanosecond and observe whatever interactions they're looking for faster than they can type up the paper. Human subject research requires recruitment, consent, IRB approval, waiting possibly decades for effects to show up. Hacker News seems to think physicists are just uniquely better scientists for some reason. They're not. Some research questions are inherently harder to answer. There is no particle collider analog that can spawn a few million humans out of thin air, give them heart disease, perform a surgery, and tell you what happens in a matter of minutes, no matter how much funding you sink into it.


That's why you reject Karl Popper's epistemology and embrace the epistemological anarchism of Feyerabend. - https://en.wikipedia.org/wiki/Against_Method


> Physicists seem to be really good about this, and many other aspects of implementing the scientific method too.

That's because physics is relatively immune from politics. Other fields are on the other side of the "practice vs theory" spectrum and thus attract a lot of politics/interests which influence the science.

https://xkcd.com/435/


> I wish the other sciences would get on board

It’s not about science; it’s about politics and corruption and power plays. At the risk of being accused of posting lazily, you really have to follow the money.

2 more weeks to flatten the curve while the oligarchs pillage the economy some more.


It's definitely not just politics. Popper was not just a champion of reproducibility but also a champion of falsifiability. This is something many a scientist also loses sight of. People will formulate their hypothesis and then try to prove it, rather than trying to disprove it, which is often the much more effective strategy for avoiding bias and other issues. You see this most often with the "soft" sciences, but it crops up in the harder ones as well.

Focusing on reproducibility and falsifiability would go a long way to improving the current state of science, regardless of the political games happening at the same time.


To expand on that person's thought, though, that's the problem: reproducibility and falsifiability don't make money, and in our system, what doesn't make money costs money. There is absolutely zero funding available for studies that reproduce the results of other studies. In addition, any funding that is offered for regular, non-reproducing studies comes with expectations of the scientist in charge. You can say "it shouldn't be this way" all you like, but the fact remains that when a fossil-fuel company funds a study into whether humans affect global climate change, they obviously have an answer in mind that they want to hear; not only that, they probably sought out a scientist whose existing reputation implies they will deliver that answer, just as, say, a governmental probe into climate change might do the exact opposite.

Every funding organization chooses the studies to fund, run by scientists they also choose, with an outcome in mind. Hopefully this is less true of government-funded research? But it's still absolutely true, and it's also worth noting that government-backed research has been in steady decline since the 1980s.

And I mean, that's just money. There is also zero career progression or public impact on reproducibility. One cannot reasonably build a sustainable career in science simply by reproducing studies. Nobody gives a shit about those. Those don't get you jobs at better, more prestigious institutions. Those don't get you interviewed on CNN. Those don't get you, hell, funding for more research you might actually want to do.

The scientific community itself is a community of people; it is not immune to the corrosive and negative aspects of any community created by humans. It inherits all our faults and foibles just the same as any other, is affected by similar biases, is paralyzed into the same inaction, and caves to the same influences.

Truly neutral, unbiased thinking and therefore action is, IMHO, impossible. Anyone who says otherwise has either deluded themselves into thinking their agenda is somehow magically no agenda, or is well aware they have an agenda and wish that fact to remain unacknowledged. No one is truly neutral. Every thought you have, every action you consider, every idea that comes to mind is a totality of every previous thought, action and idea you've had and the fact that so many people claim to be unbiased or neutral doesn't change this. You're not objective, you're alive.

That doesn't mean there's no truth to be had or found, and indeed, broadly, the scientific method as outlined in our school curricula is the closest thing humanity has thus far created to a truth producing methodology. But it remains built of humans and so inherits their limitations, and we should remain cognizant of that.


This article sort of just glosses over any reasons why existing laws aren't enough. It states that existing scientific fraud is rarely prosecuted (e.g. under fraud statutes), so it seems to me the right course of action should be to prosecute it!

In the US at least, it's nearly impossible to commit this kind of malfeasance without committing federal wire fraud - faked research would nearly always be part of a grant application, at least eventually, for example.

Plus, I'm surprised some enterprising lawyers haven't at least tried some massive class action lawsuits. The actual researcher may not have much to go after, but surely their institutions would. If you can get huge class action payouts for the dubious connection of talc in baby powder to cancer, why can't you get a payout here, where (a) the malfeasance was intentional from the get-go and (b) the harms are unambiguously clear from follow-up meta-analyses?

I guess I would like to understand if there is some fundamental reason that existing statutes aren't enough before adding laws.


I think it's rarely prosecuted because prosecuting is risky and expensive. You have a complex technical issue few people truly understand, you are probably not 100% sure there was actual fraud, and there is always a real risk that the court makes the wrong decision. And unless the situation is completely clear, you know that prosecution will discourage honest researchers from studying topics that could be controversial or have a significant real-world impact.

As for class action lawsuits, the researcher who published the fraudulent result is the wrong target. The responsible engineer is not the person who designed the product but the person who approved the design, and the responsible scientist is not the person who published the result but the person who made decisions based on it.


IANAL... The reason you can get a huge settlement in the talc case but not in this one is that people purchased the talc, so it is a product liability issue.

In the research case, people are basing their care and procedures on another person's research. There is no direct payment from the person receiving the care to the researcher, so it is difficult to draw a direct line from "Person A says X" to "Person B gets injured".


What is really interesting is looking at the meta analysis cited in the Vox article:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3932762/

This reaches the conclusion that beta blockers are harmful. However, if you look at the meta-analysis, specifically figure 2, you find that the conclusion is mainly driven by a single trial: the 2008 POISE trial.

If you go to the POISE trial: https://www.thelancet.com/journals/lancet/article/PIIS0140-6...

You find that they discovered fraud in at least some of the hospitals:

" Concern was raised during central data consistency checks about 752 participants at six hospitals in Iran coordinated by one centre and 195 participants associated with one research assistant in three of 11 hospitals in Colombia. On-site auditing of these hospitals and cases indicated that fraudulent activity had occurred. Before the trial was concluded, the operations committee—blinded to the trial results at these hospitals and overall—decided to exclude these data (webappendix 1). "

We have an important question - should pre-op patients be given beta blockers - and the largest, most definitive trials to answer that question have at least some taint of fraud.


Why is it that so many of the PhDs and MDs out there are unable to tell when someone's lying to them about their very own areas of expertise?

Either they're not that smart or the processes aren't very good -- though no single researcher is responsible for their field's poor processes. Either way, we shouldn't assume that any one PhD or MD recipient is an expert until something changes. Degrees, on their own, don't signify expertise or credibility.


> Why is it that so many of the PhDs and MDs out there are unable to tell when someone's lying to them about their very own areas of expertise?

I'd wager that they can actually tell. They just choose to accept it, or, at least, not to do anything about it.

I see it in my own field (software engineering). Sometimes pushing against lies and stupidity is just too hard and tiring.


The standard of evidence in medical research is generally really poor. Most papers have serious problems and reach conclusions they cannot defend. Worse, huge areas of medical research are full of fraudulent work; almost all of psychology and psychiatry consists of atrociously low-quality studies, and even those rarely replicate. There is widespread corruption around the sale and adoption of treatments, and the entire system of peer review and journal acceptance is based almost exclusively on the authority of the main author, not on the value of the paper.

In short, medical research is nothing like physics or the other sciences in its papers and endeavours; its evidence base for most of what it already does is often severely lacking.


This is an excellent question, and your corollary is sound.


A scientist who causes, through willful fraud, the death of people seems to be guilty of something like manslaughter. Using fake data is a pretty clear-cut example of willful fraud, and a researcher fudging data on such a life-and-death question should 100% be held accountable.

Scientists making errors in good faith should on the other hand be insulated from any kind of liability.


You cannot insulate scientists making errors in good faith from any kind of liability if you make the wilful frauds liable, because there is no 100% reliable way of distinguishing the two.


There are cases where people have doctored images in their research data, or completely fabricated data to meet a significance threshold.

There may be ambiguous cases, but there are non-ambiguous cases too.


Or scientists who just don't understand statistics.

A biologist can understand how zillions of proteins interact with each other without understanding how to work with raw data.


We had to pass statistics courses in biology too.


You don't need to be 100% sure. We assume innocence until proven guilty in other contexts. At least some criminals are known to go free because we cannot prove beyond a reasonable doubt that they really did it. However, we get a lot of them. It isn't perfect, but it is a standard.


Despite innocent until proven guilty, there are innocent people in prison or on death row. I doubt that is a standard that any scientist would agree to.


Like I said, not perfect. It is overall the best I have heard of. I'm open to something better if possible.


No liability.


Wait, you're saying, based on your prior comment[0], that scientists who commit willful fraud shouldn't be held liable?

[0] https://news.ycombinator.com/item?id=41329891


Scientists who publish research in a journal that then turns out to be wrong, for whatever reason, should not be held liable for the consequences.

That is what I am saying.


Not that I'm in favor of the proposed measure, but saying that because we can't identify wilful frauds 100% of the time we can't protect the non-fraudsters is just a bit silly, no? You have this kind of problem detecting any kind of fraud.

One test is, is there written communication between people about committing the fraud? If so, there you go.


Of course, such communication cannot ever be faked by interested third parties.


That isn't a new problem though. Should we not enforce laws against fraud because people might be framed? You determine that through investigation.


I'd argue scientists have better things to do than spending their time shielding themselves from liability lawsuits.


Again, not a new problem.


That is wrong, it would be a new problem, because currently it does not exist.


I don't believe you're engaging in good faith here, so I'm not going to reply any more, but if you're interested in having a productive conversation, try to think about what I might be meaning a little more and then reply instead of taking the least sensical interpretation and responding to that. Or, if you like, you might reply to multiple interpretations of what I said if you're not sure which one I mean, and that way we can advance the dialogue.

Have a good one.


Let me sum up the discussion for you. I am arguing that scientists should not be held liable for the consequences of published research that turns out to be wrong (for whatever reasons). That is the status quo.

Now you are saying, introducing liability is fine, as we can deal with that in this or that way. I am pointing out that all of these ways are inherently flawed, to which you respond that yes, this is true, but in other walks of life we are dealing with these things too. To which I am replying, that's fine, but if we don't introduce liability, we will have none of these problems in the first place.

So you see, it is not me who is not engaging properly with the other's argument. So I am happy to finish this discussion as well.


In science, three independent organisations must reproduce the effect. Otherwise it's not science at all; it's just authority.


I would say a better approach would be to have better checks and balances in place. Maybe he falsified research, but why was it recommended as a standard procedure by the European medical guidelines without proper evaluation? I would hold them criminally liable before I hold the researchers liable.


Why? Like, from the practical standpoint I understand why we need procedures that are robust to individual evil actors. But, from a moral standpoint, if you prove unambiguous fraud by a very brilliant person who has been entrusted with incredible resources and freedom by the public, why shouldn’t that person go to jail?

Civil engineers understand the seriousness of their job, and medical researchers should too. (And of course, many do understand, and very few commit unambiguous fraud.)


Agree but…

What if it isn’t a person you can point to, but rather a company? And what if you can’t jail a company, but you can fine them. And what if the fine ends up just being the “cost of doing business” to the company?

Which company received the largest criminal fine ever handed out? Hint, it’s a pharmaceutical company.


A company is made up of people.

A fraud like this (i.e. actually involving falsifying data) would have to be done by an individual or a group of individuals. They can be jailed.

If there are aspects of it (e.g. inadequate procedures and checks that fall short of criminal conduct by any identifiable people), then you need fines that are too large to be just a cost of doing business.


Pfizer was the answer, 2 billion.


If someone legitimately discovers something better how long do you wait - killing people in the process - to ensure it really is better? Mostly we assume researchers are honest (which is probably true).


But that’s true for all medical trials. If someone discovered a new drug that works, why wait for multiple rounds of human trials and then approvals from the proper authorities?


Many millions of lives would have been saved if we didn't wait. There is good reason we wait, but it isn't perfect.


Because the drug could work really well at doing what it is supposed to do, but still generate totally unsuspected and catastrophic collateral effects in other areas. Effects that could stay hidden until some time passes and everybody is using the product.

Nobody wants the tragedy of Thalidomide happening again.


> Mostly we assume researchers are honest (which is probably true).

I'd guess the average researcher is as honest and as fallible as the average human. They aren't gods; certainly they can have personal or political motivations, become blinded by ambition, or succumb to pressure from others or to potential personal financial or power gains.


That seems like prosecuting the police for a crime committed by someone else. How does that change incentives in a productive way?


But it is not like that. The research was a fraud, but it didn't kill anyone by itself. Once it was recommended by the European medical guidelines, it started killing people. I would argue negligence on their part.

It’s like this: if a factory somewhere knowingly provides subpar nuts and bolts, and then Boeing puts them in all their planes without proper testing, and a plane crashes, who is liable? There’s definitely wrongdoing on the factory’s part, but who would you, as a victim, sue first?


> without proper testing

What would 'proper testing' look like and how long would it take? That's the real rub here. Ultimately you have to apply such a policy to everything because you won't know what's fraudulent, so how many lives are saved vs. lost by introducing a delay (which could be years in the worst case)?


Which is why things are the way they are


What way is that? Do you think medical research has overall cost more lives than it saved in the last ~20ish years?

Obviously nobody is happy this happened, but you haven't proposed an alternative that would clearly be better.


The nice thing about running bad studies and cooking the p-values is that you can usually reap the rewards immediately and for a long time, because reproducing studies is lame and no one wants to do it.


Suggestions:

1) Any research institution that receives government funding is required to spend 10% (or 15% or 20% or whatever) of its total budget on replication. If it doesn't, it stops receiving any government funding.

2) When citing a paper, scientists are required to include any replication studies, both successful and unsuccessful.

This would hopefully lead to more replication studies being done, even if it doesn't answer the question of what to do with a study until it's been replicated sufficiently.

The second part would help us gauge the validity of a paper. Papers that base their central premise on studies with multiple independent replications would probably be a bit more trustworthy than papers based on unverified studies.
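One possible toy implementation of that second part (the scoring rule and numbers are invented for illustration):

    # Replication-weighted "trust" sketch: each cited study gets a Laplace-
    # smoothed replication success rate, and a paper is only as trustworthy
    # as the weakest study its central premise depends on.
    def study_trust(successes: int, failures: int) -> float:
        return (successes + 1) / (successes + failures + 2)

    def paper_trust(cited_replications: list[tuple[int, int]]) -> float:
        return min(study_trust(s, f) for s, f in cited_replications)

    # One well-replicated citation and one never-replicated citation:
    print(paper_trust([(4, 0), (0, 0)]))  # 0.5 -- dominated by the unverified study

Taking the minimum is deliberate: a chain of citations is only as strong as its weakest link, which matches the "chains of oversimplification" problem described upthread.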


A close friend of mine was doing her PhD in nutrition (in Germany). She asked me to have a look at the math (statistics) and it was a monstrosity.

I told her that I couldn't read further, because either she wouldn't get her PhD or I would be morally wounded.

She asked me for some examples of errors, and for each of them she replied, with evidence, that this is what "everyone does".

This was nutrition, something that is at least to some extent innate, so there won't be disasters (I hope). The same thing in pharma is a disaster hanging by a thread.


Compare the 2011-14 L'Aquila earthquake trials, when Nature magazine and most of the I-love-science claque rallied behind the scientists who gave incorrect reassurances to L'Aquila residents https://en.wikipedia.org/wiki/2009_L%27Aquila_earthquake#Pro... . Are the facts very different in this case, or is it more that attitudes have shifted since then?


Yes, the people setting guidelines based on non-reproducible or unreproduced studies should be held accountable. But they are part of the same power structure that makes laws, so don't hold your breath.

Literally anybody can write bullshit, and anybody with some cash or connections can get it published; deciding to make something a medical guideline because the text and its metadata look a certain way is about as competent as just using ChatGPT.


This is a drop in the bucket. The real crime is rich corporations using “science” to enrich themselves. Examples: denying that smoking causes cancer, demonising fat while promoting excess sugar in processed food, and look at how many organisations are denying man-made climate change…


This seems like an institutional failure more than anything.

I recall there was a discussion on HN a few years back about the Alzheimer's plaque connection being established on fake data.


People might not want to hear it, but it is going to keep being an issue: we shouldn't force people to "follow the science" when it comes to medicine. Scientists - and science as a whole - do not have the moral standing for their opinions to justify authoritarianism. People should make their own decisions about whether they trust the remedies involved.


Medicine is not entirely based on science, so I wouldn't equate MDs to scientists. Lots of treatments are based on clinical experience which is more anecdotal and traditional than scientific. So much so that "science-based medicine" is a thing.

Don't get me wrong: I trust medicine a lot and it has literally saved my life, but doctors aren't primarily scientists.


You already have the freedom to make your own decisions. What you don't have is the freedom to endanger others. That's when we make laws, and those laws may be informed by science but they are not created or enforced by scientists. They are created and enforced by government agencies.


Viewed skeptically rather than apologetically, it seems like the fundamental concepts of society (like "freedom" and its limits) are nebulous and mutable, and the fundamental processes of society (like how ideas become law) are inscrutable and unaccountable.


> You already have the freedom to make your own decisions.

It’s not like there have ever been consequences like “if I don’t do this I’ll lose my job”, or “I can’t travel here”, or “I can’t attend this school, even online”, right?

That would all be public policy that would never fly by the law. Am I correctly reading your post?


> “I can’t travel here”

Being allowed to enter another country is a privilege, not a right. This is something people who have passports that allow visa-free travel to almost anywhere often forget.

Thus if another country doesn't want you to unnecessarily become a burden on their healthcare system it is not unreasonable for them to demand you've gotten vaccinated. Even if there are risks associated with it, you will have to take those risks to earn the privilege.


I totally agree with this point of view on freedom, and that's my main issue with Ayn Rand followers, who do not understand the basic effects of applied power (besides the fact that she read the title of Kant's 'Critique of Pure Reason', said to herself "he must be criticizing reasoning, science, and the Enlightenment; I must write and publish my basic thoughts on this", and now we have armies of idiots who never read Kant trying to criticize things he didn't say. Some are even famous.)


People frequently confuse freedom and consequences, but choices always have consequences in society.

All schools have rules of conduct, if you violate them your kid gets kicked out. Jobs also have rules of conduct, dress code, etc. And yes, not being vaccinated means you cannot have certain jobs.

The vaccine debate is really quite straight forward. On one side you have people arguing that vaccines might cause damage and they shouldn't be forced to take them. Of course, you are not forced to take them, but then those people argue that you shouldn't have consequences from it like losing your job or your kid getting kicked out of school.

On the other side, people argue that they shouldn't be forced to be exposed to someone who might cause them harm (by transmitting a preventable disease). This group also doesn't want negative consequences by having to hide from these people.

So the argument is similar from both groups, but the difference is that there is virtually no evidence of the harms the first group claims to want to avoid, while there is an abundance of factual evidence of the deaths of people and children caused by the lack of vaccines, or by the spread of disease when not enough people get vaccinated.

Thus, as a society we choose to collectively put in a law siding with the second group. That's democracy.

edit: but I'll note that in actuality, in many cases, people do get to send their unvaccinated kids to school and do end up killing other people's kids as a result. I wonder why we're talking about charging scientists with crimes and not the people who do things we know with 100% certainty cause kids to die.


> there is virtually no evidence of the harms the first group claims to want to avoid

There is not only evidence of harm:

The Link Between J&J’s COVID Vaccine and Blood Clots: What You Need to Know https://www.yalemedicine.org/news/coronavirus-vaccine-blood-...

Scientists discover 'smoking gun' link between AstraZeneca vaccine and lethal blood clots https://www.telegraph.co.uk/news/2021/12/02/scientists-disco...

New Zealand links 26-year-old man's death to Pfizer COVID-19 vaccine https://www.reuters.com/world/asia-pacific/new-zealand-links...

but also lack of efficacy in stopping the spread:

Coronavirus outbreak sidelines ship whose crew is fully immunized, Navy says https://www.washingtonpost.com/national-security/2021/12/24/...

Covid cases hit records in South Korea and Singapore despite widespread vaccinations https://www.nytimes.com/2021/10/01/world/covid-cases-hit-rec...

COVID Cases Are Surging in the Five Most Vaccinated States https://www.newsweek.com/covid-cases-are-surging-five-most-v...


And a vague line between what counts as a covid or vaccine death and what doesn't. Especially old people may have a lot of different things going on simultaneously.


You are clearly trolling as this has been beaten to death. But for the sake of other readers, getting COVID has a far higher chance of clots than the vaccines. Also, the vaccines significantly reduced the chance of death, even if they ultimately couldn’t stop the spread.

So again, the big picture is extremely clear. Thankfully the right choices were made for society.


You're not wrong. Vaccines in general have a favorable risk/benefit ratio. The problem is I frequently see people on social media claiming that vaccines have zero risks and contraindications and that they do prevent transmission. I've seen governments saying that.

This is the sort of disinformation that creates and spreads anti-vaccination ideas. All it takes is for someone to get skeptical and look it up. They will feel betrayed by the so-called "authorities" and will never trust them again. It's even worse when said "authorities" want to vaccinate people forcefully as a matter of public policy. People don't enjoy having their autonomy disrespected. Especially comical are the governments that censor people for posting "fake news" only to end up spreading vaccine disinformation, and when it's pointed out, they double down on those claims and do everything they can to coerce people into getting them.


> the vaccines significantly reduced the chance of death, even if they ultimately couldn’t stop the spread.

Since you admit they couldn't stop the spread, why should the vaccines be mandated?


To stop people dying. Which they did and continue to do.


>> Since you admit they couldn't stop the spread, why should the vaccines be mandated?

> To stop people dying.

Then why should it be forced on an unwilling recipient? Any more than we'd force someone (on pain of losing their livelihood or worse) to give up unhealthy eating habits, unsafe activities, etc.?


Ok, but that's supposed to be the reason we have scientific institutions in the first place. If I want to prove the earth is a globe and measure its diameter, all I need are a couple sticks and a tape measure. If I want to know if a drug or vaccine is effective, there is really no mechanism for me to determine this myself, and I am forced to base my opinion on someone else's opinion.


You could say the same thing about your drinking water or the food you buy.

Modern society is complex, and even simpler historical societies were dependent on cooperation before we got anywhere near the level of complexity and specialization we deal with today.

At the end of the day it’s a trust issue. Studies, be they medical, scientific, or observational, are one way to build trust. There are many others, but I’m not sure how well they scale with the size of a society compared to the systems we have in place today.


Only if they can self-insure for the damage they cause and the cost of fixing the damage.


Follow the money? What financial (or power) incentives are there for faked science?


People should make their own decisions about highly technical aspects of their own surgeries? And then also I presume bear the full responsibility for the outcome?

No, surgeons should make decisions informed by their own experience and, yes, recommendations from medical researchers. How we're doing it is how it should be, the fraud remains the problem here.


Recommendations are optional. There is no enforcement. I might not be able to find a doctor to give me a surgery I want. But rarely does a surgeon force upon someone a surgery they don't want.

An obvious counterexample is electroshock therapy or a lobotomy. Typically there's some due process before a doctor just does something against a patient's wishes.

People don't like the vaccine debate for many, many reasons. Sadly a lot of the discussion is people talking past each other and refusing to acknowledge that the other side does have a point about assumptions that are damaging to their own position.


> But rarely does a surgeon force upon someone a surgery they don't want.

Depending on your definition. ER surgeons regularly force surgery on someone who is unconscious and thus we cannot ask if they want it or not. The law specifically says if someone cannot answer then the answer is automatically assumed to be they want it (at least in most places).


If you are that entrenched in your refusal, you can have a DNR order or similar that you carry around with you or is referenced on a bracelet, etc.


The DNR may not mean anything. It is possible that one was planted on you via fraud. Hospitals have complex policies, written by lawyers, on what to do if one is found on someone unresponsive, and sometimes that means the procedure is done anyway. As someone trained in first aid, I am not qualified to determine whether a DNR is valid, so I can render aid even if it really is valid. (In some cases you may be required to render aid - I used to be on the office first aid team, and I was told I had no legal option other than to render aid.)


It happened to agriculture in Stalin’s USSR

https://en.wikipedia.org/wiki/Lysenkoism


It was a scandalous story: the first seed bank's operators literally starving to death surrounded by food in order to protect their people's future after the war; American attempts to help with corn crops failing (the two countries were already politically distrustful of each other); and an insane, politically favored farmer... literally putting his critics in front of firing squads.

Science makes "mistakes" most of the time, but documents the process to improve the accuracy of models that represent reality.

It is notable that many Westerners recognize the first seed bank scientists, and in particular Nikolay Vavilov.

The legal process can't enforce research ethics, but on rare occasion people are stripped of their credentials for egregious behavior.

Despotism is ugly, and it is arrogant to think anyone is immune to the phenomenon:

https://www.youtube.com/watch?v=TaWSqboZr1w

Have a great day, and have faith the smarter Russians will again figure out a path to peace eventually. =3


Such a good piece of history. I'm surprised I wasn't taught this in school.


Part of this is because the modern academy holds thinkers who are just as fraudulent in high regard today, and is unwilling to hold them accountable.

exhibit A (by far the worst): https://en.wikipedia.org/wiki/Jacques_Lacan

exhibit B: https://en.wikipedia.org/wiki/Deleuze_and_Guattari

exhibit C: https://en.wikipedia.org/wiki/Slavoj_%C5%BDi%C5%BEek

exhibit D: https://en.wikipedia.org/wiki/Carl_Jung

exhibit E: All Chiropractors

Most of the academy holds Freud and most of his followers in very high regard despite none of his analysis being reproducible, falsifiable, replicable, or coherent.

Modern social sciences are infected with a modern version of Lysenkoism.


A big problem in today's political climate is if you question certain scientific findings, at best you get shouted down and at worst you get branded a far right fascist and have your career put at risk.


Fundamentally, scientific rigor and accuracy are often misaligned with larger societal norms and values. A pharmaceutical corporation with a profit-making pill doesn't want to hear about the 1% of users who suffer catastrophic medical conditions as a result of using their product (e.g. Vioxx, with 88,000 heart attacks and 38,000 deaths out of 107,000,000 prescriptions from 1999-2004). Similarly, the Soviet Union's Lysenko tailored his research results to align with Stalinist ideology on adaptability, thereby securing his position in the academic structure - behavior that was remarkably similar to that of Anthony Fauci regarding the origins of SARS-CoV-2 and the efficacy of the various treatments and vaccines that were so highly profitable to the corporate pharmaceutical sector. Reckless virology research that he supported caused a global pandemic that cost at least $10 trillion in economic damage and took millions of lives - but admitting that opens the door to liability, so no.

I've worked with both ends of the spectrum - fraudulent tenured PIs at leading research universities are not that rare, but highly skilled and reliable PIs are more common. The fundamental difference always seems to be record-keeping - frauds aren't interested in keeping detailed records of their activities that can be used by others to replicate their work (since their work is non-replicable). In contrast, the reputable researcher will want such detailed records for various reasons, including defense against false claims of fraud or incompetence, which is quite common if the research results are not aligned with corporate profit motives in areas like pharmaceuticals, fossil fuels and climate, environmental pollutants, etc.

If the powers that be really wanted to reduce research fraud, the easiest way is to make detailed record-keeping a requirement of federally-funded research, with regular audits of lab notebooks and comparisons to published work. This matters, because the problem is set to get worse with the spread of AI tools that make it possible to generate hard-to-detect fake datasets and images. In the past a great many frauds were caught because their fake data generation was so obvious, often just a copy and paste effort.


Has anybody thought about asking the same questions about COVID-19 prevention and treatment research?


Scientific fraud or medical fraud? Are we tarring all of science because of fraud in one sub-field (medicine)? How about social science fraud, where poor economic policies can cause millions to starve?

Before I get pilloried for whataboutism, all I'm trying to illustrate is that the title is hyperbole. Fraud in medical research is definitely a problem leading to serious consequences for patients everywhere. Let's just call it what it is.


One solution would be bonding. If you publish a scientific result, put up some money with a bonding firm, perhaps for a specified period of time. If someone successfully identifies fraud, they get the money. The more money you put up, the more confident you are. This also provides an incentive for fraud finders.
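The deterrence arithmetic behind this is simple (a sketch; all numbers are made up):

    # Fraud is deterred when the expected bond loss exceeds the expected payoff.
    def fraud_is_deterred(bond: float, p_detect: float, fraud_payoff: float) -> bool:
        return p_detect * bond > fraud_payoff

    print(fraud_is_deterred(bond=50_000, p_detect=0.1, fraud_payoff=20_000))  # False
    print(fraud_is_deterred(bond=50_000, p_detect=0.5, fraud_payoff=20_000))  # True

which shows why the proposal needs the fraud-finder incentive: a bond only deters if detection is likely enough.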


Over time you'd see a socially dictated (or even regulation dictated) amount of money go up for bonding if this became regular practice. I don't think adding more of a worry around "can we afford this" would create a positive impact in a world where a) scientists and academics routinely spend more time filling out requests for money (grants/funding/etc.) and justifying their use of said money than they spend actually doing research, and b) there is already a paucity of replication.


And a disincentive to do science for the majority of researchers who weren't born rich. And less money available for doing science.


Yes, we really need to make scientific publishing even more expensive and bureaucratic./s

I appreciate your creativity but the real solution is to fund replication studies and improve statistics education in the sciences.


We actually need less scientific publication. Bonding would encourage a smaller number of higher quality publications.


I agree with your goal of getting more high quality publications but it's not at all clear to me that bonding would effectively lead to that.

I think it's more likely that it would reward incremental studies that don't question the "scientific consensus" and so don't risk getting sued. Not to mention that it would be a major publishing expense; publishing costs are already infamously high and just end up being financed by grant-giving organizations with money that was meant to go toward research.



