One in ten medical treatments are supported by high-quality evidence – study (sciencealert.com)
126 points by elorant on Sept 5, 2020 | 49 comments



I have had failed ankle surgery and have read some research papers on the procedure that was done to me.

Often, the method was assessed by asking patients to score their situation before and after the surgery (e.g., one year later).

For sure, many people try to be positive and give overly optimistic scores. At least I found it hard to admit that the costly procedure had failed, and saying it to my surgeon didn't feel easy.

What I fear is that many research papers are done using patient questionnaires, giving us biased results.


" At least I felt it hard to admit that the costly procedure had failed and saying it to my surgeon didn't feel easy."

That's one thing I have noticed. A few years ago my girlfriend had a failed surgery. She complained constantly during the weeks before the follow-up meeting. In the follow-up meeting the surgeon talked about how well the surgery had gone. My girlfriend basically agreed and they bantered around for almost half an hour. Ten minutes before the appointment ended I lost patience and said "Hold on, guys. This thing hasn't worked at all. The pain is worse than before and she talks at home about killing herself. How do we get out of this?". The surgeon gave me the evil eye, my girlfriend said nothing, and we were basically shown the door soon after.

It was a really weird dynamic. I wonder how many surgeries are scored as successes because patients are afraid of telling the surgeon otherwise. I think it may be a substantial percentage where the hospital/surgeon never hears about problems and there is no independent follow-up either.


Hospitals don't even track revision rates (the proportion of cases where a second surgery is required to 'fix' issues from the first), largely because some surgeons cause many problems and don't want to see the numbers. It's a tragic state of affairs that leads to a great deal of suffering, and I am very pessimistic about the probability of meaningful reform.


They certainly do track it because Medicare will not pay for a second hospital stay in many cases.

https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Ass...

Beware of unintended consequences, though: hospitals are less inclined to permit riskier surgeries on unhealthy patients if they think the risk of readmission is too high. Good luck getting your knee replacement if you are a 300 lb diabetic smoker.


Patient questionnaires with the right questions seem like the way to go.

When I was considering a Bankart repair for my shoulder, the number I looked at was return to sport. Sure, people can be biased about this, but it seems about as objective as you can get: “Are you able to participate in the activities you were able to before your injury?”

If you are getting a surgery without measurable outcomes, why are you getting this surgery at all?


Thanks for that! I think that's a great indicator to note for the future.


What would be better than a patient questionnaire?


Functional assessment? Range of motion, strength, etc.


You need devices which can objectively measure the strength of something like an ankle in multiple vectors of motion. And then algorithms which can combine the data into a meaningful index.
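
Something like this, as a toy sketch of the "combine into an index" part (all reference ranges and weights below are made up for illustration; a real index would need clinical validation):

    # Toy sketch: combine multi-axis ankle measurements into one 0-100 index.
    # Reference ranges and weights are illustrative, not clinical values.
    HEALTHY_RANGES = {
        "dorsiflexion_deg": (0.0, 20.0),
        "plantarflexion_deg": (0.0, 50.0),
        "inversion_deg": (0.0, 35.0),
        "eversion_deg": (0.0, 15.0),
        "plantarflexion_torque_nm": (0.0, 120.0),
    }
    WEIGHTS = {name: 1.0 for name in HEALTHY_RANGES}  # equal weights for the sketch

    def ankle_index(measurements):
        """Normalize each measurement against its healthy range and
        return the weighted average, scaled to 0-100."""
        total = 0.0
        weight_sum = 0.0
        for name, (low, high) in HEALTHY_RANGES.items():
            fraction = (measurements[name] - low) / (high - low)
            fraction = max(0.0, min(1.0, fraction))  # clamp to [0, 1]
            total += WEIGHTS[name] * fraction
            weight_sum += WEIGHTS[name]
        return 100.0 * total / weight_sum

    print(ankle_index({"dorsiflexion_deg": 12, "plantarflexion_deg": 40,
                       "inversion_deg": 20, "eversion_deg": 10,
                       "plantarflexion_torque_nm": 80}))

The hard part isn't the arithmetic, it's choosing ranges and weights that actually track clinical outcomes.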


That doesn't sound very difficult. You could measure the angle that someone could extend before feeling pain, or ensure the ankle could apply a certain amount of pressure before the patient feels pain, etc.


It would be very difficult to bring that to market as an FDA certified medical device. And then repeat the process for every other joint.


I doubt you would need to get the FDA involved at all.

There is basically no risk of harm from such a thing. If all you are doing is using it to verify whether a surgery worked or not, then it's not actually a treatment, is it? Surgeons make their own tools, jigs and tests all the time.

And if for some reason you had to, there are different grades of difficulty with a medical regulatory body. For instance, it is super easy to develop medical tools, harder again to do implants, and harder again to do medicines.

So I want to reiterate. Not hard at all.

Source: I worked for a medical device company that did implants, wound care and medical tools. Surgeons would often ask us for custom tooling or jigs, which we would have made up for them.


I'm not sure if that was sarcastic or not, but you mean devices like hanging scales and string, and a protractor?


Ah, but it's a medically certified protractor! Disposable (for safety of course) and $500 a pop.


It seems to me that squat, deadlift, and an agility test would probably cover that.


well....get to work?


Speaking from an orthopedic perspective (though most fields are similar), there are literally thousands of ways to measure this. They are called (not surprisingly) outcome measures, and they are basically the foundation of almost every medical study. Many are a collection of questions or items that add up to a given score, which is what allows them to be statistically analyzed.

Some are subjective, like pain (VAS, or visual analog scale: “rate your pain 1-10”), ability to do daily activities, “would you have this procedure again?”, return to pre-injury activity level, etc.

Others are objective, like range of motion, strength, bone healing noted on X-ray or CT, tendon/ligament healing observed on MRI, histologic healing observed from follow-up biopsy, rehospitalization rates, revision surgery rates, infection rates, or mortality rates (the ultimate objective outcome measure).

There is hardly a shortage of outcome measures out there, and researchers propose new ones all the time, but they need to be validated as relevant and accurate by other studies before they are widely adopted.

http://www.orthopaedicscores.com
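
To make the "items that add up to a score" pattern concrete, here is a hypothetical 4-item score and a pre/post comparison (the items, point values, and numbers are invented for illustration, not any validated instrument):

    # Hypothetical 4-item outcome score: pain, walking, stairs, return to
    # activity, each rated 0-4 (higher = better), summed per patient.
    from scipy.stats import wilcoxon

    def total_score(items):
        return sum(items)

    pre = [total_score(r) for r in [[1, 2, 1, 0], [2, 1, 1, 1], [0, 1, 2, 1],
                                    [1, 1, 0, 1], [2, 2, 1, 0], [1, 0, 1, 1]]]
    post = [total_score(r) for r in [[3, 3, 2, 2], [3, 2, 3, 2], [2, 2, 3, 2],
                                     [2, 3, 1, 2], [4, 3, 3, 2], [2, 2, 2, 3]]]

    # Paired nonparametric test: did patients' scores change after surgery?
    stat, p = wilcoxon(pre, post)
    print(f"pre={pre}, post={post}, p={p:.3f}")

This is why validation matters: the statistics are trivial once the instrument exists; the open question is whether the score measures what patients actually care about.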


In some studies, they used things like positions and angles between bones (before and after surgery).

Unfortunately, questionnaires might sometimes be the only option. My point is that we should be more sceptical about them.


Technologies exist that give quantitative biomechanical analysis and "before" and "after" comparison, for example when fitting an orthosis or prosthesis.


One in ten medical treatments that have had recently updated reviews are supported by high-quality evidence.

I think it would be worrying if every medical review had high-quality evidence. Just because it's in the Cochrane database doesn't (to my understanding) mean that it is a frequently used treatment.


Bingo - this is not one in ten medical treatments administered, it is one in ten possible medical treatments.


This is the most important comment. The title is misleading.



> Please submit the original source. If a post reports on something found on another site, submit the latter.

Yes please change the link to the original source as per the HN Guidelines.


If this list includes depression treatments, they are examples of treatments for which "high-quality evidence" of the sort that is demanded is impossible.

The only way to know whether a person has the version of the disorder that responds to a particular drug is to administer that drug and see. Psychiatrists try prescribing different drugs until they hit one that works, or give up. Imagine designing a randomized controlled trial for that.

If you need help: first, select a group already using the drug. Split them up and give half a placebo. See which get worse. But first, find someone unethical enough to run it, and to admit it.


The article is based on someone scraping all the Cochrane reviews and doing ctrl-f for the phrase "high quality evidence"... Unfortunately, the headline makes great clickbait, but I think it's misleading: most people are not versed enough in medical evidence and come away with an incorrect impression of the evidence supporting medical interventions today. Even an intervention with a "moderate" level of evidence supporting it has already passed a bar far beyond what most people would imagine.
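
For what it's worth, that methodology is roughly the following kind of text search (a sketch only; the directory layout, file format, and phrase patterns are my assumptions, not the study's actual code):

    # Sketch: bucket each review's text by the highest GRADE phrase it contains.
    # Patterns are ordered high -> very low; a lookbehind keeps "very low
    # quality evidence" from being miscounted as "low quality evidence".
    import re
    from collections import Counter
    from pathlib import Path

    GRADE_PATTERNS = [
        ("high", r"\bhigh[-\s]+quality evidence"),
        ("moderate", r"\bmoderate[-\s]+quality evidence"),
        ("low", r"(?<!very )\blow[-\s]+quality evidence"),
        ("very low", r"\bvery low[-\s]+quality evidence"),
    ]

    counts = Counter()
    for review in Path("cochrane_reviews").glob("*.txt"):
        text = review.read_text().lower()
        for grade, pattern in GRADE_PATTERNS:
            if re.search(pattern, text):
                counts[grade] += 1
                break  # count each review once, at the highest grade found

    total = sum(counts.values())
    for grade, n in counts.most_common():
        print(f"{grade}: {n} ({100 * n / total:.0f}%)")

Even this toy version shows how sensitive the headline number is to phrasing choices, which is part of why the ctrl-f approach deserves skepticism.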

As to your claim about antidepressants, psychiatry and psychopharmacology abound in RCTs, many on treating depression with antidepressants. You are perhaps confusing a common clinical approach for finding the best medical management for a single patient (outside of clinical studies) with how clinical evidence is procured in the first place. There are more Cochrane reviews under "mental health" (679) than under "orthopedics" (478).

Also, when RCTs are proposed, whether placebo-controlled or not, they are reviewed by IRBs and ethics committees.


Yet we very frequently read about RCTs that failed to find any significant benefit from this or that antidepressant, with the implication that psychiatrists are a bunch of quacks and people with depression are malingerers.

Those RCTs are administered by real people who are each either unaware that there is no way to get meaningful results from such a trial, or motivated to produce meaningless results that can nonetheless be published to mislead.


You don't need to give people a placebo, you can test against a known good alternative. This is frequently used in just such cases, where an effective treatment is already known, and denying patients in a trial that opportunity would be (as you say) unethical.

https://en.wikipedia.org/wiki/Scientific_control#Positive


There are plenty of studies out there on whether a particular drug works or does not work in treatment-resistant depression.

So, to answer your question: they do test for that.


That makes a big assumption that there are multiple types of depression caused by different chemical imbalances. Not saying that's a wrong assumption (e.g. for cancer the analogous assumption that many different mutations can lead to similar cancers is true), but it is a big assumption, and there are plenty of psychologists who would disagree with it.


"Scientifically speaking, there never was a network of validated hypotheses capable of sustaining a full-blown, global chemical imbalance theory of mental illness. Moreover-and here we come back to Myth 2-psychiatry as a profession and medical specialty never endorsed such a bogus “theory,” when judged by its professional organizations, its peer-reviewed publications, its standard textbooks, or its official pronouncements."

https://www.psychiatrictimes.com/view/debunking-two-chemical...

In the general public, there's a logical fallacy that people seem to fall for when thinking about psychopharmacology. When people learn that selective serotonin reuptake inhibitors (SSRIs) are used to treat depression, it is often followed by some mechanistic explanation of a "chemical imbalance" resulting from "not enough serotonin."

Yet people don't make this mistake when talking about opioids or painkillers. If you break your arm and the doctor prescribes codeine, we don't conclude that "you broke your arm and all your endogenous opioids came spilling out, so we had to replace them with codeine." Although Tylenol/paracetamol is useful for treating pain, we don't hear a lot of talk about Tylenol deficiencies.


This number is shocking, but it is not weighted by usage. Just because a medical treatment exists doesn't mean it's in common use.

On top of that, I find it 100% believable that doctors use understudied treatments; the reverse order makes a lot more sense: treatments get studied after people have tried them and come to believe they warrant further study and more frequent use.

Consider coronavirus treatments: at first we had very little idea what to do, beyond inferring from past viruses with a similar symptom profile and reasoning from first principles. Later on, we slowly but surely identified effective treatments (corticosteroids, prone positioning, ...) after people tried them. Even later, we had descriptive studies of past effectiveness. I assume most treatments go through a similar progression (covid treatments went through it on fast forward).

Medical studies are slow and retrospective. Fundamentally that gets you this 1/10 figure.


Cool, but what was that number 5 years ago? 10? 20? 50? Are we making progress, or have we only just now established a baseline? The fact that we now know it's "1 in 10" lets us draw exactly zero conclusions about whether that's good or bad, and more importantly, better or worse than before.


Neil Postman has a chapter in his book _Technopoly_ about how medical technology has changed what people expect from medical treatment. The short of it is that we've become so reliant on technological solutions that we demand the maximum treatment available, and we litigate when the doctor doesn't order the maximum number of tests or prescribe the maximum treatment, rather than relying on the doctor carefully listening to the patient and meting out more protracted treatments. I think it's good that researchers try every possible avenue of research, but our obsession with increasingly technological “solutions” is driving an unhealthy environment of mistakes in treatment, in the interest of delivering the maximum possible.


They talk about blind trials being better, but then:

>> An exercise trial cannot be "blinded": anyone doing exercise will know they are in the exercise group...

How exactly can knowing which group you're in cause a placebo effect or any other relevant effect in that case?


Blinding is a way to actually reveal placebo effects, and double blinding (blinding the examiner to which group the subject is in) helps negate observer bias.


Medicine is old. Many uses of penicillin are not well studied, because it was the first real antibiotic and we just know it works for what it does. Nobody can get funding to study penicillin uses unless it somehow magically can cure something new.


There was an article in The Atlantic a few months ago about how fluoride in water probably works, but isn't particularly well studied, and long-term side effects aren't really understood. At least with fluoride, it's so widespread that you'd expect any serious issues to have surfaced by now.

https://www.theatlantic.com/magazine/archive/2020/04/why-flu...


You mean like increase in autism, depression, and decrease in population mean testosterone?


There are 55 Cochrane reviews if you search for penicillin on their site: https://www.cochranelibrary.com/advanced-search. Infectious disease is one of the most academically oriented and evidence-rigorous disciplines in all of medicine. As the requirements for levels of evidence are ratcheted up, many old and established treatments are reviewed and undergo additional study to establish them under the new expectations.


Also, there’d be significant ethical issues with trying to prove that some long-used treatment works. There’s really no way to get a control group.


This is a bit like the dental floss thing of a few years ago - someone suddenly realised there was no evidence backing the use of dental floss; cue media frenzy that dental floss is useless. No it isn't; it's just that no-one had bothered to study it. I'm sure there isn't any high-quality evidence that 'removing your hand from a hot stove decreases burn injuries.' More evidence is a good thing, and is needed in many cases, but there is also a heap of questionable medical treatment out there, perpetuated by the status quo.


That is not an accurate summation of the evidence. 11 publications were used to analyze the efficacy of dental floss, and some dozens more for other inter-dental cleaning methods.

https://onlinelibrary.wiley.com/doi/full/10.1111/jcpe.12363

Characterizing this as 'absence of evidence' is incorrect; the specific question has been investigated by several independent investigators, who consistently failed to find evidence of efficacy. Even with much higher sampling and statistical power, any observed effect would likely be well below anything that could justify flossing as a health recommendation, especially over IDBs.


I stand corrected.


Very few of them are, because each case is different.

Another way to put this: EBM is only good for about 1 in 10 treatments.

Great for writing papers, though.


> 22 percent had very low-quality evidence.


Well, yes.

Experimenting on humans is unethical and (mostly) illegal.


Treatments with bad evidence are no better than experiments. Treatments also only become acceptable over time, after we observe that we're probably not killing or seriously hurting people. Of course, sometimes we just rationalize it when the treatments actually are killing people, if the people profiting from them aren't actually engaging in outright concealment and fraud.


But, whenever there is a single study claiming something new, it quickly goes into mass media (including HN), and calls for "skepticism until this is reproduced a couple more times and the positive results significantly outnumber the negative ones" are quickly downvoted.



