When I started reading the article I thought the title was a bit click-baity for Science. But then I noticed that the theory it refers to was not the amyloid hypothesis itself, but the toxic oligomer hypothesis that emerged later. That theory is pretty much the main remaining explanation for why every single drug targeting Aβ has failed while still keeping Aβ fibrils relevant. It's a very convenient theory: it preserves the main original observations about the fibrils while explaining why therapies that target them don't work.
One really important point, mentioned in the middle of the article, is that these systems are very difficult to handle, and it's almost impossible to make many of them nicely reproducible. Fibril and oligomer formation depends a lot on the environment and reacts to tiny differences.
I find this kind of fraud deeply frustrating; there is so much wasted effort in the wake of faked high-profile results.
It's more insidious than that. In a field where the core subject is difficult to handle, it is not unusual for a different lab to fail to recreate known experiments on their first try. This doesn't mean anything is wrong with the original research; it might just mean that the second lab didn't control all the variables.
The part that is difficult and annoying is that the variables are not necessarily known; it takes a lot of time and experiments to actually nail those down. And even then, for some sensitive stuff, every detail can matter: the exact type of tube you ran the experiment in, or the vendor, batch, and age of every chemical you used. Very often you can control this enough to get consistent results within a set of experiments, but that kind of thing is really hard to control across different labs.
The social default in fields where work is hard to reproduce is that results still get into good journals and are believed up front. Usually this is fine. A lot of work that is hard to reproduce but relatively important, or that opens up many research directions, does eventually get reproduced, though it might take a year or more. And usually the first party does their best to help reproduce the results.
I wonder what it would look like if you weren't allowed to cite papers in the field until five independent labs had managed to reproduce the result, and all the labs that reproduced it got to share in the credit (e.g. if there's a Nobel prize). Would that change the incentive structure and create better outcomes? Science isn't science until you have a repeatable process in place where observations match outcomes and the underlying theory. Someone proposing a bold new theory without the ability to test it usually gets significant credit, even if posthumously, for being a bold visionary, which seems to have been a sufficient motivator throughout time for the really foundational scientific victories.
The effort required to reproduce a result, especially an exotic one, is often prohibitive. The reward for successful reproduction is near zero, and if you fail, it's often absolutely zero: you are not going to be able to contest a result with a failed replication. The original authors can simply say that your reproduction was flawed, if they need to answer at all.
A journal that requires five confirmations to cite a work is a journal that will soon have zero submissions.
The main reason people attempt reproduction is to continue that line of research, and that often starts with reproduction plus a slight tweak. This is why you find a lot of papers like this. This is also how irreproducible results become sort of known about but not challenged in many fields.
So what does the incentive structure look like? Do you get tenure for running a lab that replicates? Are your postdocs in line for tenure track positions? How is it funded: do you put the secondary replications into the original grant? What happens if you can't find X number of labs to run the test?
It's an interesting idea, but it all comes down to how the incentive structures work out.
That presumes the original lab didn't also intentionally leave out important details to make it harder for others to replicate their research quickly. Why list all the details needed to replicate your research and possibly let other labs catch up and take your next grant? Much better to add a veneer of listing details but leave out a few of the really important ones.
I heard from a chemistry researcher that it is also common in that field to give vague descriptions of synthesis procedures in order to allow authors to churn out more papers.
From reading online it feels like the most important thing is to be first, and to prevent anyone else from even potentially moving forward on your topic. Actual scientific progress is secondary.
Yeah, I was just asking why you thought it was appropriate to extrapolate (whether you had any reason to believe that the shady incentives you mention translate to a different field, versus them being completely different in this regard).
To me this is why the emphasis on “reproducibility” is misguided. You might end up failing because you lack the technical capability. Conversely, you might “succeed” by making a mistake. We should be focusing on coming up with new ways to test the same hypothesis.
Basic statistics tells us that if we can reproduce repeatedly and reliably, then even a relatively small number of reproductions raises the net significance of the result (which also diminishes the room for fraud, since fraud would then require a larger conspiracy instead of people simply being taken at their word).
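For what it's worth, the arithmetic behind that point is just Fisher's method for combining independent p-values: a few marginal replications combine into a much stronger joint result. A minimal stdlib-only sketch (the function name and the example p-values are mine, purely illustrative):

```python
import math

def fisher_combined_p(pvalues):
    """Combine independent p-values with Fisher's method.

    X = -2 * sum(ln p_i) follows a chi-squared distribution with
    2k degrees of freedom under the joint null hypothesis.
    """
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    # The chi-squared survival function for even df = 2k has a closed form:
    # P(X > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    half = x / 2.0
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= half / i
        total += term
    return math.exp(-half) * total

# Three independent, individually marginal replications...
reps = [0.04, 0.03, 0.05]
print(f"combined p = {fisher_combined_p(reps):.4f}")
# ...combine to a p-value far below any single one of them.
```

Faking one marginal result is easy; faking the joint significance of several independent replications requires every lab to be in on it.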
If you think reproductions have failed then it’s on you to try to have a cogent explanation for why if you’re personally motivated by a belief that the experiment is actually valid.
The particle physics folks also have a different approach these days where they blind the ability to see results and do multiple cross checks of each other’s results to validate the data (because the underlying experiment is so expensive to run). That can also work in theory although hard to say what the success rate of that approach is just yet.
indeed, the initial experiment is not immune to such concerns, and might itself have succeeded by making a mistake or failed because it lacked the technical capability
Would it be more convincing if labs corroborated other labs' findings? Like Lab A found that XYZ impacts LMQ, and Lab B, following Lab A's findings, was able to replicate the finding within some margin.
I worked for a pharmaceutical company that tried to open a second factory producing more of the same thing that it was already successfully making and it never worked. They built the entire factory, it didn't work, they spent years and millions of dollars trying to debug it, it simply never worked out. They ended up just essentially running the US factory at three shifts to meet the increased demand. To this day I don't think anyone knows why the European production line never worked. Things can fail to reproduce without fraud.
Very fascinating. Were you at least able to pinpoint a part of the production process that produced different results, or was the difference only visible once the final product was "assembled"?
I was not closely involved with the project. They essentially built the first one to "spec" like "use this kind of steel in the tanks" and the remediation process was essentially tearing those out one by one and "use the steel that we used in the first tank."
Sounds like a process similar to how you would debug a difficult bug in a big and/or complicated codebase – through trying to isolate the code causing the bug, comparing the buggy code with code that's known to work, replacing other parts of the code with dummy code/mocks etc. And sometimes a bug seems impossible to solve regardless of how much developer time you throw at it, and you just have to ship the product with the bug (seems to be especially common in the game dev business).
It's not even the toxic oligomer hypothesis that is undermined; it is only the role of amyloid beta as a primary driver of Alzheimer's pathology. This isn't out of nowhere either: there's been plenty of data finding no strong association between Aβ levels and pathology. Most data seems to point to Aβ being more of a supporting element, with tau amyloids actually being associated with toxicity and the resultant functional deficits.
1. Yes. Science has a snowball effect as a result of committee-driven grant decisions. Research in a hot current topic attracts more grant funding more reliably.
2. It does because some fields of science are intrinsically long-duration, highly sensitive to variables, or only testable at scales that exceed our experimental capability. That's just a consequence of physics and natural laws of reality. E.g. nutrition, chemical toxicity, economics, high energy physics.
3. We can cheat and find proxies or more tenable micro-systems to experiment on. But often those have their own problems (e.g. rodent models) or aren't feasible.
Reproducibility is a goal of science. It's not always an achievable goal.
When it's not, we do the best we can, as with drug testing pipelines.
In general, anything that's at the bleeding edge of theoretical understanding and intersects with commercial interest or draws social/media hype has a relatively low probability of standing the test of time. Whether that's replicability, applicability, or just validity.
The most exciting hypotheses tend to go one of two ways. They are rapidly supported with independent evidence, by being applied or replicated. Or they draw a lot of attention and money which helps them persist in spite of, or in the absence of, evidence.
Charlatans, purveyors of snake oil, and (most often) people who for whatever reason don't want to be seen to be wrong - they exist in every walk of life. Science is the same. The incentives in the system strongly select for these people in the 'leading hypotheses' space.
Everything is extremely hype-driven -- it turns out that "cannot confirm X" isn't very compelling for journals, etc. etc. or even the news cycle. Journals, etc. thrive on exciting new findings... and that tends to lessen the critical looks.
Of course, there are lots of other things to this problem: Very narrow fields where people absolutely know who their "anonymous" paper reviewers will be, and so must include even extremely tangential references to those reviewers' papers, etc. etc.
Usually the sciences self-correct eventually[0], but that's only because there is such a thing as objectively verifiable facts and overwhelming statistics in science.
[0] Unfortunately often as slowly as "one funeral at a time" (Max Planck, I think).
1. Twitter bot / misinformation research. A surprisingly large field in which papers are almost never replicable because they don't supply the actual tweets, but only opaque IDs and classifications. Trying to cross-check them is futile because by the time you tried months later many of the accounts have been suspended. They could easily supply the contents of the tweets they scraped along with account metadata, but don't. The few times I did deeper checks of these papers I always found some accounts that were identified as bots but weren't suspended, and on manual inspection were very obviously human. This field also has the problem of being increasingly based on ML pseudo-science.
2. Germ theory! It can't explain several aspects of the epidemiology of respiratory diseases. It's probably not wrong but is certainly incomplete. As we saw with COVID, models based on simple germ theory always make wrong predictions, but this problem pre-dates COVID. It was known for a long time that standard germ theory fails to explain the behavior of influenza, for example, why it's seasonal, why waves peak and enter decline before everyone is infected, why variants disappear totally instead of coexisting, why flu season seems to start everywhere in season almost simultaneously, why there have been outbreaks of respiratory viruses in totally isolated environments like arctic bases, and so on.
The issues here are deeper than reproducibility though. Even if bot papers were reproducible they would still be wrong because their methodologies are invalid and cannot support the stated conclusions. It's important not to get too focused on mere replication.
> 2. Germ theory! It can't explain [...] the behavior of influenza, for example, why it's seasonal,
Huh? I thought that's so well-known (must be, if even I have heard of it) that it goes without saying: Spring-summer, people go out and breathe fresh air a bit apart; autumn-winter, everyone goes inside and coughs their germs at each other.
> why waves peak and enter decline before everyone is infected,
The more people are already infected, the fewer and further between the as-yet-uninfected are, so that seems quite reasonable. (Pretty much inevitable, really, isn't it?)
> why variants disappear totally instead of coexisting, why flu season seems to start everywhere in season almost simultaneously,
See first point above.
> why there have been outbreaks of respiratory viruses in totally isolated environments like arctic bases, and so on.
Someone always flies in supplies...
I mean, WTF -- "Germ theory"? Isn't that what it was called in the 19th century, when there still was any doubt about it? What next; "Gravity is just a theory!"?
That's a popular ad-hoc "street" theory (it's easy to come up with those) but formally speaking that idea has never been tested or proven; it's not a part of germ theory. If you look at epidemiological models, they rarely model seasonality for that reason. The models used for COVID for example, didn't have any notion of seasonality in them for a long time and many still don't, because what you just said isn't actually scientifically accepted.
And it would probably run into some practical problems if you tried to nail it down, like the fact that many people are office or factory workers whose time spent outside doesn't change all that radically during summer, certainly not enough to make the difference between explosive spread and total eradication.
"The more people are already infected, the fewer and further between the as-yet-uninfected are, so that seems quite reasonable"
That's why the wave slows down, yes, but again, this ad-hoc notion doesn't work. Try it out on paper or code up a quick epi model yourself and then compare against real-world case data. The waves always end long before predicted, even when there are still tons of uninfected people available to infect and people wandering around who should easily infect them.
This is one of the reasons why COVID modelling failed so badly. They coded up the exact simple germ theory model you're describing here, and of course it yields a single giant wave whereas what is seen in reality is a long series of small waves that start and end in ways that aren't predicted by the theory.
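The simple model being described is the textbook SIR model. A minimal sketch (parameter values here are illustrative, not fitted to any real outbreak) shows the single-giant-wave behavior:

```python
# Minimal SIR ("simple germ theory") model: susceptible, infected, recovered.
# beta = transmission rate, gamma = recovery rate; values are made up for
# illustration, giving R0 = beta/gamma = 2.
def run_sir(beta=0.4, gamma=0.2, i0=0.001, days=300, dt=0.1):
    s, i, r = 1.0 - i0, i0, 0.0
    history = []
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # mass-action transmission
        new_rec = gamma * i * dt      # constant per-capita recovery
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append(i)
    return history, s

history, s_final = run_sir()
print(f"peak infected fraction: {max(history):.2f}")
print(f"never infected at all:  {s_final:.2f}")
# The model produces exactly one large wave; with R0 = 2 roughly 20% of the
# population escapes infection entirely, and nothing in the model generates
# the repeated small waves or seasonal on/off pattern seen in real case data.
```

That mismatch between the one-wave output and the many-wave reality is exactly the gap the comment is pointing at.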
"See first point above."
See my reply. I can assure you that the epidemiology of influenza really isn't as simple as "people going outside in summer". Take a look at some of the research papers from the 80s exploring this topic. There are still plenty of people interacting closely indoors even in the summer months, yet flu completely disappears. It's not something that ties back in any obvious way to the accepted theory.
"Someone always flies in supplies..."
That's not what "totally isolated" means. In the cases in question there was no contact with the outside world whatsoever. In one case, at a British polar base, they mounted a very thorough investigation after an outbreak of cold virus after 17 weeks of total isolation. They checked if any new supply crates had been opened, etc, but no. They couldn't identify anywhere that new viruses might have been introduced to the base.
One of the more popular sub-theories (though largely ignored by epidemiologists) to try and explain these things is the possibility that many people are continuously infected at a sub-symptomatic level, that the immune system never 100% wipes out the viral infection in these people, just keeps it in check. Then something happens to slightly knock the immune system out of balance for a moment, like a sudden change in temperature, and the infection is able to re-gain a foothold and starts replicating out of control again. So that'd be why people just show up everywhere at once spontaneously infected without any obvious index cases.
The standard of truth in science is reproducibility. But publication usually happens before results have been independently reproduced. Papers are peer reviewed, but generally peer reviewers are just ensuring that the results are sufficiently interesting to warrant publication and there aren't any obvious methodology mistakes.
For the purpose of funding, publication in important journals or conference proceedings is the important thing. And the public generally tends to treat publication as the standard of truth. But publication kind of works on the honor system and is susceptible to fabricated results.
Maybe there should be more funding towards reproducing (or not) important results?
One of the issues is that, by chance, the models used can exhibit unexpected behavior that seems to confirm that the initial hypothesis is sound.
And this is not rare. Sometimes it is fraudulent: people know things are faulty but continue anyway. In other cases it could be out of ignorance of the phenomenon behind the faulty behavior, or simply because that phenomenon has not been discovered yet.
Oh my god. If true, and it looks to be true, this is huge.
The amyloid beta (Aβ) hypothesis has always been fishy but this paper is basically the bedrock of the current investigational trajectory.
The Aβ hypothesis was almost dead in 2006 when this method was invented and the results were posted, and it sent shockwaves through the research world. Since then, NIH spending on Aβ research has gone from $0 to $290 million, all, it seems, based on a lie cited over 2000 times in further papers.
The article is pretty convincing as is the independent verification that not only concurred but found additional evidence.
>> it seems based on a lie cited over 2000 times in further papers.
Since papers tend to get published only if they have positive results, what does it mean for thousands of publications all citing a fraudulent paper? This seems really strange. If the first 1000 failed to produce results and were partially based on that original paper it should cast significant doubt on it, but again failures are rarely published.
Imagine I post a fraudulent article that "proves" that small rocks of a certain size and color keep away tigers. You won't then see 200 articles saying they tried to replicate this and failed. What you will see is 2000 articles dealing with how to find rocks of this exact size and color, citing my paper to support why it's relevant to be looking for rocks in that category.
Citation doesn't mean dependency. It just means the paper was mentioned. What it does mean is that each and every one of those papers must be re-examined with extra scrutiny to see if they hold up without the cited paper.
A lot of papers that came after were producing results in the framework it established.
Because of the nature of a disease like Alzheimer's not many studies can easily measure the final effectiveness of their addition to the space on patients.
To me this is one of the most damning things about this: all the people citing these papers and probably burying negative results. Yes, fraud is unacceptable, but what about all the people using it to further their careers?
These types of cases tend to focus on the fraud, but then everyone eventually walks away, and there's no larger discussion of how the field enabled it to take such a hold.
I don't buy the idea that it's all so finicky and involves so many variables to get right that nobody knows what's causing the lack of replication. I've heard whispers like this about other big effects in other fields. People know what's going on, and often it's right there in the published results, but ignored.
If this is true, it is beyond fraud. These are crimes against humanity.
My paternal grandfather got diagnosed at around 70, and my father is 62 now, definitely getting more forgetful but normal for 60+... Still, I worry about him regularly. My father is a clinical psychologist, he's helped people all his life.
This news makes me so angry I can hardly see straight. I genuinely hope this sociopathic waste of skin gets Alzheimer's himself and has to relive every moment of his disgusting crimes as his brain slowly turns to mush and he forgets every person he ever loved (if he is even capable of love...).
I appreciate the sentiment, but as good as it felt to blow off some steam in this thread this morning, I'm not the type of person who sends deathwishes to people's work email...
Understandable. My condolences to you and your family on having to go through the hell you have had to.
When a bit calmer, I would still encourage you to reach out and find constructive ways of dealing with the grief.
That may commonly be to write letters/emails to those who have affected you. It seems that Dr. Lesné's work has had that effect on you, and it may be helpful to write to him. Academics don't interact much with the public, and having someone write to them does help focus their work and right wrongs. You may come to new understandings of the doctor's efforts and of your own self.
Again, best of luck and I hope you find peace and guidance.
This is weirdly confrontational. My therapist would not recommend writing an email to a professor who wronged me, even on some abstract level. That's not how a counselling psychologist would suggest confronting one's grief.
The main thing is that it's never valuable to exactly replicate a study, from a commercial, career or publication perspective. Other studies may 'incidentally' cover the same ground, but then there are any number of variables which can explain why "this seemed to be different, but".
That being said, numerous papers have already been published undermining the "amyloid beta as primary disease driver" theory. Or at least, finding no statistical correlation between Aβ and neuron death, that sort of thing. The most plausible hypothesis I've seen is that Aβ is just a supporting element for another amyloid, tau, which does correlate with toxicity.
This guy might well have cheated, but how could he be responsible for the huge mess that is amyloid-β research (and the amyloid-β lobby, as told by Statnews [0])?
Sylvain Lesné has no Wikipedia page. He seems to publish roughly a paper per year, while some famous scientists publish one every month. He is not at the origin of the amyloid-β hypothesis, which is more than 100 years old.
What's more, there are thousands of papers published about Aβ*56 and Alzheimer's disease. How could this one guy be responsible for this mess?
And if what he published was wrong, why was this discovered only 18 years later, if thousands of scientists are working in the field?
My understanding is that there is a need to find a scapegoat for the amyloid-β mess, and pointing to an obscure guy is in the interest of many big fish.
The OA points out it was, or was one of, the most cited papers in subsequent Aβ research. They make the argument that it has misled many other researchers and caused a waste of many millions in funding. He's not an obscure guy in the field of Aβ research.
So the observations it made were pivotal to expanding research, but none of the following studies discovered counter-evidence? It seems like something is missing here.
I remember the story about the addiction "study" ("Porter Jick") that was used as the basis for declaring OxyContin "non-addictive." It was dramatized (but fairly accurately) in Dopesick.
It was a letter to the editor, by a doctor, describing a small study on hospitalized patients, and was seized upon by Purdue, as the basis for their entire sales pitch.
When there's money to be made, people can look the other way, quite easily.
I worked in a lab doing research based on that amyloid hypothesis (working on other projects). The only things that came out of it were molecules that were really good at messing up the assays used but had zero biological effect in vivo. So in the end the lab had optimized the protein and cell assays to be more sensitive to artifacts. This wasn't done on purpose, just a combination of human and institutional factors that combined into a perfect storm. Several dozen academic papers were published on that... That's what happens when you have isolated labs in a university that doesn't favor and nurture collaborations but focuses instead on political and economic aspects. Now all the people who took the wrong decisions have either retired or got fancy positions elsewhere.
Intentionally diverting resources and skewing research around one of the fastest-growing epidemics of our time seems like a huge crime against humanity that has not been criminalized yet.
Also, anyone who used the results without sufficient due diligence should be barred from working as a researcher or receiving grant money. They are not guilt-free either.
What about a researcher who measures the kinetics of Aβ aggregation in vivo, performs X-ray crystallography to determine the structure of this (or another) amyloid, or runs simulations to determine the binding free energies of various antibodies with the oligomer, and cites this paper as the reason the work has clinical relevance?
I know people in the field who do all these kinds of research and don't know much about mouse experiments, or even about the experimental techniques used to obtain these results. To me, they are victims of this fraud more than anything.
Are they supposed to do a forensic analysis of every paper they cite? It's sad if it has come to this point.
Reproducing the original result can be prohibitively expensive and difficult in many fields (ideally, this wouldn't be a concern - but generally, research groups aren't rolling in cash and resources, plus you need to be fast not to get scooped for the next paper and lose future grant money)
Given the replication crisis, more questions should be asked. Everyone knows a disturbingly large number of papers are on shaky foundations, but nobody is good at picking which ones
Unfortunately it's probably not a serious crime. It would have been in the private sector, see Theranos, but not in academia. The USA is rather unusual in having the ORI, but it's toothless and issues only a handful of announced sanctions per year. There aren't laws on the books, nor prosecutors and investigative forces in governments, dedicated to pursuing scientific fraud.
So I'm afraid to say that assuming the allegations are true, they will probably get away with it. It's absolutely standard when these sorts of things are discovered that everyone sweeps it under the rug as vigorously as possible.
Fixing it would be difficult. There are a huge variety of ways to produce fraudulent science. Even coming up with a law that captures half of them is fiendishly hard, and how to do enforcement in a timely manner? In this case capitalism came to the rescue because the investigation was funded by short sellers, so this has to be one of the best arguments for short selling around. But by the time there's a publicly listed company whose share price is dominated by research suspected to be fraudulent it's way too late.
There's probably also a fear of looking too closely. There's a culture of coverups in science that I've seen first hand. It's deeply unpleasant and breeds the suspicion that they do it so blatantly because it's become a way of life, because they know there won't be any consequences even if they're called out on it. If you start going after image tampering, well, it's only a tiny next step to say that if your paper reports a mean that's statistically impossible given the data set then that's also fraud. But then you'd have to investigate and mount prosecutions for a significant fraction of all psychology researchers. And then you're going to have to make it a crime to not share data on request as otherwise incriminating data is always going to be inconveniently lost. And then you're up to 90%+ of researchers facing action. Draining this swamp would be very hard.
Based on the comments from Alzforum, the relevant academics do not see this as 16 years of research in the wrong direction. They seem to see it as an invalidation of one branch of the research, while the overarching theory is still intact.
The cynic in me suspects that they're saying that to maintain their (presumably funded) status quo, but not being even remotely knowledgeable in the subject, I have no idea whether that suspicion is well-founded.
The science is only more reliable than non-science, not reliable in any absolute sense. And scientists are as fond of superstition as anybody.
Masks were described as useless for countering COVID transmission because of what turned out to be superstition around "airborne transmission", itself finally traced to a result that properly only applied to tuberculosis.
Belief in ivermectin efficacy was a similarly widespread superstition among mostly non-scientists.
We have generally had much better results from science. Science was finally obliged to abandon its "airborne transmission" model by people who knew better publicizing correct information. But most ivermectin fans still cling to it.
> The science is only more reliable than non-science, not reliable in any absolute sense.
What "the" science?
The scientific method is proven to be a very good way to make predictions about the world based on observations. The body of scientific work built up is immeasurably valuable and incredibly good at predicting things.
Some random corporation or politician or person claiming to have The Science on their side, and that anybody who disagrees or questions them is a heretic and an unbeliever? That's not reliable and it's not science.
"The science" as a body of received knowledge refined from the rough consensus of a moment among working scientists is something distinct from the practice of actually doing science. The latter mostly interests those doing it, and mostly frustrates everybody else with its apparent wishy-washy attitude toward questions of public policy.
There isn't a "wishy-washy attitude", there is respect for the fact that the questions of public policy can't be answered by the consensus or lack thereof.
Only politicians and policy advocating 'scientists' cough Fauci cough are confident enough to make proclamations about the state of science. To their credit, no one seems able to hold them accountable when they falsely declare consensus and silence the voices in opposition
"Superstition" does a disservice to the fact that many scientists internationally were reporting on efficacy with ivermectin.
Meanwhile, the information coming from "reputable" sources around the world has proven to be less than accurate, if not a complete misrepresentation.
Edit:
There is no such thing as 'the science'. There is science. Which is testing and questioning to determine facts. Anything more must be "trust in the science of..." some actual thing. And that actual thing is not public policy.
We have sufficient evidence that the CDC and NIH are interested in appeasing their corporate sponsors and otherwise believe themselves to be beyond reproach
Many scientists (and... others) internationally were publishing claims of efficacy that collapsed under light scrutiny. The small handful that did not collapse showed a very small effect that could be accounted for by decreased parasite load.
Reputable sources said, correctly, that there was no good evidence that ivermectin worked against COVID. Later, they were able to say they had good evidence it did not work, a stronger statement. They could not honestly say that, early on, and did not; but we all know that almost everything doesn't work. So, anything claimed to work deserves skeptical scrutiny.
Biochemically, it would not have been surprising if ivermectin helped some. But "not surprising if" is a very, very long way from "does". Reputable sources made, in the end, the correct call. Meanwhile, people draining the ivermectin supply did themselves no good, but made it harder for those afflicted with parasites to get needed treatment. Those using ivermectin instead of getting vaccinated made themselves carriers, contributing to spread and mortality.
I don't find this to be likely or relevant. It's commercially available and OTC worldwide, with generics.
It does have an impact by reducing parasitic load and inflammation. This can be seen in countries with high rates of parasitic infection.
Reputable sources in the US hammered the one-size-fits-all solution that runs contrary to immunization history and theory.
Vaccination does not 'stop the spread' which is an absurd point to make at this stage. That is absolutely evident from case counts across the US as vaccinations increased.
What has been clinically demonstrated is that vaccinated people who get COVID anyway shed virus for a shorter period. And, also (this is important), don't die. We will all catch it, sooner or later, likely as not from our pet. Some of us might be so lucky as not to notice.
In fact, ivermectin supplies really were depleted for quite some time.
Oh, you might want to get up to date on those studies.
Boosted and vaccinated people are carrying viral loads longer than the unvaccinated as of Omicron.
And more importantly, it would seem that all deaths in recent studies of the later omicron variants were in the boosted and eligible for booster cohorts.
You have to think that this type of news does add ammo for climate science deniers, covid vaccine skeptics, and ivermectin/hcq proponents to make the case consensus science != absolute science.
Regardless, I do think most people agree that all science should have the opportunity to be subjected to earnest and thoughtful scrutiny, and it shouldn't be a career ending endeavor to do so.
One problem is the amount of press and hype that this type of research receives in the mass media (same thing for the "cancer cure breakthrough"), likely promoted by researchers and labs looking for funding, and which generally amounts to nothing. The general public may forget a specific claim, but they don't forget the number of false or misleading headlines they've read over the years.
Add to that a blatant case of forgery like this on a high priority subject, and you have the recipe for antivaxx, covid denialism and overall decline of the trust in science.
The blatant case of forgery described in the article. Manipulating images to make something that does not exist appear real is as blatant a forgery as they come.
I recommend reading the comments on AlzForum [1]. From the discussions (which are from real Alzheimer's researchers), it sounds like this fraud is significant in terms of Dr. Sylvain Lesné's work, but that the news has been vastly blown out of proportion, and not significant to the field as a whole.
Thanks for sharing this. It would be ideal if these comments could get integrated into the original article. It's interesting that Ashe commented here and not when contacted by the journalist.
Hopefully this fraud didn't truly cause unnecessary delays in the pursuit of a cure.
I love how the researchers are shorting the stock - they're not insiders because the data is publicly available for anyone (with the requisite very domain-specific knowledge) to see.
There are a whole lot of the sorts of investors who are normally shorting stuff who must feel that this is somehow insider information.
"So much in our field is not reproducible, so it's a huge advantage to understand when data streams might not be reliable," Schrag says.
I am just a lowly engineer, but this alarms me. Why is anything that has not been reproduced considered valid science by anyone? Why aren't our standards higher?
If you can't reproduce an experimental result, it is useless information is it not? At least an experiment that can be reproduced yet fails to prove a hypothesis can teach you something. An experiment that cannot be reproduced yields no useful information. In fact, it can even mislead!
I just don't understand the motivations at play. These are obviously intelligent people who know that you can't fake reality, so why do they publish fraudulent papers? Just for short-term gain? Do they become blinded by belief in their hypothesis?
Biological systems are so complex that reproducing results is extremely hard even if the authors publish their method in detail, simply because they might not know why it works for them.
This is hard to understand for software people, since code tends to behave reproducibly by default.
Basically the only way to reproduce a difficult finding is to learn the procedure at the original lab.
An example: a friend of mine could not reproduce his own findings in another lab. Turned out the precise type of the lamp built into the setup mattered.
Another example:
I could not reproduce a finding that I wanted to build upon.
Turned out the precise method used to dissolve one of the chemicals in the buffer was the problem. It was even hinted at in the paper, but who would describe in detail what they mean by "vigorously stirred"?
Biological systems being complicated and unintuitive is an excellent explanation for slow or no progress. It's entirely orthogonal to the question of why published results are not reproducible, misleading, or wrong. If some problem is super hard that explains why I can't answer it, it doesn't explain why I continually publish fake answers.
My understanding of the situation is that academics and scientists work in a weird bureaucracy, there is an incentive to publish, academics are very bad at detecting fraud and worse at punishing it and statistical manipulation is easy and endemic. These things explain why there's so much academic research that can't be reproduced and why some academic fields are basically the modern equivalent of astrology.
You only notice the cases where you try to reproduce things, and, time being limited, you only reproduce obscure results.
As a comparison: I did a bunch of math too, and in some cases just accepted the proofs I built on.
Same for programming: I audited neither LAPACK nor GCC.
For the engineering crowd, software dependencies are probably a good comparison. If they come from a credible source, you trust them and build on top of them.
Nobody blames companies for not auditing the kernel if there is a security problem in it, but everybody screams "You should have verified the result!" if some paper is retracted.
This is why almost all results from running rats in mazes are spurious.
There was an early, very good paper identifying all the details needed to make a valid maze experiment. Nobody cites it, so nobody reads it or acts on its results.
They are blinded by assuming good faith and competence.
Most scientists consider most of their colleagues more or less incompetent, and even where they accept experimental results, often reject the experimenter's interpretation of the result, often correctly. Scientists advise us to ignore the abstract, ignore the interpretation, ignore the conclusion, and trust only the data, at most. But we mostly don't get to do that for fields not our own.
High prestige is detrimental in that it short-circuits this skepticism. This happens not only in Alzheimer's research. It put psychology research in the grip of behaviorism, statistics in the grip of non-causality, political science in the grip of dialectics.
A new wrinkle in all of this is the current practice of promoting research to the public. Many (most? practically all?) universities employ nontechnical people whose job it is to identify newsworthy research results that can be dumbed down and puffed up in press releases.
Granting agencies do the same thing, either directly via newsletters and press releases, or indirectly, by encouraging research teams to hire public-relations people.
Members of the public cannot be expected to be able to read, let alone understand, the dizzying array of specialized articles that are published on any given topic. Heck, this is a real challenge even for scientists looking at work that is slightly outside their domain of expertise. Some form of overview is necessary, but press releases, usually biased and ill-informed, are just not the way forward.
But will press releases go away? No chance. This is how students get attracted to universities, and it's how alumni get encouraged to donate for that oh-so-important sports complex.
What (if any) recourse is there against those who have falsified scientific data? It almost feels like fraud on the {scientific research} market. Billions of dollars and, perhaps even worse, countless human hours have been wasted in reliance on completely fabricated information. For what? An ego boost? Citations? I don't work in science, so I truly don't understand the motive(s) on the other side of the equation.
This is so typical of contemporary academics, at least in biomedicine, I don't know where to start. For me, this is exemplary of a lot of what I see in the field, that happens every day, to larger or smaller degrees. It's not the exception.
There are so many elements to this story that echo things I've encountered, and read about in studies of trends. For every case of outright fraud like this might be, there are dozens of cases of "soft fraud" (read: "questionable research practices") that get the benefit of the doubt or are never brought to light. No one has discussed this here yet, but part of the story is that the program officer on one of the accused's recent grants was a coauthor with him on this very research -- consistent with trends discussed in the literature (the largest predictor of grant receipt being coauthoring papers with grant review panel members or officers). So there's this fraud, potentially involving someone who then goes on to help decide who gets research dollars.
Your reaction is something I've wrestled with a lot. Many times there are no consequences really. People talk about lost reputation or something, but in cases of soft fraud nothing really happens. It just all kind of evaporates into a fog of scientific dispute between parties. In cases of blatant, legal fraud, someone can be fired or lose a position, but that requires significant evidence.
For what? Yes, ego, citations, money, titles, accolades as a "rising star" or a "genius" or whatever it is.
The saddest thing to me really, aside from all those harmed by bogus treatments, or forgoing real treatments, is all the researchers with legitimate ideas, who go against trend, who are pushed out because doing the hard work isn't glitzy and has lots of dead ends. You end up in this system where the buzz is what matters, and riding waves of self-perpetuating hype is what gets you to the top. Appropriate skepticism and careful thought, meanwhile, costs you because that takes time and risk.
And the people who might have benefited from an actual, more reliable treatment that might have been developed, had the scientists refocused on other hypotheses.
Holy hell. Someone please correct me, but the beta-amyloid camp has been under attack, establishment researchers still keep going despite weak evidence, and now the best drug they have might have fabricated data???
Nope. It wasn't evidence for a drug that was fabricated, it was a 2006 study published in Nature that convinced everyone to believe the amyloid hypothesis.
No, it's not. The hypothesis was introduced in the 1990s: "The amyloid hypothesis was first proposed in 1991 by John Hardy and David Allsop." You're probably thinking of the person who "discovered" Alzheimer's disease (Alois Alzheimer).
These dots don't connect for me very well in this article:
> In 2006, Schrag’s first publication examined how feeding a high-cholesterol diet to rabbits seemed to increase Aβ plaques and iron deposits in one part of their brains. Not long afterward, when he was an M.D.-Ph.D. student at Loma Linda University, another research group found support for a link between Alzheimer’s and iron metabolism.
Yet, fast forward to December 2021 or early 2022:
> Three of the papers listed Lesné, whom Schrag had never heard of, as first or senior author.
If Schrag was doing research in the same area, at around the time Lesné's results started to dominate, propelling his career, it's odd that Lesné's existence somehow eluded Schrag for 16 years.
Did Schrag switch to something else at around that time and never give it a second thought for a decade and a half?
What makes me the most angry about this is the huge difference between what the scientist personally gained from this fraud and the cost of believing in these results for the past 16 years. Maybe he is a bit richer because of this, but it comes at the cost of hundreds of millions of dollars being spent on research that is destined to be fruitless.
In the time since he published, I lost my father after investing hope in a trial that was supposed to prevent amyloid plaques, and now my mother-in-law is slipping further into dementia.
As an insider, I am tempted to say research in most diseases is driven by lots of fraudulent and oversold results.
Maybe not direct image manipulation as in this case with Alzheimer's, but certainly there is always a lot of monopolistic rich-gets-richer behavior.
Nearly all professors at top universities I have met develop intimate relationships with funders and journals, which they use to steer the field in their preferred direction. As the posted article says "You can cheat to get a paper. You can cheat to get a degree. You can cheat to get a grant. You can't cheat to cure a disease."
I have been asked directly to misrepresent results on several occasions. In the most recent one, a professor who has received all prizes and accolades in his field threatened me and others when we refused to misrepresent research results. I could afford to do this, but my workmates who have families to support were on the brink of giving up to the bully.
It’s also important I feel to stress that this behaviour exists at multiple levels of magnitude. A professor may not be bending journals and whole fields to their will, but can expect and pressure consistently positive results from their lab, to the point of being actively reckless with the truth and blind to the flaws in their theories.
Just getting caught up in the practice of self-promotion and the trumpeting of one’s “novel and impactful” ideas while managing a lab from the top without having done actual frontline research work in some time can take you pretty far from scientific integrity IMO.
With no culture of being proud of reporting negative or contradictory results, I’d say “excessive scientific zeal” is an easy and common trap. Even a slightly forceful lab-leader or PI can end up swaying a group of people into some grey dishonest zone of scientific practice.
This is why science needs to formally pre-register hypotheses. "People tested this ten times and it worked once" is infinitely more useful information than "someone published a paper because this works! But we have no idea how many people tried exactly the same thing and got a negative result."
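The value of knowing the denominator can be made concrete with a little arithmetic. A minimal sketch (the ten-attempts figure and the 0.05 threshold are illustrative assumptions, not from any particular study):

```python
# Chance that at least one of n independent tests of a true-null
# hypothesis reaches p < alpha purely by chance: 1 - (1 - alpha)^n.
def false_positive_chance(n_attempts, alpha=0.05):
    return 1 - (1 - alpha) ** n_attempts

# Ten labs quietly testing the same dead-end idea:
print(round(false_positive_chance(10), 3))  # → 0.401
```

So without pre-registration, "it worked once in ten tries" and "it works" can describe the same underlying data, published differently.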
I know two people personally who dropped out of PhD programs due to fraud in their departments. I never asked them too much about it, but I'm imagining it's related to not wanting to lose potential funding or waste years of research to a wrong conclusion.
I could imagine that they might also not want to invest the effort and time needed to complete a degree that would be of little value if it became public knowledge that the department issuing the degree had a problem with fraud.
Thank you for making a stand on this. My doctor has a "No Free Lunch" sign in his office saying he won't talk to pharmaceutical reps. Maybe the research side needs a similar pledge to avoid being corrupted by funding. I don't know what it would be though.
The problem is rarely corruption by funding in the way you're thinking - Merck going "Hey, can you take another look at those numbers?"
The problem is the pressure to get funding by being productive. Negative results are wildly harder to get funded. Less productive labs are at a disadvantage for grants. Which means, like any metric upon which people's salaries rest, people are tempted to game it.
> NIH spent about $1.6 billion on projects that mention amyloids in this fiscal year, about half its overall Alzheimer’s funding
The funding for one year from this one agency amounting to 1.6 billion dollars is stunning.
If what you say is true, promoters of the validity of the NIH should push for statutory changes that mandate the NIH commit a percentage of its overall budget to confirming significant past research.
I think part of the challenge is no one wants to throw their hands up in the air and abandon amyloid after so many years and dollars spent on it. Instead, they say "well we just weren't doing an early enough intervention", which is how Aducanumab was forced through with weak evidence of effectiveness.
The big question is what do we do now? There's Tau, but that hasn't been a slam dunk either.
> I have been asked directly to misrepresent results on several occasions. In the most recent one, a professor who has received all prizes and accolades in his field threatened me and others when we refused to misrepresent research results.
I wonder if intentional research fraud should carry legal consequences.
It does carry severe legal consequences. If you defraud a federal agency into funding your research (which almost always accompanies any fraud in basic scientific research) you are committing a federal crime and you can go to jail.
Fraud and fabrication of results are abhorrent, but this is too simplistic a take. These fraudulent results didn't single-handedly spawn 16 years of fruitless research. Instead blame the incentives that create a system where nobody validates a result for 16 years.
If academic science were in a healthy state, novel results would be validated 100s of times. The current system is setup such that only novel results are rewarded. There is no reward for validation. During my PhD I was actively discouraged from performing experiments/studies that weren't "new". Everything ultimately comes back to the funding model. Scientists are only funded for groundbreaking work, measured by the number of publications. In turn, journals will only accept previously unpublished work.
The fraudster's refusal to confess for 16 years as they saw the field go astray and elderly people suffer is a crime against the health of so many people. If I were a prosecutor, I would try to get them for fraud, criminal negligence, or even manslaughter. If I were a vigilante, I would have some strong "words" with this perpetrator who caused so many elderly people to suffer. I hope they are brought to justice while they are still alive to be punished.
Hold on a second. In no way did this person “cause” people to contract Alzheimer’s. Let’s remember that all of medicine is an intervention. There’s nothing to say that research going in a different direction would have lessened the suffering of people with the disease.
I’m not saying this person is faultless, but scapegoating is hardly a sound response to what is clearly a systemic issue of misaligned incentives.
The most powerful results in science are continually reproduced by being built upon to uncover further new knowledge. I'm no expert in the toxic oligomer hypothesis or Aβ hypothesis, but it appears that these paths have led to very little new knowledge.
It reminds me of the eating cholesterol hypothesis for artery blockage and heart disease. Most artery blockages are made of cholesterol therefore eating cholesterol must be the culprit. Simple, straightforward, and now apparently wrong. The cholesterol buildup is a symptom of something else.
Seems likely with Alzheimers that the amyloid buildup is a symptom not a cause.
Still, half a billion dollars of NIH money went into this subfield.
The article points to how the main accused got an R01 approved AFTER this misconduct started to come out, and the guy awarding it was one of the coauthors of the first fabricated papers.
I felt like the author's decision to quote that specific part of whatever she wrote was pretty harsh. As a reader you see that and jump to the exact conclusions you're alluding to here.
Proposal: if a paper isn't reproduced in ten years after publication, then it gets automatically retracted (which can be reversed as soon as it is reproduced). Any papers that cite the retracted study (in a way that the conclusions depend on it) would also get retracted. That would be powerful incentive for all the researchers who cite the study to try and reproduce it so their papers don't get retracted.
You could still search these retracted studies when doing research, of course. You just can't cite them.
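Mechanically, this proposal amounts to computing transitive reach over the citation graph: retracting a paper pulls in everything whose conclusions depend on it, recursively. A hypothetical sketch (the graph shape and paper labels are invented for illustration):

```python
from collections import deque

def cascade_retractions(cited_by, retracted):
    """Given cited_by[paper] -> papers whose conclusions depend on it,
    return every paper transitively affected by the retracted set."""
    affected = set(retracted)
    queue = deque(retracted)
    while queue:  # breadth-first walk over dependent papers
        paper = queue.popleft()
        for dependent in cited_by.get(paper, ()):
            if dependent not in affected:
                affected.add(dependent)
                queue.append(dependent)
    return affected

# Hypothetical citation graph: B and C depend on A, D depends on C.
graph = {"A": ["B", "C"], "C": ["D"]}
print(sorted(cascade_retractions(graph, {"A"})))  # → ['A', 'B', 'C', 'D']
```

One consequence worth noticing: a single unreproduced foundational paper can pull down an entire subtree, which is exactly the incentive (and the risk) the proposal creates.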
All that does is create an industry for replication labs, and then a market where these labs compete to deliver the most "guaranteed" replications for the least cost, and then a whole lot of replications which check the box but actually rely on strained interpretations, questionable modifications to the process, uncaught fraud, etc.
That sounds worse than what we have since it just eviscerates the significance of what replication means in the first place.
The amount of money for an NIH R01 non-modular budget hasn't changed since 1999. Which means that, due to inflation, the average "gold standard" grant budget covers roughly half as much as it used to.
Where are you going to find the money for mandatory replication studies -- some of which are massive cohort studies, so you'd need a whole new population cohort -- let alone for the...citation police...to review every citation not just for its existence, but for its content?
We just need to cement the notion that no theory is truly proven or beyond attempts to disprove it through replication. Attempts to discourage replication should be a red flag.
The same issues exist. The next dodgy scientist on the hamster wheel looking to get a name for themselves will claim reproducibility and then publish a follow up in the spirit of publish or perish. You could further entrench the issue with this approach, unfortunately.
A better solution might be for the government to just fund reproducibility studies, and even departments of reproducibility. Take the profit motive out of it: find good scientists who are painstaking but maybe not innovative, and fund them to reproduce major results. The scientists would never get the credit for major breakthroughs, but could occasionally be wrecking balls that call research like this into question. With consistent and reliable funding from the government, their reward could be stability of employment rather than innovative fame (of course, these days some partisan politics would probably gut it, which is why we can't have nice things).
I'm open to that idea but not sure how it would play out. It could go wrong in a few ways.
One potential problem is that it could become "the" authority on reproducing results. If they repro something and another scientist can't, or vice versa, would the other scientist get ignored?
Another problem is that it could take away the independence of science. The government might start saying what is good science and what isn't.
They can only publish their results, though, they can't force people to accept them.
Any other institution, anywhere else in the world, is free to publish results which disagree.
Scientists should then have the necessary tools to figure out which one is likely to be correct, and try an independent third or fourth time.
It would be expected that sometimes there would be a failure to replicate which was a mistake in how it was replicated, that shouldn't be seen as being a failure of the goal of replication, and the original authors should be incentivized to reach out and discuss the issues with the methods.
Is there a level of collaboration among so many people that at least data can be re-used without being "reproduced"? I.e. the field as a whole has put enough work into an apparatus or infrastructure that we can regard the initial observation to have been trustworthy? We shouldn't have to build a second LHC before we believe any claims from the first one, right?
How about the threshold for publishability is that two independent groups come up with the same conclusion? So you have to partner with another team somewhere else in the world in order to get published, and then you get joint credit.
I think there's a powerful incentive to be first, which drives people to research in the face of so much doubt. Trying to convince one lab that the idea is worth funding is hard enough. Two labs is just unrealistic.
My idea hacks around the problem by not diminishing that huge incentive to be first. Even if your paper is retracted due to lack of repro study, you were still the first, and if it does repro you are back to full credit.
I just add an extra incentive for people who cite the study to verify that it's reproducible. And at the same time, it gives the original author more incentive to include lots of detail in their papers so the work is more likely to be reproduced, locking in their fame.
Correcting the many incentive problems in modern American science would need a hypothetical body with significant funding leverage over journals & scientists to exert executive action. Sadly there is no such centralized funding body, so the problem must be unsolvable.
One needs a theory of mechanism to develop an intervention i.e. "new drug". The simpler the better.
But given the tremendous complexities involved in biological systems one has to be very careful with the data.
Quite frankly, most in the field are not sufficiently trained or rigorous in dealing with the uncertainties, and I wouldn't blame them; it's mind-boggling and paralysing.
This can of course be exploited. Given the incentives stated above, combined with the public's unawareness of, e.g., statistical significance vs. clinical significance, the myriad mathematically correct ways of presenting data, and a handwaving attitude toward reproducibility, the floodgates really open for someone in a lab to become "creative", an entrepreneur.
Which of course should be highly discouraged in the context of scientific research; to relieve some pressure, the community once in a while condemns the most daring examples without, IMHO, looking too deeply into the entrenched mechanisms enabling this.
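The statistical-vs-clinical distinction is easy to show numerically. A toy z-test sketch (the effect size, scale, and standard deviation are made-up numbers, not from any real trial):

```python
import math

def two_sided_p(effect, sd, n):
    """Two-sided p-value for a one-sample z-test of a mean difference."""
    z = effect / (sd / math.sqrt(n))
    return math.erfc(abs(z) / math.sqrt(2))

# A 0.1-point change on a cognitive scale with sd = 5: clinically
# negligible, but "statistically significant" with a big enough n.
print(two_sided_p(0.1, 5.0, 100))     # ≈ 0.84: not significant
print(two_sided_p(0.1, 5.0, 50_000))  # ≈ 8e-6: highly "significant"
```

The effect didn't change between the two lines; only the sample size did, which is why "p < 0.05" by itself says nothing about whether a result matters to patients.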
A PhD student (biomedicine) once jokingly told me about the old HeLa contamination "problem" [0] in labs experimenting with cells. I must have looked shocked, as he laughingly added: well, that was quite a long time ago; the problems have only compounded since.
So, in order to get the full picture, it is "healthy" to zoom out and look at the other side of the spectrum, i.e. longitudinal studies (with their own set of limitations). One impressive one relating to Alzheimer's is the famous nun study [1].
Something doesn't quite make sense here. It seems there is good evidence that certain figures in Lesne's papers are fraudulent. Certainly that is a huge red flag, and it is likely that some (or all) of the findings are wrong. But that doesn't mean the paper's ultimate conclusions are necessarily incorrect. I mean, thousands of papers go on to report on Aβ56; do they all just assume that Lesne's findings are true without further investigation? That seems highly unlikely. What's notable is that the entire article here never once refutes the most important finding of the original Lesne paper (that Aβ56 can produce Alzheimer's-like symptoms in rats).
> What's notable is that the entire article here never once refutes the most important finding of the original Lesne paper (that Aβ56 can produce Alzheimer's-like symptoms in rats).
You refute a paper by showing that part of it is incorrect. The research sleuths have provided incredibly strong evidence that the very existence of Aβ56 is demonstrated (in the paper) by a sham line on the western blot.
It's like somebody has demonstrated that a photo of a "ghost" is actually a double-exposed film, and you are stating "What's notable is that the entire article here never once refutes the most important finding of the original photo (that ghosts can haunt the waffle house)."
Methylene blue is in Phase 3 trials with TauRx [1]. The patented version will be a slightly tweaked version of the century-old drug so they can make some money on it. Still, it would be one for the ages if an out-of-patent, cheap drug made decades and billions of dollars of research obsolete. They say it clears tau, but it probably works some other way, like being an antimicrobial that can cross the blood-brain barrier.
A doctor I know takes Kordon methylene blue which is pennies a dose.
> A Phase 3 clinical trial of LMTM (TauRx0237 or LMT-X), a derivative of methylene blue, failed to show any benefit against cognitive or functional decline in people with mild to moderate Alzheimer's disease. Disease progression for both the drug and the placebo were practically identical.
You act as if the cited article was simply about an honest presentation of negative results. When in fact it illustrates a similar kind of dishonesty and manipulation as TFA being discussed in this thread.
I don't see why we should have any amount of optimism in this direction based on the evidence in front of us.
And this is why I'm increasingly suspicious of "consensus" in the scientific establishment. With careers, egos, reputations, and grant money at stake, it's tempting to use one's power to entrench this consensus.
Scientific consensus has value, but science also requires that people be open to having their pet theories be validated through replication.
I'm a scientist, and I don't think things so easily become consensus. In order to get to consensus, a model needs to have been tested from many different angles. It's never enough to get to statistical significance, things either get replicated or, more often, triangulated.
Consensus is an extremely time-consuming thing to build, and it's extremely important to be aware of when there exists a consensus, where there isn't one, and what the consensus is.
It isn't an appeal to authority fallacy, it's a form of deferral to expertise, and it's one of the most important heuristics we have.
Absolutely not true. I’ve seen things become consensus because of one single questionable image in one paper from a “trusted” lab.
In my field back at my alma mater, the question was whether the HER2 receptor recycles (which has implications for all breast cancer antibody therapies that target HER2), and a paper from a Genentech lab had ONE FIGURE (which mixed data points from different experiments) as proof. An entire subfield, including projects in my own lab, was spawned assuming this.
Whenever I pointed out this flaw in lab meetings, I'd be shut down by my professors with "they know the authors, they trust them".
I think I'm going to continue disagreeing with you on this point, but I will point out that our difference of opinion is a semantic one.
To say that everyone "believes in" a model could mean that everyone accepts it as plausible (and thus worthy of further exploration), or it could mean that everyone is justifiably certain that it maps properly to a real phenomenon.
I never say that something is "consensus" in the first case: IMO, the term should be reserved for the latter case, or appropriately qualified.
In any case, the situation you describe appears to lack meaningful triangulation. [1]
It's fair to be skeptical and to use your critical thinking abilities, but I worry that people go "oh, this generally credible source isn't perfectly reliable, I guess I'll go listen to crackpots and charlatans." Almost as bad is when you become convinced you have the answers, despite not being remotely qualified to speak on the subject. This is how you get flat earthers and 5G conspiracy theorists.
I've begun to tune out societal criticism that doesn't come from someone with a better idea, who isn't willing to stand in one place and defend a proposal.
The experience of saying how you would do it better and getting it torn to shreds really brings you back down to earth.
This is why I grit my teeth whenever I hear a politician say something like "the science is settled". Anyone who understands the scientific method knows to say "the science is settled, except if <some specific thing> happens".
What does consensus mean, exactly? It doesn't sound like Aβ-56 had full buy-in; the article mentions people complaining about an "amyloid mafia" and half of all funding still going to alternate research, so clearly there were a lot of people who weren't part of whatever the "consensus" was.
What's really interesting to me is that such vast sums were spent, and a number of people apparently disliked the Aβ*56 hypothesis and had substantial reason to oppose it or to try to find holes in it, with substantial rewards for doing so, yet the ones who found the problems were some dudes (without any funding at all) just poking around.
> She and others in the lab often ran experiments and produced Western blots, Larson says, but in their papers together, Lesné prepared all the images for publication.
This is a well known thing scientific fraudsters do, “touche finale”. Any scientist knows preparing figures for publication is tedious, no PI in their right mind would routinely do that.
I think we are going to find out that diet plays an enormous role in all sorts of metabolic diseases in the coming years. Specifically as it pertains to Alzheimer's, increasingly strong evidence is coming out that your risk can be reduced by avoiding excess carbs/glucose, and also by avoiding oxidized foods (in the typical Western diet, usually polyunsaturated fats that have been sitting in a bottle).
I work in the neurotech/sleeptech space, and specifically in stimulation of slow-wave oscillations.
These SWOs decrease as we age, which is linked to the build-up of amyloid plaque, and increased insulin resistance. They aren't necessarily linked to each other, but rather to the decreased capabilities in the brain, from what I understand.
Reminds me of that famous paper about Majorana fermions [1] that got retracted because they edited a figure (I'd say with malice) to support their sensationalist claim. In the unedited image the claimed effect is all but gone. The publication of the paper led to a flurry of research funding and a partnership with Microsoft (who were keen on using Majorana fermions for topological quantum computing).
The article mentions that the main reason for suspecting that results were fabricated is that many images from the researcher's papers seem to have been manipulated. As someone who is not particularly familiar with AI but occasionally picks up on it here on HN, I am now wondering how difficult or easy it is these days to take a manipulated image of a western blot and feed it to an AI tool trained on unmanipulated western blots to create a deepfake that is then virtually indistinguishable from a true one. -- Can perhaps anyone here familiar with deepfake technology provide an assessment?
“The immediate, obvious damage is wasted NIH funding and wasted thinking in the field because people are using these results as a starting point for their own experiments,” says Stanford University neuroscientist Thomas Südhof, a Nobel laureate and expert on Alzheimer’s and related conditions.
I guess he’s the Nobel prize winner, but I would’ve thought the immediate and obvious damage was unnecessarily delayed research progress while people are dying.
Over a decade? Who knows what the overall impact was.
The scientific process is the systematic identification of what is true.
Wikipedia: "the scientific method involves careful observation, applying rigorous skepticism about what is observed, given that cognitive assumptions can distort how one interprets the observation."
sure. but i think i'd rather see biologists debunk prevailing theories in biology by doing biology while in the process getting better at biology rather than dabbling in forensic image analysis. (although, who knows, maybe said dabbling could lead to some new meaningful insights down the line)
i guess the bigger point is that the act of topical inquiry should have inbuilt mechanisms for discarding ideas and approaches that turn out to be dead ends, without having to rely on them actually being fraudulent (they can be, and often are, completely legitimate, yet also completely wrong).
we shouldn't be discovering bad ideas in science by fraud detection, we should be discovering them by mainstream scientific process.
There are so many correlations of vascular injury with dementia that it seems obvious that the fibrils would not be causative but rather correlated to dementia. I personally believe that dementia is related to or caused by repeated small vascular injuries that eventually cause degradation of the brain. That is, transient ischemic attacks that are completely subclinical aside from the injury that is visible after autopsy.
My un-scientific version of this is that if you have a large bowl of ice cream for dessert every night for 55 years, you may develop Alzheimer's Disease.
I wonder if you could get the same unexpected result when copying images of Western blots?
It's probably not the case here but it could be devastating for a researcher to be accused of fabricating data by using an affected copier/image editor/file format.
No. The Xerox machine had a digital compression step (JBIG2 pattern matching) that could silently substitute one scanned image patch for a similar-looking one, so identical regions could appear in its output.
A western blot is a direct measurement.
It's like asking whether the Xerox example might explain why the splatter pattern is identical in sections of two putatively different Jackson Pollock paintings.
Maybe I didn't make myself clear, but the image in the article looks like it could have made both one and two passes through a Xerox machine before it ended up in the paper.
If it's anything like the data fabrication I've seen in industry, it's lack of ethics, laziness, mixed with deadlines/constraints beyond the individuals control, with a dash of arrogance that the conclusion is still correct, regardless, and it'll be fine.
I've also seen very qualified people have massive holes in their perception, making them somehow unaware that they're generating garbage, and even defending the garbage when it's pointed out.
This is ridiculous. Dozens if not hundreds of researchers have confirmed that high-MW oligomers correlate with pathogenesis. I have. To say that he found this, or that everything relied on this theory, is absurd. We've known this since the early '90s, when Lesné was in high school, or earlier (e.g. Glabe and Cotman's work at UC Irvine, Christian Pike now at USC, my own work, etc.). This is way before Lesné even considered a career. It used to be called "micelles" or soluble Aβ. Also, the postdoc's supervisor repeated the data and found the same thing, so she took him at his word. I can't stand this ridiculous media coverage that makes this out to be a scandal and spreads the disinformation that the entire hypothesis is based on this.
There is so much of this through many fields of science today: incentives that corrupt and undermine the scientific method. And yet, many people will deny that there is justification for skepticism of, say, relatively new mRNA vaccines.
Yes, it's good that science usually deals with such problems in the long run, but how is the average person supposed to trust that the latest scientific assurance isn't 15 years away from being retracted, like in this example?
We cannot be confident that "science usually" overcomes faulty models. The best we can say is that science has often been seen to succeed at this, in well publicized cases. Many less visible fields might never overcome their biases. Usually a field cannot correct course until a whole generation trained on a false premise dies or retires.
Economics is a field that has been particularly resistant to correction, but is far from alone. Geology and statistics are recovering from a similar handicap.
As Max Planck is often quoted, "Science advances one funeral at a time." Often vindication is finally delivered only after all the opponents are dead, and the ultimate victor has retired from a career blighted by them. Probably much more often people are driven out of the field and never vindicated.
Lynn Conway was driven out of computer architecture (where she invented out-of-order execution in the '60s, thus long delaying that advance) before finding success many years later in VLSI chip design methods.
A big part of the problem is things moving so fast that a lot of stuff doesn't have a long run. Covid and its vaccines being an example. In the end the reason to trust them was a mix of "if not this, then what?" and "it doesn't seem to be killing people".
It has been unfortunately necessary to downplay cases of debilitation and, even, death apparently traceable to vaccination. If vaccination saves the lives of a hundred times as many people as it harms, in the "trolley" sense, that should be good enough, but in popular imagination it is not.
Rational treatment might enable identifying individuals particularly at risk and not vaccinating those, but that option is closed to us. Instead, a random, suspicious fraction of the population pays particular attention to negative outcomes and avoids vaccination, to its detriment, and most of those at risk for problems get vaccinated anyway.
> It has been unfortunately necessary to downplay cases of debilitation
I don't think it was necessary at all, and instead is very counterproductive. Many people know they're not being dealt with honestly by the government and media, resulting in more distrust and resistance to vaccination, than there otherwise would be.
The number of deaths would objectively be larger if people had access to accurate numbers, because even more would avoid vaccination and then die of the infection vaccinated against.
It is a tragic calculus. "Trolley Problems" are very far from theoretical in public health management. We are forced by distrust into sub-optimal choices that themselves promote distrust. Managing risk in a better population would be easier, but you battle a pandemic with the population you have, not the population you want.
You state confidently that deaths would be higher, but you gave no evidence for that. I believe you are wrong. The lies confirm to all that we should not trust you. Assuming you have any ability to recommend things that will actually save lives, in the long term proving yourself a liar will only lead to fewer people believing you and taking your advice.
So tell the truth, build trust, and let people decide for themselves. That will save lives in the long run.
What you mean is, tell the truth and be blasted for lying anyway.
I have plenty of complaints about how public health measures are prioritized in the US. I am not worried about official dishonesty. Public officials are just as good at fooling themselves as everybody else, so they can be wrong without lying, and will be at times. Expecting somebody, anybody to be right all the time is a recipe for disappointment.
Downplaying isn't lying, and categorizing people into those that tell the truth and those that don't tell the truth is a stupid way of assessing anything important.
Lying about the risks is lying, regardless of what ends you think will justify the means.
But even if you disagree, let’s talk about your claim that categorizing into two groups is stupid. Think from the citizen’s perspective of the CDC. The question is, “Do I trust them enough to take health advice from them?” If they have lied to you too many times about things that are important enough, the answer is no way. I never implied all that this lying group said was lies. Obviously one lie wipes out thousands of truths. Trust is gone. Listening stops.
You’re going to listen to somebody! When the public institutions become untrustworthy, and thus lose their authority, who knows what authority figure you will turn to? It could be anybody. It could be Q.
> You can look into major issues without needing authority figures.
No you can’t. Not unless you are doing the study yourself. And even then, almost all studies rely on other authorities. Things like death certificates and cause of death, hospital reports, etc. all depend on authorities and you have to decide whether you trust them for this thing.
I never said it was binary. I said trust can and is lost through lies. If you can’t acknowledge that, I’m not sure what else to say.
> I never said it was binary. I said trust can and is lost through lies. If you can’t acknowledge that, I’m not sure what else to say.
"Obviously one lie wipes out thousands of truths. Trust is gone. Listening stops."
I would call this binary. I don't really care what we call it, though. Especially if downplaying counts as lying, then this policy is completely infeasible. It means nobody will ever be listened to. Trust is wiped out in all circumstances.
It's a really stupid way of handling a biased source. And all sources are biased, so it's a really stupid way of handling sources.
If you want to say that one lie adds skepticism to a thousand truths, that would be a massive improvement, because A) it works well to be somewhat skeptical of all sources, and B) you can still learn from many sources you're skeptical of.
Demanding perfect authority is itself a pathology. Claiming perfect authority is a favorite tactic of liars and demagogues.
The best we can hope for is people doing their best with what they have to work with. Very many do.
The worst make shit up, routinely. They hone their message to attract dupes, and always succeed. Many of them believe whatever pulls; most don't care what is true or isn't.
Is there no place in your world for authorities at all? For trust at all? For trusted flawed authorities to become untrusted liar non-authorities in people’s minds?
I have no idea why you are talking about perfect authority.
People can become "untrusted liar non-authorities" in people's minds independent of any actions of their own. The best they can do is their best, and hope not to attract the attention of demagogues.
Of course they can become untrusted regardless of what they do. But the whole question is whether you acknowledge that people also have a major impact through their actions of what people think of them.
What exactly is “doing their best”? Is it lying, trusting the end to justify the means?
Have you found any evidence of anyone lying about risks?
There are always an expected number of deaths in any period of weeks after (or without) a vaccination, subject to big random fluctuations. It should be obvious that (a) numbers cannot be interpreted correctly without education, and (b) people without such education finding numbers will insist on interpreting them anyway, some ignorantly, some with active malice.
What and how much to publish about numbers reported are hard choices I am glad I don't need to make.
Even publishing nothing, there will be spurious reports claiming to know official numbers, and spurious interpretations of spurious numbers. Your fragile trust is broken regardless, among people so inclined.
This is exactly why a large portion of the population is not wrong to doubt the official narrative.
Science shouldn't even be engaged in trying to save as many lives as possible. Science should only be concerned with discovering and disseminating the truth as it is.
Ok, so they're too busy to understand this basic difference between public health (their own field) and science, something that you're telling some random person on the internet about. Nice
Excuse me, where did I lie? I have no role in collecting or reporting on adverse reactions. I don't even know for sure that adverse reaction numbers are as large as I suspect.
I do empathize with the people trying to minimize deaths from a raging pandemic in an atmosphere of politically-motivated disinformation that is actively contemptuous toward public safety. People with your attitude make their work that much harder, and cost more unnecessary deaths.
Sorry. You spoke in the first person plural “we” so I just addressed my comment back to a generic “you.” I never meant to say anyone in particular (let alone you) was a liar.
However, you were advocating for not giving people accurate numbers. Whoever is in charge of that decision should not lie. They should give accurate numbers.
What they should do if the goal is to minimize public mortality is not obvious.
What they should do as a matter of abstract merit, or of public perception of benignity, are two wholly different, generally easier and less vexing questions.
Perhaps public health officials need to pay more attention to the iterated trolley problem. Difficult, for sure, when dealing with the pandemic in front of you, and not the next one that comes, but our encroachment on animal viral reservoirs and insistence on conducting hazardous gain-of-function research all but guarantees the next one will come sooner or later.
Gain-of-function research, conducted carefully enough, might expose risks ahead of time that could be vaccinated against. You would generally prefer that it be in pathogens not already adapted to humans, or in places those are handled.
We are guaranteed pandemics regardless, just by how much international travel we do. What matters is the response. We are lucky monkeypox (actually a rodent illness) is rarely fatal.
Could someone in this thread please cite the alluded to cases of disability or death caused by one of the COVID vaccines? It would be helpful if people can be specific. I'd like to understand (1) whether this really happened or is a myth (2) which vaccine and (3) what's the biological mechanism of damage.
I can’t do as you ask, except to say that vaccine skeptics will point to any and every health problem in a vaccine recipient they know as having been caused by the vaccine. (I’ve found that) it’s very difficult to counter that anecdata.
Anytime you inject biochemically active stuff into billions of people, there will necessarily be some actual bad reactions mixed into a large number of events that would have happened regardless, or that anyway had nothing to do with the stuff injected.
And, some people will get in car accidents on the way home from the clinic, that would not have happened if they didn't go. People who get in line are exposed to random pathogens others in line are distributing, and to any pathogens injected via insect bites in that place.
People who do not get vaccinated are subject to similar risks, but are not counted.
Playing up these numbers does nobody honest any good.