More than half of high-impact cancer lab studies could not be replicated (science.org)
194 points by kevin_hu on Dec 10, 2021 | 110 comments



There are many papers showing similar results. (My personal favorite is https://www.science.org/doi/10.1126/scitranslmed.aaw8412, where they removed the claimed target of various cancer therapies and many still worked.)

If you've ever been within a mile of a research lab this shouldn't be remotely shocking. Typical research is done rapidly, sloppily, and in a context where getting the "right" answer is the only incentive.

The replication crisis that hit psych has been slower to reach bio only because (IMO) bio is harder to replicate due to tooling, protocols, reagents, cell lines, etc.

In complex systems you have the degrees of freedom to be wrong at a scale that folks still do not appreciate.


> If you've ever been within a mile of a research lab this shouldn't be remotely shocking. Typical research is done rapidly, sloppily, and in a context where getting the "right" answer is the only incentive.

Let me preface this by saying that 50% sounds like an awful lot, even bearing in mind the following.

On top of that, these studies focus on highly complicated phenomena, where almost anything could play a role. It is common that one factor that is not necessarily accounted for in a study turns out to be important, even in well-run studies. Flukes happen as well.

Failure to replicate per se is not a problem, if the original study was done well and honestly. The data points can still be reinterpreted later in light of other studies.

Now, the fact that there is so little incentive to publish confirmation studies is a big problem, because it means that these non-reproducible works are not as challenged as they should be.

Remember that one study is meaningless. Even with tiny rates of false positive or negative, erroneous conclusions are bound to happen at some point. A result is not significant unless it has been observed independently by different research groups.
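To put rough numbers on that, here is a minimal back-of-the-envelope sketch in Python; the prior, false positive rate, and power below are made-up illustrative values, not estimates for cancer biology:

    # Expected share of single-study "positive" findings that reflect a real effect,
    # using purely illustrative inputs.
    prior = 0.10   # assumed fraction of tested hypotheses that are actually true
    alpha = 0.05   # false positive rate (significance threshold)
    power = 0.80   # assumed probability of detecting a true effect

    true_pos = prior * power             # 0.08
    false_pos = (1 - prior) * alpha      # 0.045
    ppv = true_pos / (true_pos + false_pos)
    print(f"Positive results that are real: {ppv:.2f}")  # ~0.64

Even with those fairly generous assumptions, roughly a third of lone positive results would be spurious, which is exactly why independent confirmation by other groups matters.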


"Failure to replicate per se is not a problem, if the original study was done well and honestly"

No, but in this case a common reason for failure to replicate was that the papers didn't actually include enough details to replicate them to begin with. This should be grounds for failing "original study done well", because the reason they're paid to do research in the first place is to propagate accurate and useful knowledge. Missing critical details means the study is a failure.


I see it as a systematic failure of the peer review system. Referees should be particularly careful that the article includes all information needed to replicate the results. In my experience, this is often not the case.


> the fact that there is so little incentive to publish confirmation studies is a big problem

I don't quite get this point: this is cancer research, so in theory all related institutions should be interested in replicating breakthrough studies, right? And failure to replicate it, time after time, should be a cause of concern, shouldn't it?


The institution as a whole may have the incentive, which plays out on a 5-10 year timescale. Funding agencies in principle care, but no individuals are rewarded based on the results on this timescale. Industry, venture capital, and the like do care because they need reliable-enough results to build on, but they can filter for the good ones and scale from there.

The lifecycle for a junior PI is about 5 years, which really means they have about 3 years to produce a result sufficient to advance. This is enough for a bit of validation but unlikely to include anything beyond their lab, so the result is likely to be untested. But who in that lab cares? They just need to get the next job, so a 3-4 year timescale is fine. The PI cares long-term, because a lack of new productivity means a lack of ongoing funding, which means they either improve the quality of their processes (which means herding the junior researchers through that system) or simply continue to hack their way through.

There is a reason why people moan about scientific theories advancing one death of an old professor at a time.


OK, I understand the individual incentives at play here. But the institution also has a governing body and other stakeholders interested in actual results, right? If they see that someone else has made a breakthrough and it looks like a very promising method, it seems logical you should check if it really works and if it does, improve it in order to make clinical studies possible and hopefully one day benefit the patients, no? Otherwise, what is the point of sharing cancer research if nobody is interested in replicating the important studies?


The governing body and related stakeholders care at varying levels. For government-sponsored research, the primary metrics are publications, impact factor, number and prestige of collaborations, diversity of staff, and velocity of money (see [1] for an example, pages 3-4). At the most cynical level it kind of reduces to welfare for educated people. The end result is that there are researchers producing good work, but you have to filter to find them.

For external industry funders, they have a strong and vested interest in identifying good output before it is public. The strategy here typically is a mixture of identifying good researchers and making them kings (consistent funding, direct lines of communication, access to nonpublic materials, etc), and of placing lots of small investments to get in the room for discussion. An example of this mixture is the BP deal at UC Berkeley [2].

As for shifting standard of care, this is a commonly-expressed goal but often the research is far enough away that the individuals doing the work lack close communication with the actual practitioners. Some institutes are trying to resolve this by close physical and social proximity (e.g. TU Dresden's medical campus) but this is far from the norm.

[1] https://www.helmholtz-muenchen.de/fileadmin/Jahresbericht/Ja... [2] https://www.berkeley.edu/news/media/releases/2007/02/01_ebi....


High prestige journals often will not accept replication studies. Only interesting and novel results. Therefore the incentive is low since there is little utility in doing the replication unless it is a prerequisite for something the researcher wishes to do.


> High prestige journals often will not accept replication studies.

But this is the opposite of how it should be! If we've learned anything from scientific research, it's that a published breakthrough study means almost nothing until it gets verified. Actually, these days I tend to believe meta-analyses almost exclusively. But you need time for these to appear.

So when I read about some exciting new study, my first reaction is, "Well, that's interesting, we'll see what other scientists say." So actually we should look forward to the first replication as the first indicator of whether the original study was correct or not. The authors of the original study and the journal board should be grateful to the replication team for helping them do their job! Rejecting replication is a disservice to science.


Maybe it’s time to smear dirt on high prestige journals: One failed replication experiment at a time.


> all related institutions should be interested in replicating breakthrough studies

Well, no. Grants don't pay for replication. Replication studies are less likely to get published or get citations. In other words, if you want to do replication, you won't be able to find money for it. If you do find money for it, there will be punishment in terms of slower career progress.

The system does not value replication.


They're saying that the institutions should care about it, not that they do.


They should in an ideal world, and if the economic system (and society at large) was optimising for scientific relevance and validity. It does not, therefore they don’t care.

Well, most of us (scientists in general) do care, but we have opposite incentives.


Maybe we, as scientists, should work to change that


> I don't quite get this point: this is cancer research, so in theory all related institutions should be interested in replicating breakthrough studies, right?

I share your concern. The operative word here is "should". In practice, replication studies have an opportunity cost because they won't end up in a prestigious journal, they are harder to use in grant applications, and the time and resources could be spent on studies that would be more useful in these ways.

> And failure to replicate it, time after time, should be a cause of concern, shouldn't it?

It would be, and it would also indicate that a technique is dodgy, or that a research group or scientist is unreliable.


> it would also indicate that a technique is dodgy, or that a research group or scientist is unreliable.

Not necessarily: it could also mean that we don't properly understand the process, and therefore essential steps/prerequisites in the experiment are left undocumented. That type of error is more fundamental than merely the technique or the researcher.


> a context where getting the "right" answer is the only incentive.

This looks like such a fundamental problem to me. Zooming out, studies are explorations into the unknown, outside of our existing knowledge. We should expect most of them to find nothing.


It is worse than nothing, because incorrect positive results and non-published negative results lead people to waste years of their lives building off something which is not true.


Yep, and "nothing" (as in "the effect of X on Y is zero") is a meaningful result, possibly a very important one, but being "nothing", it has a lot lower chance of actually getting published.


The issue is that "nothing" is not sufficient to assume zero effect; it just means the effect wasn't measured. Actual progress means trying to replicate both positive and negative results.
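To illustrate why a lone null result is weak evidence of "zero effect", here is a minimal sketch using statsmodels; the two-sample design, effect size, and sample size are assumptions chosen purely for illustration:

    from statsmodels.stats.power import TTestIndPower

    # Illustrative numbers: a modest true effect and a small study.
    effect_size = 0.2    # Cohen's d
    n_per_group = 20
    alpha = 0.05

    # Probability such a study detects the effect at all.
    power = TTestIndPower().power(effect_size=effect_size,
                                  nobs1=n_per_group,
                                  alpha=alpha)
    print(f"Power: {power:.2f}")  # roughly 0.1

With ~90% odds of missing a real effect of that size, "we found nothing" mostly reflects the study's power, not the absence of an effect; establishing a null takes equivalence testing or much larger, replicated studies.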


Well yeah, but suppose you're trying to "cure cancer", and in a (just made-up) theory an acidic environment should help against cancer, so you try lemon juice, and it turns out that lemon juice cures as much cancer as a placebo does. That seems publishable to me.


I worked in cancer genomics (and applications of ML to it) and have a couple of papers to my name. (Software methods, so repeatable in the sense that you can rerun them, but that doesn't mean the data will mean anything.)

Every ML practitioner should spend some time working with genetic data just to realize how weird things can get when you have millions of sparse features for a few thousand cases and all sorts of batch effects. To do it well you need to control for the lightbulbs in the microarray machine you were using or the sequencing center used, the minor version of the sequencing technology and reagents, the order the subjects were sequenced in, who ran the machine, the geographic origin of the subjects, etc.
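For illustration, one standard way to keep batch effects from silently inflating validation scores is to cross-validate with whole batches held together, so the model never gets tested on samples from a batch it trained on. A minimal scikit-learn sketch; X, y, and the batch labels are hypothetical placeholders, not real data:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GroupKFold, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5000))       # placeholder: samples x features
    y = rng.integers(0, 2, size=200)       # placeholder: case/control labels
    batch = np.repeat(np.arange(10), 20)   # placeholder: sequencing batch per sample

    # GroupKFold never splits a batch across train and test folds,
    # so batch-specific artifacts can't leak into the score.
    scores = cross_val_score(LogisticRegression(max_iter=1000),
                             X, y,
                             groups=batch,
                             cv=GroupKFold(n_splits=5))
    print(scores.mean())

This doesn't remove the batch effect from the data, it just stops it from flattering the evaluation; correcting for it still takes explicit covariates or dedicated methods such as ComBat.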

And then when you go out and try to tie your data into the literature you need to be aware of all sorts of self reinforcing bias in which genes get published on.

Amazing and artful applications of mathematics have been developed to address all of these, and the care with which bioinformatics papers handle things like cross-validation makes typical ML papers look like a joke. Beyond that, it's even common to go out and get more data or validate with a different technology before publishing, but it is still easy to get things wrong.

If we truly want reproducible research, we need to address the batch effects with less noisy sequencing machines and an assembly-line approach to generating orders of magnitude more data as cheaply as possible, including new data on new subjects to verify studies after the fact.


I'm shocked that close to half can be replicated. It's not a matter of academic misconduct -- biology is just hard. There are a lot of ways to fool yourself. If you doubt it, just take a look at the number of drug candidates that make it out of the lab (it's a LOT lower than 50%).

The journalism side of Science magazine is far too credulous. Most science is wrong. Even when it is published in Science (perhaps, especially when it is published in Science).


All scientific fields are hard though. The way they are advanced is with rigorous methods.

I'm not shocked either though. Not because biology is hard, but because academia is corrupt.


Not that it's corrupt, but the incentive structure is suboptimal to say the least, especially for government-funded work.

There was a recent blog post / HN thread [1] about groups never admitting failure; it's very relevant here. The funding agency has to have a process for vetting applications, and what's a more reasonable criterion than the applicants' ability to manage their own work, i.e. propose a plan and successfully execute on it? Combine that with the slow cadence of funding and you have a list of promised results for the next 3+ years which you must achieve somehow, or you'll never see a grant from that agency again. And they give out a lot of grants, so reviewing reports in detail to be able to distinguish "this result we promised was not achieved because we are incompetent" from "this result is negative, and we need to change direction based on new research" is just infeasible.

In commercial research the situation is much better in my experience, probably because your "funding agency" is more narrowly focused and is typically interested in getting useful results.

[1] https://news.ycombinator.com/item?id=29488641


Nah, I don't go to "corrupt"...never assume malice when incompetence will suffice.

Lazy? Absolutely. Prone to exaggeration? Definitely. Self-delusion? Always. Corrupt? Maybe sometimes.

Science is a human process, and is subject to all of the problems created by any group of humans. But the magic of it is that, in the long term, everyone's personal incentives produce an emergent result of overall correctness. You just have to understand that most papers are wrong, and never take anything at face value.


Hanlon's razor: never attribute to malice that which is adequately explained by stupidity. Not that academics are stupid, but they make mistakes that are usually obvious in hindsight. And yeah, there will be some malice as well, as everywhere, including here, because everybody's a human until those rigorous methods weed humanity out of the processes (way to go).


Many of the studies couldn't be replicated because the original authors didn't provide sufficient details in their papers, and then refused to reply when asked for those details.

Whether this is malice or not is kind of irrelevant - they have failed to meet the most basic requirements expected of researchers. They took the money but did not deliver the expected work artifacts.


It's funny how early 20th century science was filled with mind-blowing discoveries and realizations that fundamentally changed how we viewed the universe and to some extent even life itself, while the early 21st century is filled with a surprising amount of boring work churned out by overworked grad students that turns out to be, in unsettling quantities, bad, irreproducible science.

People love to blame bad statistics, but it's much more obvious that the issue is the "publish or perish" culture and a perpetual race to manufacture credibility in academia. If you're curious you can trace some of this back to the 70s when peer review as we know it today was essentially born, precisely as a means of proving credibility during a period of academic funding crunches.

As much as it's easy to point a finger at academia, I increasingly find the same thing in industry. Within my own career, the number of companies that are doing nothing more than finding elaborate ways to burn investor funding (without even realizing they are participating in this) has grown to consume almost the entire tech industry.

At this point the most shocking news isn't fraud; it would be to find someone out there doing real, meaningful work.


Respectfully, I don't think your characterization of early 20th century and today's research is accurate at all.

There is a stupendous amount of new science being carried out. Science that we'll look back on 80 years later (just as we look back on the science of the 1920s) and recognize how impactful that science has been. In fact, I would imagine that the science that is being worked on today will prove far more impactful than the research of the 1920s.

Think about the seminal technologies that are still in their infancy! Deep learning in its current form (AKA the form that actually works) is ~10 years old. That's it. And in those ~10 years we've gone from cute theory to being able to convincingly emulate video, translate languages, solve dictation, and generate near-human quality text. Then there's CRISPR, which could potentially cure 1000s of diseases and dramatically improve diagnostic tools, and is similarly ~20 years old. There are many thousands of other groundbreaking technologies that are being worked on as we speak.

Contemporary scientists are conducting this research.


Deep learning has nothing to do with "science", and the last algorithmic advances that enabled it happened rather more than 10 years ago: the neocognitron in 1979, Long Short-Term Memory recurrent neural nets in 1995, backpropagation (for neural network training) in 1986, etc.

In general, all the interesting work that enabled today's deep learning boom happened towards the end of the 20th century and recent advances are primarily owed to increases in computational power and availability of data sets.

Says not me, but Geoff Hinton:

Geoffrey Hinton: I think it’s mainly because of the amount of computation and the amount of data now around but it’s also partly because there have been some technical improvements in the algorithms. Particularly in the algorithms for doing unsupervised learning where you’re not told what the right answer is but the main thing is the computation and the amount of data.

The algorithms we had in the old days would have worked perfectly well if computers had been a million times faster and datasets had been a million times bigger but if we’d said that thirty years ago people would have just laughed.

http://techjaw.com/2015/06/07/geoffrey-hinton-deep-learning-...


Google published "Attention is all you need" in 2017, which introduced the Transformer Deep Learning Architecture. This is the general framework by which GPT-3 was trained, and is increasingly replacing most other architectures in NLP tasks.

So... I guess I disagree strongly with your premise? Look, it takes some time for these ideas to propagate to non-experts. In 20 years a Hacker News poster will likely post the same comment, referencing the 2017 Transformer paper to reinforce the idea that all the _good_ science in DL was done years ago.


You're mistaking the hot new fad of Transformers for "new science" and the field of NLP for a scientific field. Transformers have only taken over NLP because that entire field of research has run out of new ideas, and has been desperately grasping for straws in the last twenty years. Consequently, NLP researchers have been jumping on this bandwagon or that ever since I can remember. I enquired about a PhD in NLP as soon as I finished my degree, in 2011, but by 2014, when I finished my Master's, I knew that was the last thing I wanted to study: a subject riddled with sloppy practices, flooded with shaky empirical results, without any real theory to guide it, without even any useful benchmarks or credible metrics that could justify the empirical direction. That is why the GPT-3s and the BERTs and all those data- and power-hungry monstrosities reign supreme: because NLP is built like a house of cards, with no scientific foundation whatsoever, so that anybody can claim anything they like and there is almost nobody left to challenge even the most outlandish claims.

>> In 20 years a Hacker News poster will likely post the same comment, referencing the 2017 Transformer paper to reinforce the idea that all the _good_ science in DL was done years ago.

On the contrary, my expectation is that in 20 years from now people will point to Transformers and laugh and joke about how misguided people were "back then" to think that just building larger neural nets would lead to AGI, as they do now for earlier AI approaches.

What I'm not convinced of is that you are any different from the "non-expert" in your comment, to whom "it takes some time" for ideas to propagate. You are holding up Transformers as some kind of scientific breakthrough, but that is only the most obvious conclusion to draw from the overhyping of the approach on social media and in the lay press.


I work in the field and transformers have literally transformed science.


Can you please clarify which "field" is that and how do you "work" in it?


The field is the application of machine learning to biology (specifically drug design, but also the wider area of computational biology in all its forms). Here's some of my prior work: https://arxiv.org/abs/1502.02072 (that one doesn't use transformers, however). The DeepMind work on protein structure prediction does, and it represents one of the greatest technical achievements of the modern era.

Transformers basically took anything that was "sequence learning" (large fractions of computational biology are) and made it work 100X better overnight.


Thank you for clarifying. I see that the paper has six authors, two of whom declare equal contribution. Could you please further clarify what your personal contribution to the paper was (to the extent that this doesn't unnecessarily expose your personal information)?

Regarding Transformers, the results on protein structure prediction are certainly impressive, but they are not enough to justify your comment that "transformers have literally transformed science". At most we can tell that they have considerably improved protein structure prediction. Perhaps your comment was an exaggeration?

Edit: I can't find the paper on the ICML 2015 website, although I can see that it has a few hundred citations, as a preprint, which is not uncommon these days. But, could you point to some published work you contributed to?


I really do believe that the attempt to industrialize scientific research has failed. Science cannot be done at scale. It comes from singularly focused projects led by pure inquiry. I don't regret my PhD because I enjoyed achieving it, but I see it as a waste of time.

There's an analogy in Software Engineering. Programmers are akin to Artisans or Craftsmen, but the industry wants and needs them to be something more like little factories. I don't think that approach makes good software, even if it makes a good product. Just as I don't think the system we have created in academia produces good research, even though it makes good careers.


I don't want to point a finger, but the same time period has seen explosive growth of the life sciences. Doing good science on "squishy stuff" is just plain hard, for two reasons: First, isolating a single cause and effect is almost always impossible. Everything is multivariate and noisy. Second, many of the things we're urgently interested in learning involve human health and welfare, and there's probably a widespread feeling that we don't have a century or two to wait for solid answers.

My day job involves designing measurement equipment, some of which is used in life science. I absolutely care about knowing whether a measurement is any good or not, to the point where it's not just a paycheck, but a true passion. And I work with people in actual research who care just as much, using my stuff to do front-line work.

There are certainly problems, and I don't wish to defend the "publish or perish" system, but I don't think a cynical view of the causes is a complete or even accurate picture of what's going on.


Mind-blowing discoveries and realizations that fundamentally change how we view the universe and to some extent even life itself in the late 20th century, off the top of my head:

* Cosmology. Most of the cosmological theories (e.g., Big Bang) didn't develop until after 1950, and the Hubble Space Telescope has advanced the field considerably.

* Supersymmetry/superstring theory. I hesitate to put this here because I think this is a wrong turn for theoretical physics, but it is a very well-known model that developed in the 1970s, with the first superstring revolution occurring in the mid-1980s.

* Central dogma of molecular biology. Proposed in the late 1950s, this states that basically DNA -> RNA -> protein, and nothing goes back the other way. It has also taken a massive beating in recent decades, as we've discovered there are quite a few ways that information gets passed around in cells that don't follow this path (e.g., epigenetics, and it's an epic, acerbic war of words over whether to restrict "epigenetic" only to things like methylation of DNA or to include any other heritable non-genetic traits).

* Plate tectonics. This became accepted geological theory only around the late 1960s.

This is just a list of pure science discoveries I can think of off the top of my head. Stretch to applied science advancements, and there are several that have caused massive revisions to fields--organic synthesis techniques, NMR spectroscopy, radiocarbon dating, and genetic sequencing are all things that came to mind while compiling the above list. If you switch over to the math/CS side of things, you can also consider that the development of RSA quite literally opened up a new world of what can be done with computers, or the Cook-Levin theorem that really kicks off complexity theory (and leads to the asking of the question P=NP). Pretty much everything about CS only develops in the late 20th and early 21st century.

Hell, that last comment prompted me to remember something else that rather fundamentally changes how we view the universe. It turns out that it's far easier to teach a computer how to play chess than speak with the intelligence of a three-year-old, never mind an adult. What can be more mind-blowing than discovering that we can't make a more effective learner than a toddler?


I'm not agreeing or disagreeing with the OP, or you, but the 1950's and '60s should be "mid-20th century", not "late". I don't know if there's a commonly accepted definition of "early", "middle" and "late" that does not involve waving hands.


FWIW, most studies of academic problems sort of pinpoint the [late?] 90s as a change point era for various reasons.

I also have a hunch that replication problems are not equal everywhere. I suspect that biomedicine and the social sciences are worst off (although I think CS ML might also have some issues).


I assume we have a whole lot more scientists now than we had back then, both in total and per capita.

Still, if scientific progress is in any way real and not cyclical (as in Kuhn), then science should be getting harder. It stands to reason that if there are fewer and fewer secrets to find, finding them (or randomly stumbling upon them) will be a lot rarer.

We tend to assume that there is an infinite amount of knowledge to be gained of the natural world, but it's not necessarily so. It's especially not evident that our tools are evolving as fast as they need to, or even that they can evolve much further.

I'm not saying "everything has been found", more like "all progress makes further progress require more effort, by definition".


I agree with this. I’ve been coming to similar conclusions myself.


If you have not, you should read The Idea Factory.

https://www.amazon.com/Idea-Factory-Great-American-Innovatio...

We owe most of our "modern world" to the hard science done at Bell Labs in the 1900s, before the corporation was singularly focused on quarterly profits.


Dear HNers, I hope that you will remember this article (or all the previous ones) the next time you invoke science like a religious dogma in a discussion.

If you can't reproduce it, it's not science.

I believe in science, but I also know scientists are humans and thus fallible.

We live in a post-truth world, I have no certainty about anything anymore except what I can check by myself, which is not much.


Worse, scientific organisations are institutions. I trust individual scientists to be reasonably honest, but I trust the bureaucratic organisations they belong to as far as I can throw them...


I really would like to do research in a specific field, but I’m just so worried about the bureaucracy and horror stories of academia. I mean I’ve had to operate in bureaucratic environments, but it just seems so antithetical to what my actual goals would be. Just the amount of bureaucracy I’ve encountered trying to get into universities has rubbed me wrong.

Sadly I couldn’t remotely afford the equipment to do things on my own (nor do I think I could handle the initial learning workload these days without some structure).


The equipment can be affordable depending on how you structure things. For example, there are opportunities like the Molecular Foundry user program [1] which provide grants for the costs of running experiments on expensive machinery. The main trick is having a specific enough question to answer that you can define how the grant would help you do so.

There are also community initiatives like BosLab [2] which are more similar to maker spaces in structure.

[1] https://foundry.lbl.gov/user-program/get-started-here/

[2] https://www.boslab.org/


Speaking of reproducible science, I am really curious what cancer experts' take is on this video [1], and their thoughts on cancer not being a somatic genetic disorder but rather that being a downstream effect of a mitochondrial metabolic disorder.

[1] - https://www.youtube.com/watch?v=06e-PwhmSq8


> If you can't reproduce it, it's not science.

Let me throw a tiny wrench into your logical reasoning: I can't reproduce most results; does that mean most results are not science?

Absence of evidence is not evidence of absence. You should not expect scientific experiments to be replicated every time.


You are downvoted, but I think you make an important point: just because results can't be replicated doesn't mean it's not science. As long as someone keeps investigating the results, showing that they don't work and why, it's still science; we learn from mistakes (which is the essence of the scientific method). It stops being science when we just accept any results that are published and turn them into dogma.


Of course I'm not speaking about being unable to reproduce an experiment because you don't have the lab or tools available.

But if it can't be reproduced under the exact same conditions as the original experiment, then it's not science.


> because you don't have the lab or tools available.

I agree, that's trivial. What's not so trivial: Where does the "experiment" start and where do the "conditions" end?

In theory the scientific method is easy; in practice (i.e. state of the art research) you are studying highly complicated phenomena that are, by definition, not well understood. Often you don't know the mechanism of action, let alone the factors that influence it. I don't blame you; if you haven't done scientific research, it is very difficult to appreciate.

So again, from the failure to replicate a result it does not follow that the result is wrong, or that "it's not science". That's a misunderstanding of the replication crisis.


You're right, I'm not a scientist. I'll give it to you that researching in a scientific way is science, but if it's not reproducible then what you publish shouldn't be established as "truth" but as a hypothesis that needs further research.

What bothers me is that some people are very dogmatic about things that are published and most of the time those people aren't scientists either.


Yes, they're not science to you. You're not doing science/being scientific when you believe those results- you're just believing.


Let me re-iterate my point, because I see a number of commentators here making this logical mistake. When scientist A shows a result, and scientist B can't reproduce it, it does not mean the result is wrong.

Of course the result could be wrong, and, assuming scientist B knows what they are doing, it's an indication that the result is indeed wrong. But it could also be true.

This is not a comment on the state of science and whether incentives are set correctly at the moment (they aren't).


> When scientist A shows a result, and scientist B can't reproduce it, it does not mean the result is wrong

Correct, but until scientist A's result can be reproduced by someone, anyone, relying on scientist A's result is really more akin to a religious leap of faith than science. When any belief based on faith alone is presented as an unquestionable truth, it deserves skepticism.


> If you can't reproduce it, it's not science.

Just a point that not all science can be studied empirically and reproducibly.


That's... debatable. A core principle of science is being able to test and verify your theory, which means it must be empirical and reproducible (if you can test it then so can I, and if we get different results, then the theory is wrong and needs to change, or the tests were wrong, etc.). If it's not, it borders on belief, which is not science (https://en.wikipedia.org/wiki/Science#Verifiability ).


i've said it before and i'll say it again, this isn't going to improve until there are more grant dollars and recognition for replication than there is for novel discovery.

replication is boring, yet critical, and we should be compensating highly skilled scientists for doing it.


It won't change until a track record of producing studies that fail to replicate hurts the career of an academic more than not publishing anything at all. As long as publish-or-perish provides a stronger incentive than the negative repercussions of publishing nonsense, this will continue.


I don't know if one necessarily wants to punish for failure to replicate, but I do think that falsified data/records should lead to a death sentence of a scientific career or result in criminal charges if necessary (e.g. for defrauding the associated institutions/grantor). I guess sort of like what Elizabeth Holmes is facing currently.


That sounds like spending money on trying to replicate results that can’t be replicated doesn’t it? Isn’t that what this study was doing? Doesn’t it make more sense to revisit how the existing funding is spent?

I have very little insight into this area of research, so I’d just have to guess everyone’s in a hurry and reaching for results or published papers.


> Doesn’t it make more sense to revisit how the existing funding is spent?

i don't know what you'd find in a financial audit that a scientific audit wouldn't tell you, but a passing financial audit won't tell you if a scientific finding actually sticks. therefore, seems like a better deal to audit the results, no?


I didn't mean spent in a financial sense. I meant in how the research itself is performed (or maybe how it's granted and re-upped), as that seems to be the issue. If research continues as it is, won't most reproduction studies still fail if this posted study is accurate? It makes more sense to treat the cause and not the symptom.


Part of this is being addressed through increased reporting requirements for data, software, analysis pipelines, pre-registered protocols, and the like. This has led, anecdotally, to some improvements in quality, and at minimum it makes initial due diligence on the publication significantly more impactful.


> "i don't know what you'd find in a financial audit that a scientific audit wouldn't tell you"

Financial audits would uncover financial fraud.

> "seems like a better deal to audit the results, no?"

Better to audit both.


i suppose. i suspect most financial fraud would be caught by the accountants and grant administrators at respective institutions. accounting and best practices around it have been around for a long time.

this discussion is about ensuring scientific velocity by increasing the rigor required to declare success and add new knowledge to the commons.

unless there's large scale fraud going on, i honestly don't think bringing in additional auditors to look at books that are already professionally kept is going to benefit anyone other than said auditors.


> If it was only their money on the line, that would probably suffice. But when researchers (and by extension the institutions they work at) are receiving government grants, I think there need to be independent audits.

these are all nonprofit institutions, some are government run, i'm pretty sure they're required to engage independent professional auditing firms just as a condition of their tax exempt status, let alone acceptance of federal grant dollars.


> i suspect most financial fraud would be caught by the accountants and grant administrators at respective institutions.

If it was only their money on the line, that would probably suffice. But when researchers (and by extension the institutions they work at) are receiving government grants, I think there need to be independent audits.


100% agree. The NIH could make replicating studies a requirement of funding for graduate or postdoctoral training. In addition to the scientific and societal benefit, it would just be good training.


I would be glad to have funding explicitly set towards appropriate and open documentation of the actual protocols, equipment, and materials, plus a combined effort to enhance these reporting requirements by the journal. There has to be a balance between the one-off interesting new findings and the large-scale work, but clinical trials definitely fall under the latter category.


your first task in lab will be to reproduce our competitor's work!

in seriousness though, there's enough money and trained scientists looking for work... moreover i think there's a whole personality type that is prevalent in science that would be very happy to replicate and reproduce results...


This is a self-resolving problem.

- breakthrough study is published

- because it’s breakthrough, it attracts attention and other labs look at it (we often tried to repeat breakthrough work in our lab)

- if it can’t be replicated, interest just peters out OR it drives someone else to “fix” it. If it’s important enough that people need to know, the failure gets published.

- if it can be replicated further research continues


The problem is with step 3: an incredible amount of energy goes into building upon a bad study simply because it has not been retracted.


95% of science is doing stuff that doesn’t work. Replication - independent replication - is not a wasted effort.


> Additional problems surfaced when labs began experiments, such as tumor cells that did not behave as expected in a baseline study.

Given the various difficulties of culturing cell lines, this is not surprising.

What is infuriating to me is authors being unresponsive to queries about data and protocols. That should result in some sort of ding to the paper on the journal's website.


As someone who took pains to make my work reproducible, ultimately leading to my lab continually expanding on the project as well as to a patent, I can tell you: yeah, many researchers are full of shit. But it only takes a few to push the science forward.


I briefly worked for a startup that took cancer patients' genome sequence data and looked up relevant medicines and trials in PubMed. They sold this as personalized cancer treatment for tens and hundreds of dollars. One key takeaway for me from that experience was that I should not smoke and should not eat processed meat if I want to live a long life. Neither small cell lung cancer nor gastric cancer is very treatable or fun.


Almost half of high-impact cancer lab studies can be replicated?


Good thing we don't have empirical research in software development. This means we avoid a replication crisis.


Hah! This paper describes a deep-dive into papers on database research that made quantitative claims, and about half of the investigated papers had problems with replications of some of their claims.

The Repeatability Experiment of SIGMOD 2008 https://pages.saclay.inria.fr/ioana.manolescu/PAPERS/SIGRecR...


Benchmarking is often close enough.


There is only one real solution to the paper quality crisis. High quality journals need not only to peer review papers but also to do independent replication or verification before publication. This is expensive and time consuming, but better and cheaper than the current house of cards that is science.


In mice?

Also, the lab animal telomere issue has corrupted almost all animal studies in the last 50 years, but there doesn't seem to be any effort to correct course, or even to acknowledge the problem.


What lab animal telomere issue? Never heard of that in my decade of working in a lab.


https://www.nobelprize.org/prizes/medicine/2009/illustrated-...

2009 Nobel prize given for this work.

Lab animals, due to the way they are bred, are essentially susceptible to cancer and health issues not seen in normal wildtype populations. Shortened telomeres are the primary danger, increasing mutation rate and cell senescence.

Essentially, all data involving the expectation of normal cellular behavior is subject to question - signaling from lab animals is corrupted.


The Nobel prize is about the discovery of telomeres, not about anything to do with lab populations.


Do you have a source? Because as I understand it, telomere length is reset each generation and so is generally not transmitted to the next generation.

Apparently leukocyte telomere length is heritable though, is that what you are talking about?



What exactly are you interpreting out of this paper now? How does this invalidate cancer studies in mice?

Most cancer studies in mice are xenografts of human cancer cells so any relevance of mice telomeres is completely lost anyway.

Also, this obsession with telomeres being the fundamental key to all our biological problems seems so conspiratorial. Like, you think our cells, which do unimaginable things, can't keep their DNA ends in order? Like that's what is gonna spell their doom? Or do you consider telomeres just a symptom of a cell's longevity, decided by other factors?


you're just confused.

lab rats and mice aren't useful model organisms, but it's not for this reason.


This seems relevant both for the links and the rebuttal (?): https://biology.stackexchange.com/questions/90609/why-might-...


Probably regurgitating it from the Weinstein stuff.

https://uncancelclub.com/bret-weinstein-1/


They're probably referring to: https://pubmed.ncbi.nlm.nih.gov/11909679/

I think this is one symptom of a much greater issue: the inbred mouse strains we use are disastrously weird (arising from the initial population artificially bred to make research easier). For the sake of controlling for genetics, we've chosen to make lab research translation drastically less effective.



Also curious to know.


Reminds me of the Vox article with the classic line "Everything we eat both causes and prevents cancer".

https://www.vox.com/2015/3/23/8264355/research-study-hype


slow clap...

Unfortunately, the argument of "too expensive" or "too difficult" doesn't fly with me when astro/particle/nuclear physics routinely splits its people/experiments/resources into multiple teams, eventually comparing and contrasting results.

"Big-pharma", often supporting this research has enough money/resources to waste it seems that they will pursue any research to test it rather than verify it outright. And this culture has fed back into the mindset of most biologists these days it seems...

This has to be one of the biggest drains/inefficiencies in the field, and it is driven by naive corporate policy/greed... such a waste.


Perhaps these unrepeatable studies will be obviated by personalized targeted treatments, given the ubiquity of Illumina (etc) sequencers and designer antigens (BioNTech and Moderna, for example)?


There are reasons to suspect problems could be worse, not better, with personalized inference.

Defining replication gets tricky when your N is 1, and your population is one system, perhaps at a particular moment in time.


Does anyone know of such a systematic replication effort in other fields of science? I wonder if the conclusion would be much different.


There are a few. One of the popular ones on this site, https://www.science.org/doi/10.1126/sciadv.abd1705, shows the results are pretty similar in different fields. (A search for "replication" on Hacker News will bring up some others.)


Cancer as entropy: the plot thickens


> Cancer as entropy: the plot thickens

Isn't cancer basically the result of entropy? With enough imperfect cell divisions, sooner or later one of them is going to have a mutation that causes unbounded cell division (i.e. growth).
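That intuition is just the arithmetic of small risks accumulating over huge numbers of trials. A toy calculation; both numbers below are made up purely for illustration:

    # Chance of at least one "runaway" division after many imperfect divisions.
    p_per_division = 1e-9     # illustrative per-division probability of such a mutation
    n_divisions = 1e9         # illustrative number of divisions over a lifetime

    p_at_least_one = 1 - (1 - p_per_division) ** n_divisions
    print(p_at_least_one)     # about 0.63 for these inputs

However small the per-division risk, the cumulative probability climbs toward 1 as the number of divisions grows.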


I wrote a brief comment in a previous thread about some of the causes of this replicability crisis: https://news.ycombinator.com/item?id=29478946

Tl;dr misaligned incentives in science, the academic job market, and the funding system, the publish or perish dogma, and an overproduction of PhDs.


What's your connection to the industry, that you recommend fewer scientists and greater centralization of research?


At least it's better than psychology...



