
There are many papers showing similar results. (My personal favorite is https://www.science.org/doi/10.1126/scitranslmed.aaw8412 where they removed the claimed target for various cancer therapies and many still worked.)

If you've ever been within a mile of a research lab this shouldn't be remotely shocking. Typical research is done rapidly, sloppily, and in a context where getting the "right" answer is the only incentive.

The same replication crisis that hit psych is (IMO) only being held back in bio because bio is harder to replicate, due to tooling, protocols, reagents, cell lines, etc.

In complex systems you have the degrees of freedom to be wrong at a scale that folks still do not appreciate.




> If you've ever been within a mile of a research lab this shouldn't be remotely shocking. Typical research is done rapidly, sloppily, and in a context where getting the "right" answer is the only incentive.

Let me preface this by saying that 50% sounds like an awful lot, even bearing in mind the following.

On top of that, these studies focus on highly complicated phenomena, where almost anything could play a role. Even in well-run studies, it is common for a factor that was not accounted for to turn out to be important. Flukes happen as well.

Failure to replicate per se is not a problem, if the original study was done well and honestly. It still provides data points that could be reinterpreted later in the light of other studies.

Now, the fact that there is so little incentive to publish confirmation studies is a big problem, because it means that these non-reproducible works are not as challenged as they should be.

Remember that one study is meaningless. Even with tiny rates of false positive or negative, erroneous conclusions are bound to happen at some point. A result is not significant unless it has been observed independently by different research groups.
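
To put rough numbers on that, here is a minimal back-of-the-envelope sketch in Python; the false positive rate, power, and prior below are assumptions made up for illustration, not figures from the article:

    # Rough illustration of why a single "significant" result means little.
    # All numbers are hypothetical assumptions, not data from any study.
    alpha = 0.05   # assumed false positive rate per study
    power = 0.80   # assumed probability that a real effect is detected
    prior = 0.10   # assumed fraction of tested hypotheses that are actually true

    # Probability that a positive result reflects a real effect (positive predictive value)
    ppv_single = (power * prior) / (power * prior + alpha * (1 - prior))

    # Update again if an independent group also reports a positive result
    ppv_replicated = (power * ppv_single) / (power * ppv_single + alpha * (1 - ppv_single))

    print(f"P(real effect | one positive study):          {ppv_single:.2f}")      # ~0.64
    print(f"P(real effect | independent replication too): {ppv_replicated:.2f}")  # ~0.97

Under those made-up assumptions, roughly a third of single positive results are false, while an independent replication pushes that down to a few percent.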


"Failure to replicate per se is not a problem, if the original study was done well and honestly"

No, but in this case a common reason for failure to replicate was that the papers didn't actually include enough detail to replicate them to begin with. This should be grounds for failing "original study done well", because the reason they're paid to do research in the first place is to propagate accurate and useful knowledge. Missing critical details means the study is a failure.


I see it as a systematic failure of the peer review system. Referees should be particularly careful to check that the article includes all the information needed to replicate the results. In my experience, this is often not the case.


> the fact that there is so little incentive to publish confirmation studies is a big problem

I don't quite get this point: this is cancer research, so in theory all related institutions should be interested in replicating breakthrough studies, right? And failure to replicate it, time after time, should be a cause for concern, shouldn't it?


The institution as a whole may have the incentive, which plays out on a 5-10 year timescale. Funding agencies in principle care, but no individuals are rewarded based on the results on this timescale. Industry, venture capital, and the like do care because they need reliable-enough results to build on, but they can filter for the good ones and scale from there.

The lifecycle for a junior PI is about 5 years, which really means they have about 3 years to produce a result sufficient to advance. That is enough time for a bit of validation but unlikely for anything beyond their own lab, so the result is likely to go untested elsewhere. But who in that lab cares? They just need to get the next job, so a 3-4 year timescale is fine. The PI cares long-term, because a lack of new productivity means a lack of ongoing funding, which means they either improve the quality of their processes (which means herding the junior researchers through that system) or simply continue to hack their way through.

There is a reason why people moan about scientific theories advancing one death of an old professor at a time.


OK, I understand the individual incentives at play here. But the institution also has a governing body and other stakeholders interested in actual results, right? If they see that someone else has made a breakthrough and it looks like a very promising method, it seems logical to check whether it really works and, if it does, to improve it so that clinical studies become possible and patients hopefully one day benefit, no? Otherwise, what is the point of sharing cancer research if nobody is interested in replicating the important studies?


The governing body and related stakeholders care at varying levels. For government-sponsored research, the primary metrics are publications, impact factor, number and prestige of collaborations, diversity of staff, and velocity of money (see [1] for an example, pages 3-4). At the most cynical level it kind of reduces to welfare for educated people. The end result is that there are researchers producing good work, but you have to filter to find them.

For external industry funders, they have a strong and vested interest in identifying good output before it is public. The strategy here typically is a mixture of identifying good researchers and making them kings (consistent funding, direct lines of communication, access to nonpublic materials, etc), and of placing lots of small investments to get in the room for discussion. An example of this mixture is the BP deal at UC Berkeley [2].

As for shifting the standard of care, this is a commonly-expressed goal, but often the research is far enough from clinical practice that the individuals doing the work lack close communication with the actual practitioners. Some institutes are trying to resolve this through close physical and social proximity (e.g. TU Dresden's medical campus), but this is far from the norm.

[1] https://www.helmholtz-muenchen.de/fileadmin/Jahresbericht/Ja...
[2] https://www.berkeley.edu/news/media/releases/2007/02/01_ebi....


High prestige journals often will not accept replication studies, only interesting and novel results. Therefore the incentive is low: there is little utility in doing a replication unless it is a prerequisite for something the researcher wishes to do.


> High prestige journals often will not accept replication studies.

But this is the opposite of how it should be! If we have learned anything from scientific research, it is that a newly published breakthrough study means almost nothing until it gets verified. Actually, these days I tend to believe meta-analyses almost exclusively. But you need time for those to appear.

So when I read about some exciting new study, my first reaction is, "Well, that's interesting, we'll see what other scientists say." So actually we should look forward to the first replication as the first indicator of whether the original study was correct or not. The authors of the original study and the journal board should be grateful to the replication team for helping them do their job! Rejecting replication is a disservice to science.


Maybe it’s time to smear dirt on high prestige journals: one failed replication experiment at a time.


> all related institutions should be interested in replicating breakthrough studies

Well, no. Grants don't pay for replication. Replication studies are less likely to get published or cited. In other words, if you want to do replication, you won't be able to find money for it. And if you do find money for it, you will be punished with slower career progress.

The system does not value replication.


They're saying that the institutions should care about it, not that they do.


They should in an ideal world, and if the economic system (and society at large) was optimising for scientific relevance and validity. It does not, therefore they don’t care.

Well, most of us (scientists in general) do care, but we have opposite incentives.


Maybe we, as scientists, should work to change that


> I don't quite get this point: this is cancer research, so in theory all related institutions should be interested in replicating breakthrough studies, right?

I share your concern. The operative word here is “should”. In practice, replication studies have an opportunity cost: they won't end up in a prestigious journal, they are harder to use in grant applications, and the time and resources could be spent on studies that would be more useful in those ways.

> And failure to replicate it, time after time, should be a cause of concern, shouldn't it?

It would be, and it would also indicate that a technique is dodgy, or that a research group or scientist is unreliable.


> it would also indicate that a technique is dodgy, or that a research group or scientist is unreliable.

Not necessarily: it could also mean that we don't properly understand the process, and therefore essential steps/prerequisites in the experiment are left undocumented. That type of error is more fundamental than merely the technique or the researcher.


> a context where getting the "right" answer is the only incentive.

This looks like such a fundamental problem to me. Zooming out, studies are explorations into the unknown, beyond our current knowledge. We should expect most of them to find nothing.


It is worse than nothing, because incorrect positive results and non-published negative results lead people to waste years of their lives building off something which is not true.


Yep, and "nothing" (as in "the effect of X on Y is zero") is a meaningful result, possibly a very important one, but being "nothing", it has a lot lower chance of actually getting published.


The issue is that “nothing” is not sufficient to assume a zero effect; it may just mean the effect wasn’t detected. Actual progress means trying to replicate both positive and negative results.
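
A quick way to see this is a toy simulation (the effect size, noise, and sample size below are made-up assumptions, nothing from the studies under discussion):

    # Toy sketch: an underpowered study often reports "nothing" even when
    # a real effect exists, so a single null result is not a zero effect.
    import random, statistics

    def fraction_missed(effect=0.5, n=20, trials=2000):
        misses = 0
        for _ in range(trials):
            control = [random.gauss(0.0, 1.0) for _ in range(n)]
            treated = [random.gauss(effect, 1.0) for _ in range(n)]
            diff = statistics.mean(treated) - statistics.mean(control)
            se = (statistics.pvariance(treated) / n + statistics.pvariance(control) / n) ** 0.5
            if abs(diff / se) < 1.96:   # crude z-style test at alpha = 0.05
                misses += 1
        return misses / trials

    print(f"Studies reporting 'nothing' despite a real effect: {fraction_missed():.2f}")
    # With 20 samples per arm and a moderate true effect, roughly two thirds
    # of the simulated studies come back "not significant".

Failing to detect an effect and showing the effect is near zero are very different statements; the latter needs far more data, or independent replication.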


Well yeah, but say you're trying to "cure cancer", and a (just made-up) theory claims an acidic environment should help, so you try lemon juice and find that it cures exactly as much cancer as a placebo does. That result seems publishable to me.





