This just highlights an emerging trend that I'm already seeing in university labs: distrust of science, especially of experiments you didn't do yourself. It's worrying enough to see the public mistrust science, but I'm now seeing this happen in biology labs across my university.
This article is understating the effect on PIs and their labs. During one of my conference meetings with other PIs, we estimated that $3 million of our combined grant money had been spent trying to replicate incorrect or flawed studies.
It also has an extremely demoralizing effect down the lab pipeline, on everyone from PhD students and postdocs to lab techs.
I've been hearing more and more disillusionment about the state of science from lab members. I've already had to have a meeting with several postdocs who said something along the lines of "I don't trust 90% of published articles."
Something is really wrong in science, and this is just the beginning.
The scientific community needs to have a serious public discussion about the worries many of us are already having in private. But since no PI, myself included, wants to jeopardize their career, I fear things are going to go off a cliff.
EDIT: If any rich Silicon Valley investor is reading this, please invest in companies trying to automate and standardize research lab work.
> Something is really wrong in science, and this is just the beginning.
> But since no PI, myself included, wants to jeopardize their career, I fear things are going to go off a cliff.
I think part of the decline in biology specifically may be the rapid growth of NIH-funded research in the '90s, due to the success of molecular biology and the need for many warm bodies at the bench, which was followed by the NIH budget "crunch" of the last decade. This combination dramatically increased the competitiveness of getting grants and high-impact papers. In this environment, those who are careful in their claims are selected out of the career pool (not all, obviously, but enough to affect the proportion).
A couple of my experiences:
I had one paper in a mid-range journal where we found a very interesting biological pattern in the data, which we discussed but ultimately decided to dismiss because there were confounding factors. A very well known lab took the exact same data, did a couple minor experiments to justify dismissing our concerns, and published in a top-tier journal.
In another case, I reviewed a paper from a VERY famous lab and their own metrics proved that 95% of the data was noise. They simply removed the offending metrics and published in a different, equally high-profile journal.
> we found a very interesting biological pattern in the data
Isn't that related to p-hacking? Get a lot of data, search for a pattern, then publish some theory the pattern supposedly "proves", instead of having a theory first, setting up experiments that could disprove it, and then analyzing the data from those experiments.
The observed pattern exactly fit into a previously established hypothesis and was totally unexpected. I did not slice and dice the data with multiple statistical tests. The only way it could have been related to p-hacking is that I am familiar with a bunch of biological theories and could recognize data patterns that are compatible with any of them. So, maybe a Bonferroni-type correction of 30-70x (i.e., number of theories my internal neural net has been trained on)?
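As a rough illustration of the Bonferroni-type correction being joked about here, a minimal sketch (the p-value and the 50-hypothesis "family" are made-up numbers standing in for the 30-70 theories mentioned above):

```python
# Bonferroni-type correction: if an observed pattern could have matched any of
# m candidate hypotheses, the per-test significance threshold shrinks to alpha/m.
alpha = 0.05
m = 50                  # hypothetical number of candidate theories
p_observed = 0.0004     # hypothetical p-value for the observed pattern

adjusted_threshold = alpha / m
print(f"per-test threshold: {adjusted_threshold:.4f}")                  # 0.0010
print("still significant after correction:", p_observed < adjusted_threshold)
```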
> EDIT: If any rich Silicon Valley investor is reading this, please invest in companies trying to automate and standardize research lab work.
This. Take gut microbiome research. Hardly any of it is reproducible. Many labs find differences in microbial abundance between CFS/ME/IBS patients and controls, but every lab finds a different set of microbes to blame (on top of the fact that the overlap between control groups is terrible). I do not understand how folks can publish this work. At the start of the research I could understand it, but we are a decade into this modality.
And then there are the consumer microbiome companies like uBiome. I am an early adopter and have spent thousands analyzing the results they sent me, but I have yet to find anything repeatable or reliable. Even looking at their SmartGut test, I do not understand how they can peddle this stuff. I have yet to find a doctor who is using their results. Until the reproducibility problem is solved, it seems like everyone is just playing in their own sandbox, and science can only go so far operating in this fashion. I wish these groups (uBiome, American Gut, Thryve, etc.) would collaborate to resolve these discrepancies.
>>Take gut microbiome research. Hardly any of it is reproducible.
Totally agree, and even one level up it's unreproducible, or done in ways that are pseudoscientific - nutrition and the effects of diet on humans. So many uBiome-like places doing blood testing and other diet-related "science" based on complete nonsense.
The gut microbiome is in constant flux; it constantly changes every day.
A sample sent to 3 different companies on 3 different days would have 3 different results, even if their analysis was perfect.
I've not had my stools analyzed. If I were to, the biggest thing I would look for would be the level of diversity in my microbiome, as opposed to individual strains.
I only have minor knowledge about this, so maybe I am missing something.
This is a complicated question, but let's break it down to the simplest problem I can think of. Suppose I do a study comparing the gut microbiota of CFS patients to controls. Then another group does the same study. The control groups had better have reasonable overlap. If they do not, then both labs should process the same raw samples and see if they get reasonable overlap. If not, that is the place to start fixing things.
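As a rough sketch of what "reasonable overlap" between two labs' control groups could mean in practice, here is a minimal example using the Jaccard index (the genus lists are entirely made up):

```python
# Hypothetical genus-level profiles from two labs' control groups.
lab_a_controls = {"Bacteroides", "Prevotella", "Faecalibacterium", "Roseburia", "Blautia"}
lab_b_controls = {"Bacteroides", "Faecalibacterium", "Blautia", "Akkermansia", "Ruminococcus"}

# Jaccard similarity: taxa seen by both labs, divided by taxa seen by either lab.
shared = lab_a_controls & lab_b_controls
union = lab_a_controls | lab_b_controls
jaccard = len(shared) / len(union)

print("shared taxa:", sorted(shared))
print(f"Jaccard overlap: {jaccard:.2f}")   # ~0.43 for these made-up lists
```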
> The gut microbiome is in constant flux; it constantly changes every day.
It depends on whom you talk to, which is somewhat my point. Eric Alm's lab showed that the gut microbiome is relatively stable over time (they sampled daily for over a year). Other labs have shown that it varies daily. My own lab has shown that it varies greatly depending on where you take the sample (stool is actually quite heterogeneous in terms of bacterial populations). You could argue about how you define "same" as a function of scale, in this case taxonomic level. But the problem still remains.
> A sample sent to 3 different companies on 3 different days would have 3 different results, even if their analysis was perfect.
If the analysis was "perfect" (whatever that means), then why are the results different? Can we model the differences?
The real truth is that stool is quite heterogeneous in terms of bacterial populations. It's quite easy to get an order-of-magnitude difference between abundances in the same sample.
> I've not had my stools analyzed. If I were to, the biggest thing I would look for would be the level of diversity in my microbiome, as opposed to individual strains.
Diversity at which level, though? Most consumer sequencing companies do not go down to the strain level. Most resolve to genus, and you get about 50-75% of the species.
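For what "diversity at a given level" might look like concretely, a minimal sketch computing Shannon diversity from genus-level relative abundances (the numbers are invented for illustration):

```python
import math

# Hypothetical genus-level relative abundances (should sum to ~1).
abundances = {
    "Bacteroides": 0.40,
    "Prevotella": 0.25,
    "Faecalibacterium": 0.20,
    "Blautia": 0.10,
    "Akkermansia": 0.05,
}

# Shannon diversity H = -sum(p * ln p); higher means a more even, diverse community.
shannon = -sum(p * math.log(p) for p in abundances.values() if p > 0)
print(f"Shannon diversity (genus level): {shannon:.2f}")   # ~1.4 for these numbers
```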
> I only have minor knowledge about this, so maybe I am missing something.
Appreciate your honesty. Science has done a subpar job (in my opinion) of being clear with the public about how well we (the community) actually understand what is going on. All we really know now, broadly speaking, is that the gut microbiota of patients is different when compared to healthy controls, but we really have no idea why. That doesn't mean we don't understand the metabolite profiles, but the simple fact that Prevotella has been implicated both in sickness and in health should indicate that reproducibility would be very insightful.
I upvoted your comment, but want to add a compliment. This kind of dialogue and knowledgeable comment are why I keep reading HN, often starting with the comments.
Thank you for addressing the previous comment in a thorough manner while not talking down to him or the rest of us who are smart and interested but not educated in the specifics of this domain.
Thanks. To my mind, what gets accomplished by berating someone in this context? The poster said they don't really know much, so I assume they are part of the general public. It's up to science to educate the public on these matters.
Also, I have run into quite a few microbiome zealots, all of whom are what I call armchair scientists -- no college background, yet they think they know everything from reading abstracts on PubMed. The kind of arguments they put up cannot be well addressed on the internet, because they can always find some study which proves their point. Which proves my point -- the studies are inconsistent.
I welcome thoughtful and engaging conversation about the matter and have no problem being told I am wrong when evidence is presented.
Thank you for the detailed response. I am part of the general public, so to speak, but am also interested in this research because I have a certain disease. I am treating this disease with diet modification, which seems to be working. (The Specific Carbohydrate Diet.) However, it seems like there are many unanswered questions as to why it works; that part is very much not understood.
> The control groups had better have reasonable overlap.
I wonder if there is enough understanding of what makes a "healthy" gut microbiome in order to properly assess what "overlap" means. Isn't a microbiome a complex system with many moving parts? For example, a study of successful businesses could include many successful businesses that are completely different from one another. Comparing these to "unhealthy" businesses would probably not be very useful.
It seems like you're answering this question by saying the different labs should process the same samples, to at least see if the same samples return similar results. Is that right?
I wonder if there is a resource for further learning on this topic. Reading scientific studies, as you say, provides only limited value to a layman. The number of people with diseases affected by the gut microbiome is very, very large.
> I've already had to have a meeting with several postdocs who said something along the lines of "I don't trust 90% of published articles."
GOOD. They should feel that way. Science is in a state of crisis, and that 90% number is an accurate reflection of reality. In the study mentioned below, 90% of landmark cancer research papers could not be reproduced. Someone will surely chime in that their own favorite branch of science is better, but better than 90% failure is a really low bar.
Yes and no.
In many ways the problems we are addressing are much more complicated than those being addressed 100 years ago. Often, advanced statistical methods need to be used to make inferences. We are often looking for a 2% signal over noise while trying to understand complex dynamical systems. Even without anything nefarious, and with careful execution of rigorous methods, unmeasured third variables can differ across study samples, and you can't measure everything. Even if you could, you won't have the statistical power. Random assignment is nice, but not every field can do it ethically. Even with random assignment, there is no guarantee that the effect of unmeasured third variables will average to zero. That is, science is harder than many give it credit for.
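To make the statistical-power point concrete, a rough back-of-the-envelope sketch of the sample size needed to detect a small effect; treating the "2% signal" as a standardized effect size of d = 0.02 is my assumption, not necessarily the commenter's exact meaning:

```python
from scipy.stats import norm

# Two-sided test at alpha = 0.05 with 80% power, two equal-sized groups.
alpha, power = 0.05, 0.80
d = 0.02                              # assumed standardized effect size
z_alpha = norm.ppf(1 - alpha / 2)     # ~1.96
z_beta = norm.ppf(power)              # ~0.84

# Standard approximation for a two-sample comparison of means.
n_per_group = 2 * ((z_alpha + z_beta) / d) ** 2
print(f"~{n_per_group:,.0f} subjects per group")   # on the order of 39,000
```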
That said, the competition and incentives in science are perverse. Peer review quality has declined, if only because everyone is in a rat race to survive, and being a peer reviewer does not do much for getting you a job or tenure.
A lot of the problem is that people pay attention to the hype of news articles rather than the particulars of the papers.
In the one you link, they don't describe what they mean by "hold up." It turns out that the only definition those authors give is in the caption of Table 1:
>The term 'non-reproduced' was assigned on the basis of findings not being sufficiently robust to drive a drug-development programme.
Now, this is a completely different thing from reproducing the science itself. Driving a drug-development programme is not biology; it means asking whether this phenomenon that you found in one system applies to the vast majority, so that a pharma company like Amgen can run a trial, put all stage-4 patients on it, and have it work on enough people that the trial has good enough statistics.
However, that's really not how cancer works. Cancer is extremely diverse. If somebody is publishing a "landmark" paper (whatever their definition of that is), then they are most likely publishing something new and unexpected. That means the landmark finding is almost certainly not going to generalize widely enough that a large pharma can run an uninformed clinical trial and throw it at all patients without pre-selecting the patients in whom it will most likely work.
Further, that silly thing is just an editorial. It's a plea for science journals to only publish findings that hold in the majority of cancers, or something like that. If we did that, we'd know far less about cancer, because it treats something that's actually diverse as a monolithic entity.
Are you going to get a particular cell line to have exactly the same gene expression as another lab? Probably not, if it's a tricky-to-grow cell line. Are you going to get cancer drug sensitivities to all be the same? Probably not, if you're testing while the cell lines are growing at different densities.
People really should distrust that any individual data point from a particular biological model is going to generalize, especially for difficult biological models. That doesn't mean we should stop publishing, because if we do, we're not going to be able to figure out what's actually driving those differences.
Edit: but I agree most medical/biology/pharmacy papers are plagued with issues, mainly because the incentives are screwed up and there's more noise overall.
Of course, science itself isn't failing, and hopefully it isn't science itself that people are losing faith in.
Rather, performing proper science is depressingly difficult. So much that we would like to believe is unbiased can so easily be biased without our knowing it.
Actually, I'm glad for this lack of trust in ourselves. Self-doubt is the beginning of wisdom, so they say.
What would it even mean for “science” to have failed? Science is just testing your assumptions against reality to gain knowledge.
If you mean the academic institution of, uh, Big Science or whatever you’d call it—sure, it has failed in its current incarnation to impose the rigour for reproducibility, and incentive toward reproduction. But academic institutional cultures evolve in response to noticing problems like that. Like software evolves in response to discovering vulnerabilities. A given institutional culture (like a given piece of software) won’t have the same vulnerability twice; if Big Science is still having crises 1000 years from now, they will be new and different crises.
The thing is, even fallible science is better than non-science. The only thing that has consistently put understanding of the universe forward is science. I mean, if we don't trust the scientific method, what's the alternative? Bible?
> EDIT: If any rich Silicon Valley investor is reading this, please invest in companies trying to automate and standardize research lab work.
This is how one knows publicly funded research is fundamentally broken. Sloppy research is fueled by the push to publish, which is fueled by too many people climbing over each other for too few spots.
Is it a bureaucratic problem, perhaps? I don't want to recapitulate the many (and far better informed/experienced) discussions of this on HN in the past, but the economics of peer review seem really broken. There's a natural temptation for reviewers to rubber-stamp things both favorably and unfavorably depending on circumstances, but it doesn't seem efficient to have experienced scientists spending valuable think time reviewing things like basic methodology - naturally they're most interested in the new claim, and secondarily interested in their own academic reputation depending on how they're cited and so on.
One possible way out of this (which may have been tried and rejected already for all I know) could be to take up the approach of legal journals, which are usually edited by students rather than professors. There are downsides to this as well - a fetishization of formatting and a busywork demand for citations on even the most prosaic claims, not to mention a tendency to proliferation, but there's an endless supply of eager reviewers who are highly motivated to do good work to burnish their academic and subsequent practice credentials. Being an editor of the Harvard Law Review can get you into all sorts of high-powered jobs, for example - though so, depressingly, can putting your name on a ghost-writer's monument to your commercial vanity, so that shouldn't be the only reason to do it :-/
I agree with the necessity for automation, and maybe some sort of tokenization so that people can save time by being able to rely on trusted certifications. Remember when Creative Commons seemed like an idea whose time had come, and a lot of artwork was appearing with standardized icons that quickly summarized the scope of the legal rights being asserted over the work's copyright? After a great start that model seems to have run out of steam, perhaps because copyright as a political issue has been sidelined by more fundamental concerns - at any rate, I had anticipated seeing the CC 'branding' much more widely by now than I do. But similar shorthand for certifications of statistical rigor or procedural validity might help to make scientific publications more reliable, as well as accessible to lay readers.
No, in my opinion the problem is that all the low-hanging fruit in science has been picked, and our old way of doing science does not work anymore. 20 or 30 years ago it was "easy" for a single PhD student to discover something completely new and groundbreaking. Of course it still happens today, but 20-30 years ago it was far easier to get truly new results as a small group of researchers with a low budget.
In the very beginning of science you did not even need anything. Just looking at the stars and thinking about them was enough.
But nowadays? We have to build an LHC with several thousand people to do basic science! As a PhD student you have to spend three years just reading and learning "old science" in order to reach the current boundaries of a scientific area.
The current system of "we give this topic to a PhD student and they will figure out a solution in exactly three years" just does not work anymore. In order to advance science we need much larger and much more connected groups who grind away at a single problem together for 5-10 years to make truly groundbreaking scientific advancements.
Thanks for this succinct explanation. It bears many similarities to problems of politics and governance that I wrestle with.
I wonder to what extent crowd work can be effective for making real progress; I've been impressed by things like FoldIt and astronomical classifiers (as opposed to the BOINC distributed processing approach) but I don't know to what extent those result in real progress vs useful-but-limited stamp collection. I am very interested in the possibilities for gamification but it seems like there's a big gap between the empirical data collection/space mapping aspect and actual theoretical development of the kind done at the Santa Fe institute or the IAS.
While peer review has its problems, it's hardly the biggest problem and there are systemic issues that peer reviewers can't see.
E.g. one of the biggest issues in science is publication bias and the lack of publishing "negative" findings. As a reviewer, you don't see whether the person submitting the study has three other studies lying on their desk that they'll never publish.
Reproducibility is a major problem and certainly has levels to it, such as starting materials [1], details within the protocols and methods sections of papers, unaccounted-for variables such as room temperature, technique, and data interpretation, since raw data is often inaccessible.
Would a third-party observing company help? For example, a company that works like the UN: it visits the lab during the experiments and certifies that the lab follows the protocols.
Not OP, but I can give you an industry opinion (I design laboratory automation work-cells like theirs for a living :).
They're one of many core labs that offer similar "assay as a service" products. I like their vibe and their attempt at a differentiated offering.
Transcriptic's data capabilities and web experience may be enhanced compared to the industry standard, but overall their offering is pretty common tech for the industry. They make a big deal of their "cloud" capabilities. (Personally: it seems like a nice attempt but doesn't move the needle much.) The trouble with running core labs is that every user wants something different. It's pretty rare to be able to put together a set of experiments that different customers across the industry will want to use, and the optimum protocols/libraries themselves are constantly changing. Regarding Transcriptic in particular, it's unclear how much tweaking users can do to their protocols, which may rule it out as acceptable to many researchers.
Many larger companies, universities and hospitals get similar robotic work-cells custom designed for their research centers (and have been for 20+ years).
That all said, there are real barriers to automation in this space, most of which come down to researchers not knowing about it, not trusting it, or not being able to afford it. It is remarkable to me how many labs run on sneaker-net.
It’s mostly just branches of biology that have the worst problems. Medical research, specifically.
And the bloated costs mostly originate from the preservation of baseline ethical standards and safety practices applied to operating procedures.
Trying to squeeze bogus statistics out of in vivo blinded placebo trials has always been this brute-force hack of Russian roulette with terminal diseases, an effort to guesstimate someone's best hunch without knowing what's really happening, and to pray for something obvious to jump out of the woodwork. What a clueless hack. But then again, how else do you approach mysterious phenomena when left with nothing to go on but instinct and intuition?
Even more bogus is how we pin all our hopes on pills. How many pills can you really take? And with the entire world bathed in a bizarre cocktail of industrial chemicals, why is the only solution to medicate after the fact?
Wouldn’t it make sense to solve the modern problems we see, by controlling for the miasma of industrial chemical pollutants adulterating our bodies from cradle to grave? Or is that what’s wrong with science? That we pretend that everything we study is a spherical cow in a vacuum?
This really isn't that surprising and has been known for years. Lots of kids in grad school were doing experiments with cells at >200 passages that were obviously contaminated with mycoplasma. I think they knew it, but their HIV research had always been done that way. I'm guessing if they went back to lower passage / non-contaminated cell lines their results would change.
Similar thinking from the source [1]:
> Why does ATCC continue to distribute HeLa Contaminated Cell Lines?
> ATCC continues to distribute these cell lines, even though they have been shown to be contaminated with HeLa, because researchers need them for purposes beyond use as models for specific disease/original source tissue.
Beyond that, certain cell lines started out as contaminated [2]:
> In the earliest stocks available, the level of contamination was 0.6%.
I imagine that Step 0 of Materials & Methods in new papers going forward will be: "We verified that our cell line was made of intestinal cancer cells with PCR and genomic expression tests..."
This contamination has happened before with HeLa cells and the original "War on Cancer". It was an absolute disaster back then, and I'm aghast that the medical industry hasn't learned from this mistake. I actually am quite shocked and horrified that this has happened again. This means that many many human lifetimes of work are invalid.
> The scientists can carry out a genetic test before starting their research to detect misidentified cells. But that takes time and money. “The scientists I spoke to said that was the biggest problem,” says Halffman.
Would performing such tests be a business opportunity? A researcher wants to use a certain cell product, but first sends a sample to our testing facility to see if it's a quality product or a botched one.
This is overstating the issue in the present day. An immortalised cell line is after all a very convenient but very contrived model of a real system. For example, the greatest revolution in cancer treatment in the last 10 years has been immunotherapy, which was developed using more realistic models than 'cell line in a dish'.
Immunotherapy research uses cell lines as much as every other branch of microbiology. For example, the T2 cell line is lacking in a peptide transport protein called TAP, which makes it handy for studying the immune recognition of arbitrary peptides (not just those from the cell).
And if you want to study the effectiveness of any immunotherapy, it's very common to implant a mouse cell line like B16 in live mice and then see if the tumor regresses under different treatment conditions (e.g. with or without checkpoint blockade).
30,000 studies are a lot, but are only 0.2% of studies using cell lines, per my light reading of the paper. Moreover, some results will still generalize, as even if contaminated, they may still be cancerous cells of another lineage.
In some ways I think these kinds of things are more about having the time and resources to keep things organized. It is a constant struggle in labs staffed by people who may be expert biologists but are amateur database managers.
And real tumors are quite heterogeneous, so maybe experiments done with contaminated cell lines are more realistic than those done with pure cell lines.
This is what you always hear from biomed/psych/etc. It simply is not important to get details right. Misinterpret the results of your analysis (p-values), use the wrong cell lines, measure the wrong thing, etc. It rarely seems to affect the conclusions.
If anyone who points out errors is dismissed as being pedantic, it makes you wonder what the point of doing all that is. Just come up with an idea A, say "heads = conclusion A", "tails = conclusion not A", and flip a coin.
I think one of the things that is actually happening is that for many examples, while the quantitative answer changes, the qualitative conclusion isn't altered by the errors.
In graduate school, as part of a class on survival analysis, we subjected the same data set to increasingly sophisticated analysis techniques to account for all kinds of things.
Each time, the effect estimate changed.
At the end, while discussing them, the professor asked a very simple question: "Do any of these suggest that HAART is a bad idea?"
Similarly, during the Ebola epidemic, when everyone was fussing about the various forecasts missing the mark, etc., what they were actually predicting was "This is a serious crisis that needs international intervention".
I'm sure that HAART is a good idea sometimes and a bad idea other times. Also, this will change as time goes on (HIV mutates, demographics change, other treatments are available, etc)
I'm also sure that there are many things going on at any given moment that should count as "serious crisis that needs international intervention". The real question is whether it is a more serious crisis than other things currently happening.
To deal with both these issues you are going to need deeper understanding than "good idea vs bad idea" or "is crisis vs not crisis".
But the real problem with being unable to quantify your understanding is that you are left unable to make precise predictions, thus you can never perform any stringent tests. If all you can predict is something vague (eg, HAART will increase 5 year survival), there will be many ways to misinterpret the data in support of your explanation even if it is totally wrong.
The point is not that it will change over time - it was that in this setting with this information different methods can give you different answers, but that may not actually matter. If you're HIV+ in this country right now, you want HAART, whether I did some fancy marginal structural modeling or not.
> To deal with both these issues you are going to need deeper understanding than "good idea vs bad idea" or "is crisis vs not crisis".
This is not, actually, how the response to Ebola worked. It was very much "crisis vs. not crisis". And a huge amount of clinical decision making is "good idea vs. bad idea".
The suggestion was not that you're not able to quantify your understanding. The suggestion was there are errors that are possible to make that, while changing your effect estimate, do not change whether or not you do a thing.
To return to the HAART example, imagine you're an HIV+ patient and I've told you that a major study failed to control for time-varying confounding, and that upon re-analysis, instead of doubling your 5 year survival, it only increases it by 87%.
Or, for a form of "is this repeatable?" that I particularly despise, that it still doubles your chances of survival, but the p-value has gone from 0.047 to 0.062. Do you want to stop taking the drug?
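As a side note on the 0.047 vs. 0.062 comparison, a minimal sketch of how little the underlying evidence has to shift to cross the 0.05 line (the two-sided z-statistics here are just back-calculated from those p-values for illustration):

```python
from scipy.stats import norm

# z-statistics corresponding to two-sided p-values of 0.047 and 0.062.
z_047 = norm.ppf(1 - 0.047 / 2)   # ~1.99
z_062 = norm.ppf(1 - 0.062 / 2)   # ~1.87
print(f"z for p=0.047: {z_047:.2f}")
print(f"z for p=0.062: {z_062:.2f}")
# The two results differ by only ~0.12 standard errors -- far less than the
# noise you would expect between two replications of the same study.
```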
>"To return to the HAART example, imagine you're an HIV+ patient and I've told you that a major study failed to control for time-varying confounding, and that upon re-analysis, instead of doubling your 5 year survival, it only increases it by 87%.
Or, for a form of "is this repeatable?" that I particularly despise, that it still doubles your chances of survival, but the p-value has gone from 0.047 to 0.062.
Do you want to stop taking the drug?"
This obviously depends on the relative costs. Cost of side effects, buying the drugs, time going to treatment, etc. That all requires accurate quantification. But cost-benefit isn't even what I meant.
To begin with, if they can't nail down a quantifiable effect that is stable from study to study, who knows what is going on? Why would you have confidence in their estimates of effectiveness if they are inconsistent with one another?
"This obviously depends on the relative costs. Cost of side effects, buying the drugs, time going to treatment, etc."
I'm going to suggest if you're facing death from an AIDS-related illness, a relative risk of 2.00 vs. 1.87 will feel very, very similar to you.
"To begin with, if they can't nail down a quantifiable effect that is stable from study to study, who knows what is going on? Why would you have confidence in their estimates of effectiveness if they are inconsistent with one another?"
Define "stable" - because even if the effect of something is fixed in the same way a physical constant it, it's invariable sampled with error.
Again, if I give you five studies that suggest that HAART improves survival by:
100%, 102%, 87%, 94% and 96%
are you really going to suggest that we don't know that HAART improves survival?
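A minimal simulation of the "sampled with error" point: even with a fixed true relative risk of 2.0, repeated studies of the same size give estimates that bounce around, much like the five hypothetical numbers above (all parameters here are invented for illustration):

```python
import random

random.seed(1)

def simulate_study(n_per_arm=200, p_control=0.15, true_rr=2.0):
    """Simulate one two-arm study and return the estimated relative risk.

    Survival probability is p_control in the control arm and
    true_rr * p_control in the treated arm (hypothetical numbers).
    """
    p_treated = true_rr * p_control
    survived_control = sum(random.random() < p_control for _ in range(n_per_arm))
    survived_treated = sum(random.random() < p_treated for _ in range(n_per_arm))
    return (survived_treated / n_per_arm) / (survived_control / n_per_arm)

# Five "replications" of the same underlying effect, reported as % improvement.
improvements = [(simulate_study() - 1) * 100 for _ in range(5)]
print([f"{x:.0f}%" for x in improvements])   # scattered around 100%
```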
>"I'm going to suggest if you're facing death from an AIDS-related illness, a relative risk of 2.00 vs. 1.87 will feel very, very similar to you."
I at first thought we both understood there would be some uncertainty about these values no matter what, but that we were leaving that out for simplicity's sake. Roughly, I was assuming it is at least +/- 10-20%. The reason these values "feel very, very similar" is that I would not expect medical data to be able to distinguish between them.
>"Define "stable" - because even if the effect of something is fixed in the same way a physical constant it, it's invariable sampled with error."
Ok... so you are implicitly considering uncertainty.
>"Again, if I give you five studies that suggest that HAART improves survival by:
100%, 102%, 87%, 94% and 96%
are you really going to suggest that we don't know that HAART improves survival?"
Not enough info.
- Where did the uncertainty go?
- What methods were used to generate these values? Even basic strategies like blinding the people collecting/processing/analyzing the data are often still missing.
- What population/frame do those numbers refer to, and how much error can we expect from extrapolating for other situations in the future?
You need a reliable quantification; this hand-wavy "things are better/worse, significant/insignificant" approach is an awful idea.
Immortalised cell lines are a bad model, for many reasons. Anybody that has done cell culture knows this. Anyone that has tried to translate in vitro work to in vivo, or into actual clinical trials in people, knows this even more. The article is acting like the fact that cell lines are an even worse model than we thought is a massive deal, which it really isn't, in my opinion.
Huh. I'd take a bet that it's less than a dozen. I think it would be difficult to transfer it -- you'd have to have an OB-GYN who also did research and who didn't follow proper cleaning protocols between researching and doctoring.
Unlike Tasmanian devils, we don't bite each other's cervices as a way to say "hello".
And if the OB-GYN is making that kind of contamination error, the results of that are likely going to be washed out by HPV cross-contamination.
This is like asking how many people have been killed by the shell casing from an automatic weapon. It might not be zero, but we're talking about some pretty one-off, House M.D.-esque circumstances.
Having met a number of physicists and software engineers working in biological and medical research, they have their own, unique brands of fail in addition to the usual slate.
> In June 2007, all that changed. Ain attended the annual Endocrine Society meeting in Toronto, where Bryan Haugen, head of the endocrinology division at the University of Colorado School of Medicine, told Ain that several of his most popular cell lines were not actually thyroid cancer. One of Haugen’s researchers discovered that many thyroid cell lines their laboratory stocked and studied were either misidentified or contaminated by other cancer cells.
[...]
> But rampant contamination is not the shocker in this story. Ain retired all the lines; he never sent any of them out again. He also sent letters to 69 investigators in 14 countries who had received his lines. He heard back from just two.