One of the things that is latent in this article is that in the US you are supposed to have done a power analysis in order to justify the number of animals you are using in your study. Almost no one does this, and it is not surprising -- if you are doing cutting edge research you are in unknown unknown territory and any power analysis is likely to be no better than a guess. In a sense it is farcical that exploratory research needs to pretend that it is always successful.
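For concreteness, the power analysis being asked for usually amounts to something like the sketch below: a hypothetical two-group comparison using statsmodels, where the guessed effect size does all the work. The effect size, alpha, and target power here are my illustrative assumptions, not anything prescribed by the regulations.

```python
# Minimal power-analysis sketch for a two-group animal study (assumed
# two-sided t-test); the guessed Cohen's d is the weak link the comment
# above is pointing at.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

effect_size = 0.8   # guessed standardized effect size (Cohen's d)
alpha = 0.05        # two-sided type I error rate
power = 0.80        # desired probability of detecting a real effect

n_per_group = analysis.solve_power(effect_size=effect_size,
                                   alpha=alpha,
                                   power=power,
                                   alternative='two-sided')
print(f"animals needed per group: {n_per_group:.1f}")   # ~26

# Halving the guessed effect size roughly quadruples the requirement,
# which is why a bad guess makes the justification close to meaningless.
print(analysis.solve_power(effect_size=0.4, alpha=alpha, power=power))  # ~99
```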
Countless animal experiments are uninterpretable simply because they were poorly executed by first year graduate students. There is no other way to learn. Write them off as animals used for training if we want full accounting, but researchers' time is scarce enough as it is so asking them to publish uninterpretable results is a non-starter.
On the other hand, I think it is important to publish as many results as possible to avoid file drawer effects, etc. Data publishing might be one way around this, but at the moment few labs anywhere in any field have the know-how to publish raw data for negative results, even if it is just sticking files in a git repo and getting a DOI from Zenodo.
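As a rough idea of what the Zenodo route looks like, the sketch below follows Zenodo's REST deposition API; the token, filename, and metadata are placeholders rather than a tested pipeline, so treat it as an outline of the steps, not a recipe.

```python
# Rough sketch of depositing a raw data file on Zenodo to get a DOI.
# Endpoint names follow Zenodo's published deposition API; values are
# placeholders for illustration only.
import json
import requests

ACCESS_TOKEN = "..."   # personal token from Zenodo account settings
BASE = "https://zenodo.org/api/deposit/depositions"
params = {"access_token": ACCESS_TOKEN}

# 1. Create an empty deposition.
dep = requests.post(BASE, params=params, json={}).json()

# 2. Upload the raw data file into the deposition's file bucket.
with open("negative_results_raw.csv", "rb") as fp:
    requests.put(f"{dep['links']['bucket']}/negative_results_raw.csv",
                 data=fp, params=params)

# 3. Attach minimal metadata (title, type, description, creators are required).
metadata = {"metadata": {
    "title": "Raw data for an unpublished null result (pilot cohort)",
    "upload_type": "dataset",
    "description": "Negative/inconclusive pilot data, deposited for the record.",
    "creators": [{"name": "Lastname, Firstname"}],
}}
requests.put(f"{BASE}/{dep['id']}", params=params,
             data=json.dumps(metadata),
             headers={"Content-Type": "application/json"})

# 4. Publish; the response includes the minted DOI.
pub = requests.post(f"{BASE}/{dep['id']}/actions/publish", params=params).json()
print(pub["doi"])
```

The even lower-effort route is the GitHub integration: enable the repository on Zenodo and cut a tagged release, and a DOI is minted for the archived snapshot automatically.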
"Countless animal experiments are uninterpretable simply because they were poorly executed by first year graduate students. There is no other way to learn. Write them off as animals used for training if we want full accounting, but researchers' time is scarce enough as it is so asking them to publish uninterpretable results is a non-starter."
I'm sorry, but this simply isn't true. During my master's I did in vivo surgery on 30 rodents and tested how many the procedure was successful in (10). I had to record how many the procedure was successful in and write this up in my methods. I'm not sure how you see keeping a record of this as something so time-consuming that you wouldn't bother to do it.
In fact I'm pretty sure (at least for the UK) that keeping records like this is essential under the Home Office rules for animal safety in scientific procedures. These rules are there to support the 3Rs (replacement, reduction, refinement). This guidance aims to cut down on the use of animals in research, or at least to ensure that quality experiments are being run. If you don't measure, how can you improve?
I think a common misunderstanding in this thread, and something not explained in the article, is the difference between internal accounting of animals and publication-level reporting. Labs track animal usage internally in the way you mention, and typically share that information with the university vet, IACUC, etc., but that is simply not useful information to publish.
Differing opinions in this thread aside, I think the article has not done a good job of explaining the realities of the process, so anyone here who is not familiar with how animal work gets done is coming in with a purist misunderstanding. Everything is reported (in the US), just not in publications.
Looking at publications alone for animal accounting would be like if I looked at the checking accounts for everyone in a country and wondered where all the money went. Of course it's in savings, investments, cash under the bed... but I only looked in one place. I cannot conclude money is unaccounted for when my search was incomplete by design.
Yes. I did not mention this in my original comment, but internal accounting and oversight by IACUCs is pretty good. They know within a couple of cages how many rodents are on campus at any given time (modulo reproduction, etc.). If we want the public record to be able to account for this, it would likely have to be in another venue, because animals whose data does not go directly into a paper could be "involved" in the exploratory work for tens of papers. How do you prevent double counting, and how do you know which paper an animal whose data was not used should be reported in? Mostly you don't. The IACUC has it, it is buried in lab notebooks, etc., and if someone needs to be disciplined for misuse, that is on the IACUC.
This is different from the UK, where the 3Rs are much more strongly enforced, to the point where I remember asking a question back in 2015 to the then head of UK animal research about reproducibility, and getting the answer that he wouldn't approve the use of animals just to replicate an already completed study. In the US, in some fields, animals will be used just to replicate a result because another lab needs to know for sure that it is real before expending even more animals on a potentially useless follow-up study. The way the numbers play out in practice, we would be much better off doubling, if not 10xing, the number of animals used in initial publications to avoid the 10x replication studies that will be done inside other labs to make sure that the result is real. Of course, if we did this, then the publication rate in many fields would be cut in half, or decreased by an order of magnitude.
Yes - animals absolutely do not go missing within university tracking systems with IACUC oversight.
I disagree with the parents that the remaining animals are even primarily used for training purposes. There are countless ways that experiments can fail with uninteresting results that do not count as null results.
> In a sense it is farcical that exploratory research needs to pretend that it is always successful.
Did you interpret the article as saying that exploratory research must be successful? I read the opposite, that "unsuccessful" (defined here by me as negative or inconclusive) research should be published more. What am I missing?
The original article is saying that more unsuccessful research should be published, and I agree. The sentence quoted in the gp is a bit hard to parse, but it is just another way of saying that unsuccessful research shouldn't be hidden and ignored. The context for the sentence is a bit more from the funding side, where I usually only half-jokingly say "99% of all funded grants are successful!", which is the complete opposite of reality, where 99% of experiments fail.
"Inconclusive" can mean several things. You can get inconclusive results that don't strongly support or refute a particular hypothesis. These should be published. However, a lot of experiments end with "no/bad data" and publishing that is, IMO, often a waste of time.
Suppose you want to see how different types of neurons are distributed in the brain. You hypothesize that two specific subtypes of neurons are always found in close proximity in one condition (brain area, developmental stage, disease vs health, etc), but not another. There are a lot of ways to do this, so you pick one and start.
If things go well, your antibodies selectively label each neuron type. You count the pairs of neurons that are neighbors (or not) in condition A, those that are neighbors (or not) in condition B, and do some stats. If you get this far, I agree it ought to be possible to publish something, regardless of whether the proportions are wildly different, exactly the same, or somewhere in between.
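The "do some stats" step in that best case is usually nothing fancier than a contingency-table comparison; here is a toy sketch with invented counts (and, for simplicity, pairs pooled naively across animals, which a real analysis would need to handle more carefully).

```python
# Toy version of the analysis: compare the proportion of neighboring
# neuron pairs between two conditions with a 2x2 table. Counts are
# invented for illustration only.
from scipy.stats import fisher_exact

#               neighbors  not neighbors
condition_a = [      42,        158]
condition_b = [      17,        183]

odds_ratio, p_value = fisher_exact([condition_a, condition_b],
                                   alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4g}")
```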
However, things often go wrong. These protocols have a lot of free parameters and it's often not feasible to calculate the best ones from first principles. As a result, you try something and notice that the result is wildly implausible: maybe everything is labelled as one of your cell types, even stuff that isn't neurons. You tweak the protocol, and now nothing is labelled. This is also implausible--the tissue is from a normal animal--so you make some more adjustments and try again. Perhaps you even change techniques altogether and use FISH or a viral vector instead of immunohistochemistry.
The final protocol (if successful) is always included in a paper, but these intermediate failures are usually not and I'm not sure it makes sense to. Suppose the solution was to use a better antibody from a different company. The pilot experiments where we varied the incubation time, sample prep, etc using a dud antibody are fantastically uninteresting. Furthermore, people often change multiple parameters at the same time; going back and convincingly demonstrating which one "matters" would require a lot more work for a fairly limited payoff.
Finally, people also adapt their research question based on the data they can obtain. Maybe you can reliably label one type of neuron, but not the other, so you decide to focus on how those cells' locations vary during development. If so, it'd be weird to report a bunch of failures of an unrelated technique in the resulting paper.
There are many ways for something to be unsuccessful without being interesting. We do not need to reduce the signal to noise ratio of scientific publication further by requiring all exploratory research efforts to be published.
Yes, it's important for work to see the light of day, but we do not want to disincentivise risky or exploratory work.