"The demand for sexy results, combined with indifferent follow-up, means that billions of dollars in worldwide resources devoted to finding and developing remedies for the diseases that afflict us all is being thrown down a rathole. NIH and the rest of the scientific community are just now waking up to the realization that science has lost its way, and it may take years to get back on the right path."
The institution of science is undergoing a catastrophic decline. The reason behind this is simple: it is no longer a growing economy. Public funding for science is frozen or being cut, private R&D labs are shuttering their doors, and companies are increasingly concerned with quarterly results at the expense of long term research.
And why should it be otherwise? Science has never paid off as a logical financial investment. It is the riskiest of gambles by definition, requiring inordinate expenditures of time and resources in the present for a chance at some distant breakthrough decades or even centuries in the future. Institutional science is not an economically sound choice in the best of times, let alone during the current span of never-ending recessions.
The truth is, science is a creative pursuit much like the arts. Like the creation of literary masterpieces or profound paintings, it has never made economic sense in the present. Only afterwards, once the impact can be seen, do we understand its significance. And that is why it will always be worth pursuing.
The reality is that, increasingly, we live in a society that does not understand this philosophy of life. People only care about how they will survive tomorrow, and who can blame them, as the world economy gets ever more competitive and cut-throat.
Increasingly, it has become clear that our society does not reflect one designed with its own best interests at heart. Why this is, how it happened, and how we can change it, will be the greatest challenge of our lifetimes.
> Public funding for science is frozen or being cut, private R&D labs are shuttering their doors, and companies are increasingly concerned with quarterly results at the expense of long term research.
It's ironic that Silicon Valley owes most of its existence to investments by the DoD and ARPA. Most of the technology we use in the electronics industry today would not exist without all the public money that was poured into semiconductor research. Slashing public funding, especially of fundamental research, is probably one of the worst things we could do for our economy's future.
It's OK, the Silicon Valley companies and other private sector institutions that have benefited so much from investment by the DoD and ARPA are ready to address any investment shortfall to the sciences, right?
I mean, investing in a startup that has a value proposition based on real scientific research is the same as investing in the research in the first place. Am I right?
> investing in a startup that has a value proposition based on real scientific research is the same as investing in the research in the first place
Not necessarily. If you invest in science, you're investing in the lab producing publishable results, without the constraint of bringing a product to market. The two don't necessarily overlap.
The semiconductor material used by Bardeen, Brattain, and Shockley for their Nobel Prize work at Bell Labs was developed by researchers at Purdue University under a grant funded by the National Defense Research Committee. Their developments were hardly independent of publicly funded research.
If you want to play this game, you can go back even further and find that they built on work done by privately funded individuals during the Victorian Era and during the Enlightenment. Go back even further and you'll find that much (but by no means all) of the wealth that funded that was acquired thanks to Feudalism.
I therefore credit feudalism for the creation of the transistor.
You seem to have misunderstood the comment (the sibling comments should help explain, or maybe some light reading about the invention of the transistor).
I have no idea what you are getting at. I am familiar with the invention of the transistor, and understand the comment I was replying to completely. Furthermore, there are no sibling comments to my comment....
What I am guessing went wrong here is that you are unfamiliar with sarcasm, so I'll help you out: I don't actually credit Feudalism with the invention of the transistor.
Don't bother. HN is libertarian-leaning and thus will defend military spending and fight any criticism of the US military or its massive spending to the death.
I'm not even going to go into how a lot of that spending, if freed up, would go toward endeavors that are not military related, leading to a parallel timeline we can never know. Imagine NASA with 10 or 50x the budget. Or NSF with 10x the budget, etc. We'd probably be typing this on a moonbase or in a cottage on Alpha Centauri's earth-like planet.
Instead we fawn over the peanuts that fall out of the elephant's mouth and praise its generosity for feeding the hungry.
I think you may have responded to the wrong person. I'm just making a point that who you credit with a discovery can change many times if you are willing to point at the owner of the shoulders that the inventor was standing on.
Transistors discovered by a private lab? A private lab with government funding? A private lab building on work done by individuals such as Faraday? Individuals who in many cases received government funding in the UK? Individuals who were building on work by Benjamin Franklin, a self-made and self-funded man? Benjamin Franklin, who doubtlessly was enabled by early work on the scientific method itself by Roger Bacon? Roger Bacon, who was supported by the Catholic Church? Roger Bacon, who built on work of earlier Muslim scholars?
If we want to play the "who gets credit" game, we need to decide beforehand how many times we are going to go down the "who funded who" tree, and "who researched the prerequisites" tree.
I'm not talking about politics; I am pointing out that you people are talking past each other because you all have different ideas of how to assign credit.
Right, my point is that a lot of people here are invested in the idea of "military solves all" and will try to disingenuously tie all innovations to military or defense financing.
I think what he meant was NOT FOR PROFIT, a spirit that every single one of those people you mentioned, all the way back to feudalism, operated under.
It's just that now all the low-hanging fruit is gone and you need million-dollar labs instead of an apple and a notebook.
That is debatable. It's kind of a stretch to call AT&T Bell Labs 'privately funded', as AT&T was a government-sanctioned monopoly. Technically it was privately funded, but not through a working market system. Kind of like the other great industrial pure research labs (MS, IBM, Xerox).
The transistor development was not done under government direction or a government contract to do the work. It was done by a private company choosing to invest their own funds into it.
>> ...companies are increasingly concerned with quarterly results at the expense of long term research... And why should it be otherwise? Science has never paid off as a logical financial investment.
The problem is that this article generalizes "science" to the life sciences. Biology and medical experiments take a long time, are very expensive and reproducing results can be very difficult. Of course there's more pressure for scientists to produce results and you'll see fishier papers to justify their big grants.
Research can be incredibly profitable. What about Bell Labs? 3M? The technology from your iPhone wouldn't have been possible without Apple's big R&D department. There are many examples of successful (and profitable) R&D departments. Heck, Exxon's R&D arm comes out with many innovations a year to make pumping oil out of the ground easier. It just isn't as visible (or linkbait-y) as that new gene they found.
>> "[science] has never made economic sense in the present"
Life sciences don't. It takes decades for new drugs to come out sometimes. So yes, there are a limited number of people who do this. But electronics? Tech? Mechanical/Aerospace engineering? SpaceX, Intel and others are constantly improving.
>> "Increasingly, it has become clear that our society does not reflect one designed with its own best interests at heart. Why this is, how it happened, and how we can change it, will be the greatest challenge of our lifetimes."
Bullshit. There's financial incentive to go with the flow and do conventional things, but there's also a lot of financial incentive to innovate. However, innovation is hard when there's a higher barrier to entry. Information is the industry that has seen the most innovation in the last decade because anyone and their mother can make software.
The real problem is making the barrier to entry lower for other industries. To make manufacturing, biomedical research and others easier. And guess what? There's economic incentive to do that, too! People who make software development easier are getting tons of money, and 3D printing, which will completely change the way manufacturing is done, will probably create some billionaires in the next decade.
The system is fine! In the last decade, we got portable access to all of human information, the ability to always know where we are, and advancements in electronics, cars and all sorts of other incredible things. Give us another couple of decades and we'll make things that will blow the mind of anyone from the present.
> The technology from your iPhone wouldn't have been possible without Apple's big R&D department.
It's interesting that you point to Apple. Compared to companies with a more old-fashioned approach to R&D (Xerox, Bell Labs, IBM, Microsoft), Apple spends relatively little on R&D, and does practically nothing of genuine scientific value. I would hold up Apple as an example of the profitable "focus research on specific products, buy research expertise rather than cultivate it in the firm, forget science" approach to R&D.
Apple is the top spender on R&D, as a percentage of its revenue. Second is Google, third Samsung, fourth Amazon, fifth 3M. [1]
Xerox had about 10 years of glorious moon-shot type research. IBM is notoriously fickle about spending on R&D.
The only exception is Bell Labs, which succeeded due to a combination of a number of factors [2]. There wasn't any grand vision underlying it, just the interest to grab the best and the brightest, during a time when there was far less silo-ing and over-specialization than today.
Please look again at the figures in the link you provide: Apple is larger than each of the companies I mentioned (very slightly larger than Samsung by 2012 sales; I'm not sure why 3M is in your list - they are 1/4 the size of Apple in a far lower margin industry), and spends less in absolute terms on R&D than each, and far less as a percentage of revenue.
IBM has an excellent track record of scientific contributions, has an excellent research institute, and I gather is good about things like giving its employees time to write scientific articles and providing support to employees who wish to pursue PhDs. Xerox PARC still exists and still does good, publicly visible research. I don't see what "grand vision" has to do with anything.
Note, the rankings are:
> as a percentage of its revenue.
Also, regarding 3M, I am not sure what you mean by 'far lower margin industry'. 3M is a conglomerate. It's like GE, in some sense- doesn't play in any single industry, but in several tens of industries.
I think that ranking cannot be by %age R&D spend. Taking the R&D expenditure from your link, and turnover from the accounts for 2012, we have (money values in $bn):
Yeah, Apple doesn't really invent stuff, they generally just come out with the same incremental technology improvements as every other company. They're mostly just good at combining components into aesthetically pleasing products, and then claiming to have invented every feature from scratch.
It's like if Mercedes-Benz claimed "only we could come up with the idea of a steering wheel, seat belts and a motor" when announcing their new model vehicles every year.
I hate to say this, but I regret that I have but one upvote to give.
I can't believe the old "science isn't profitable" canard is being trotted out to full effect on HN. The fact of the matter is that science is eminently profitable, and this is so obviously true that it takes nothing short of a true believer in the benevolence of the government as the superior investment entity and the free market as bogeyman to believe otherwise.
I mean this isn't /r/politics, is it?
The desire for profit is the single largest driver of scientific and technological innovation, period. Yes, NASA and DARPA have created some amazing things, but what about the hundreds of billions which are spent by privately held entities' R&D departments every year? Does all of that research yield nothing of value? The pharmaceutical advances? The electronic advances? The advances in automobiles, computers, housing, information access, and, now, space travel, as well as virtually every other product on the planet - do those mean nothing?
Of course they do. Private funding for research constitutes the lion's share of research funding, and that private research leads to the vast majority of scientific and technological advances around the world.
The line that science would not happen absent government funding is so demonstrably false that to even make such a claim is tantamount to shouting to the world that you are a first-class idiot.
> The line that science would not happen absent government funding is so demonstrably false that to even make such a claim is tantamount to shouting to the world that you are a first-class idiot.
The private sector doesn't advance science. Also: you are stupid for claiming so. Maybe even first-class stupid.
See where that exchange gets us? Now cite your data on the majority of scientific advances being privately funded, since you claim to be able to demonstrate it. In particular, I'm interested in basic-science advances, of the kind that CERN and Stephen Hawking and etc. produce. Who in the private sector is producing those?
Peter Mitchell and his former research colleague, Jennifer Moyle, founded a charitable company, known as Glynn Research Ltd., to promote fundamental biological research at Glynn House, and they embarked on a programme of research on chemiosmotic reactions and reaction systems. In 1978 he won the Nobel Prize in Chemistry "for his contribution to the understanding of biological energy transfer through the formulation of the chemiosmotic theory."
So, yes, the private sector does advance science, even basic science with no profit value. To claim otherwise is first-class stupid and also ignorance of history.
Josiah Gibbs, Irving Langmuir, and Robert Millikan were all American scientists who ran their research off of largely private funding (some for-profit, as in the case of Langmuir, who did both basic and applied science under the auspices of GE; the others at predominantly non-profit educational institutes). These people are not insignificant: Gibbs rewrote the book on thermodynamics; Langmuir figured out that argon should fill lightbulbs (applied), but in basic science also invented the concept of plasma voltage, made observations describing ocean circulation, and conceived of molecular monolayers; Millikan discovered the charge of the electron. I am sure I am missing other American scientists who did their research with little or no help from public sources.
Totally disagree. Carnegie Institutions, Wellcome Trust, etc. are all 'private sector' and dispense millions of dollars of grants for fundamental research.
You probably mean, 'for-profit private sector'. But then, you disregard the several Nobel prizes that Bell Labs received, the intangible impact of IBM's Deep Blue and other programs on computing, etc. If solving a basic-science problem can result in several million dollars of revenue, you can be sure that a company will throw money and man-power at it. There are several examples even among the non-Bell Labs, non-IBM, non-Xerox companies: Google's search algorithms, pharma companies' Lipitor, Viagra, etc., Intel and AMD's industry-defining miniaturizations, 3M's optical films, etc. etc.
You don't think that private companies regularly form hypotheses, test those hypotheses, and draw conclusions from their results? Applied science is no less worthy of the title of science than is pure science, and the private sector is applied science writ large.
As for pure science and government funding, I'm not against it, and promoting one over the other is not my intention, as they both have their roles to play. I agree that pure science would not occur to the extent that it does now absent government funding. What I disagree with is that science would not happen at all were that the case. That's a flatly stupid thing to say, and I'm a little amused that you would make that claim just before calling me stupid.
In a world where the rate of technological advance is larger than it has been at any other time in the history of the world (and is still increasing), it seems obvious that both are playing their roles rather well. The system is functioning. Science is not dead. Hell, it's not even sick.
First off, I'm pretty sure he was trying to make a point about the inefficacy of name-calling in such an argument, not literally calling you stupid.
Second, Intel advances very specific, safe-bet science like shrinking the semiconductor. As plenty of other people have noted, the semiconductor was a public-funded invention and farmed off to Japan before it could be made "profitable" by American corporations. Heck, early American business laughed transistors off the continent before the value proposition was discovered.
Also, as long as I'm commenting here, I'll note that science is not dead, but I believe that our ability to consume information has tainted our ability to understand true science. To me, the quintessential American scientist of the 20th century was Richard Feynman. His Cargo Cult Science [1] essay, which pops up on HN from time to time, explains what's happening best: people who claim to follow science are actually practicing some sort of modern-day witch doctor magic. They wave their hands and claim the data supports their wild discovery and someone bites. Suddenly it takes 10,000 hours to become a master of something "on average."
As far as I know, the early ICs were much less reliable than circuits made from discrete components. It's the improvements made by private companies like Fairchild and others that made them practical. There's really a long path from an invention to its practical application, and the world benefited greatly from the improvements made by companies.
On the other hand, scientific research is an example of a positive externality. According to economic theory, the market underproduces this kind of good. There are numerous measures to correct this, including patents and public funding of science. We shouldn't dismiss private companies or public funding, as both have positive contributions to science and are necessary for science to operate effectively.
> people who claim to follow science, are actually practicing some sort of modern-day witch doctor magic.
Maybe this is just phrased poorly. In that essay Feynman is saying some people (namely advocates of pseudoscience) are invoking "science" to give credence to their claims but are abandoning scientific rigor; this is what he calls cargo cult science. Not all people who claim to follow science.
Fellatio by fruit bats prolongs copulation time. (PLoS One) [1]
Seriously? The greater issue here is modern day "scientists" or people who are interested in and consider themselves scientifically-minded, reading something like that and filing it away as new knowledge and not questioning the veracity of things.
As the other child to this comment noted, the problem that Feynman outlines is that no one is doing these tests over again to re-test hypotheses under new conditions or with new devices. The results get published and suddenly it's taken as Truth with a capital T.
"Because there are no facts, there is no truth
Just data to be manipulated
I can get any result you like" -- Don Henley [2]
If that makes you uncomfortable, I suggest you try out some form of theism. Because if tomorrow I fall through my floor when getting out of bed, all we can do is re-evaluate our knowledge based on new data. And there's probably going to be a lot of new data.
I don't even know what you're trying to illustrate with those links. My point was not that bad science doesn't exist, it certainly does. But Feynman did not have this nihilistic view of knowledge, he believed very firmly in the ability of science to deduce truths about the physical world and that there is objective truth.
Sorry to dwell on this, but this is not a nihilistic view, this is what's called the Pragmatic view of truth and was expounded by 20th Century Chicago School philosophers like John Dewey. Feynman was oft quoted as being a pragmatist, and a key tenet of pragmatism was embodied in one of his quotes:
"We never are definitely right, we can only be sure we are wrong."
In this fashion, we never hold objective truth in the Platonic sense, rather we accumulate knowledge that explains how the world works.
Does fellatio in fruit bats improve sexual reproduction? One study suggests it does. The problem we face now is that a substantial portion of people, including many very well educated folks who would claim to live scientifically-based lives, say "That's amazing!" and run and tell their friends this hilarious new "fact" they've learned.
Unfortunately there are plenty of these new "facts" people learn that are much more than novelty and lead to public policy or destructive dietary changes cough fructose cough.
We could all stand to be a little more skeptical, pragmatic and curious in our lives.
You should reread the essay. He then goes on to mention how mainstream physics experiments have failed to do basic things like "repeat an old experiment with the new apparatus" and are therefore careening towards pseudoscience.
>> Private funding for research constitutes the lion's share of research funding, and that private research leads to the vast majority of scientific and technological advances around the world.
Please reveal your source for this belief.
>>every other product on the planet - do those mean nothing?
Breakthroughs are expensive and do not conform to our short-term business cycles, as they rarely result in products right away. Trivial and immaterial incrementalism, by contrast, conforms perfectly to our business cycle.
This discussion is devolving into pointless bickering about where you draw the line between science and technology.
If we stop being fussy about the definitions, it's pretty clear that some research/engineering is profitable in the short term and some is not. A lot of the work that is not profitable in the short term has potential long term benefits that are very important.
Of course science is profitable, and corporations spend a lot of money on and make a lot of money from research. That doesn't change the fact that a lot of current technology would probably not exist if it hadn't been funded by governments in the first place. Off the top of my head: GPS, the Internet.
Technology is a progression of ideas. GPS and the internet were very low-hanging fruit on the tree. The government was just the entity which was the first to reach up and grab it. If they had not, someone else almost surely would have invented GPS and the ideas that underlie the internet.
We could construct hardly any of the infrastructure on which our developed economies are built without science: our energy systems, our food production systems, our communication systems.
Science has enabled us to separate fact from fiction, and hence develop theories that allow us to manipulate our environment in sophisticated ways.
I don't see how you can argue that 'science' as a whole hasn't paid off as a 'logical financial investment'. That claim is so challenging to my worldview that it's hard for me to engage with.
For a start, what might we have invested in instead that might have had a higher rate of return?
You missed his point. Science pays off in the long term, but in the short term it is a pretty bad investment. In the current economy we favour short-term gain, which is why science is in decline.
R&D is a bit different than science. Usually it has a time horizon of 3-10 years to commercialization, sometimes 10-20 at the high end. While much of science is in the 10-100 range. The 20-year patent window is crucially important for industry R&D. The timeframe can be extended a bit by keeping things as trade secrets, if you're very careful, but it gets trickier. Unpredictable and non-application-specific research also tends to be disfavored: improvement on a chemical process is one thing, since it has a clear path to application in a specific area that the company does business in. But new basic physics results are not favored, since it's not clear whether they will lead to profitable applications at all; and even if there was some way of knowing they would, those applications might not be easy for the company to take advantage of if they don't align with its business.
You couldn't directly build a product on Einstein's major results, for example, certainly not within the time window of patent protection (if they were even hypothetically patentable), but they were hugely important anyway. Same with much of the mid-20th-century materials-science work that is now proving useful to chip manufacturers: the tech industry didn't fund that research in the 1950s-70s, because they didn't know at the time which physics results would be useful to them in 2010s chip engineering. But they're definitely using them now! It's important to the tech industry that this pipeline of not-sure-where-this-will-be-applied basic research exists, because it's hard to do things like improving manufacturing processes unless there is an existing understanding of how physics and materials work in the first place. But the case for them funding it directly is weaker, because it tends to have the property that advancing the general level of knowledge benefits everyone, not only you, and you don't know decades in advance which specific knowledge you'll eventually need, anyway.
>Science pays itself off in the long term but in the short term it is a pretty bad investment.
Respectfully, that statement doesn't make sense. (Bear with me.)
Whether an investment is good or bad is orthogonal to whether the returns manifest in the short or long term:
any calculation of investment return will discount value that we have to wait for.
Maybe you mean that the payoffs don't come in the short term. But, at a societal level, so what? As long as the expected payoff (appropriately discounted) is large enough, that's what matters.
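To make the discounting point concrete, here is a minimal Python sketch with invented numbers (the payoffs, horizons, and the 5% rate are assumptions for illustration, not data about actual research): a large payoff forty years out can still be worth more today than a small payoff three years out.

```python
# Minimal sketch with made-up numbers: discounting does penalize distant payoffs,
# but a sufficiently large long-term payoff still beats a small short-term one.

def present_value(payoff, years, discount_rate):
    """Value today of a payoff received `years` from now."""
    return payoff / (1 + discount_rate) ** years

rate = 0.05  # assumed 5% annual discount rate

quick_win = present_value(payoff=2.0, years=3, discount_rate=rate)        # small, near-term return
basic_science = present_value(payoff=50.0, years=40, discount_rate=rate)  # large, distant return

print(f"Quick win, discounted:     {quick_win:.2f}")      # roughly 1.73
print(f"Basic science, discounted: {basic_science:.2f}")  # roughly 7.10
# Even after 40 years of discounting, the larger payoff dominates, which is the
# sense in which "long term" does not automatically mean "bad investment".
```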
That is my objection to FD3SA's comment that "Science has never paid off as a logical financial investment."
and my objection to FD3SA's comparison to art.
Just because science's financial payoff is long term does not mean it isn't a good investment. To confuse the two issues is illogical, in an important way - which is what I'm trying to object to in FD3SA's comment.
I'd accept the argument that it is difficult to capture privately the return from fundamental science. That might be a reason why much science has been government funded. But, again, that's a different issue.
Governments should continue putting money into science because it is a good investment - it has a high expected return, over the long term.
And, in fairness, substantial public money, globally, is still invested on this basis.
>In the current economy we favour short term gain hence why science is in decline.
How our economy is managed in the short term is a different issue from whether the long term expected return of science is positive.
Maybe you mean that the way our economy is currently managed, it doesn't make sense for private actors to do science. But that's a totally separate issue from whether the societal return is positive. Almost regardless of how the economy is structured in the present, if science has a high enough long-term return, it's still a good investment at a societal level.
Someone could argue that funding art has not produced meaningful financial gain - if they were very 'practically oriented' (for want of a better term) - but the same argument would not apply to science which has produced huge returns.
Even today, there is a massive delta between the funding science and arts gets - find me the arts' equivalent of NASA or CERN - because the long term return of science is recognised.
Still, we favour the short term over the long term. First, because the future is uncertain, and long-term predictions are less reliable. That basically introduces a discount rate, which any long-term investment has to overcome.
Second, there are other specific mechanisms that favour the short term over the long term. For private companies, it's the desire of shareholders for a fast return on investment. They want their dividends now, not in 10 or 20 years from now. We only live so long, after all (for now). For public funding, governments disfavour anything further out than the next election. That spurs decisions that may be rational for egotistical individuals, but which are nuts at a societal level.
If our society really were rational, it would fund basic anti-ageing research more, and basic AGI and FAI research much more. Not to mention cryonics. But what would you expect? Most people are sufficiently nuts to still believe in the supernatural anyway.
Pssst! Try to keep it a little more secret that you joined the LessWrong Cabal for Taking Over the World! And it's your turn to bring snacks to the next meeting. The password will be, "My birthday is on Schelling Day."
Don't worry, you'll learn to speak Normal again someday.
"science is a creative pursuit much like the arts"
While surprising, this statement gets at something missing in science today. Until relatively recently, basic science was not dominated by a big-institution monoculture but by curious and creative individuals working in a great variety of situations, often in near isolation.
I wonder if much of the effort to institutionalize and "professionalize" science has been counterproductive. I can't imagine most of the great figures in history of science thriving in the current environment. Much like healthcare is now directed by insurers, science seems to be increasingly directed by bureaucrats. Should we be surprised by the disappointing results and soaring costs?
I know this is a bit of a rant and I know times have changed. But perhaps something similar to the open-source revolution in software can get science back to its roots.
>Much like healthcare is now directed by insurers, science seems to be increasingly directed by bureaucrats. Should we be surprised by the disappointing results and soaring costs?
I think you have really hit the nail on the head with this one.
> The truth is, science is a creative pursuit much like the arts. Like the creation of literary masterpieces or profound paintings, it has never made economic sense in the present. Only afterwards, once the impact can be seen, do we understand its significance. And that is why it will always be worth pursuing.
Are you serious? It's obvious that applying science is profitable for every industry that uses it. The life sciences are perhaps an exception rather than the rule, because of the complexity of the issues at hand and the regulatory framework enforcing conservative approaches (we still categorize cancers by "location" rather than protein tracers, which makes no sense at all scientifically). All the R&D going on at Google, Apple, Amazon and other companies involved in new technologies is driving innovation forward and providing very tangible returns. The same goes for all the applied science done in the automobile industry and in the aviation industry with materials science, and I could go on and on, since there is a never-ending flow of examples.
"Public funding for science is frozen or being cut, private R&D labs are shuttering their doors, and companies are increasingly concerned with quarterly results at the expense of long term research."
It also has a lot to do with the Bayh-Dole Act and the privatization of research. For-profit research centers are one of the fastest-growing areas of the economy. While more public funding would certainly help, the real issue is that under Reagan science became privatized. If there were less research but it were still high quality, we'd be much better off.
>the real issue is that under Reagan science became privatized.
and science in America in the 19th century was funded how? Because we sure didn't have the NIH, NSF, NIST, DOE, DARPA, or any of these organizations then.
Even into the 20th century: For example, most of Irving Langmuir's research (in both applied and basic science) was conducted privately, long before Reagan. Same goes for luminaries such as Gilbert Lewis, Josiah Gibbs, Robert Millikan, etc.
There's nothing wrong with criticizing science, but the reaction to The Economist article -- which itself was a bit too breathless for comfort -- is heading rapidly into tiny-green-football-linkbait territory.
The scientific funding and publication system has problems that deserve scrutiny, but science itself is far more rational than nearly any large, human-maintained system I can think of.
When we resort to hyperbole like "science has lost its way", we give a group of vocal, clueless idiots more power to undermine the most consistently productive engine for progress that humanity has ever devised. So let's talk rationally about the problems, but don't throw the baby out with the bathwater.
"Science has lost its way" is usually a statement I see trotted out by people in the media, who think that the quality of scientific journalism by the media, represents the quality of the actual science it fails at reporting on accurately.
"Scientific journalism has lost its way" would be more accurate were it not for the fact it clearly never had one.
First of all, criticizing science is like criticizing democracy. Yes, it's flawed, but still far better than anything else we've tried so far!
If you look at the typical application for funding, you'll see questions that basically prod you to explain why your research/students/etc. are exceptional/revolutionary/ground-breaking. Everything must look like a Nobel prize waiting to happen if you're going to have a chance at beating out everyone else making the same application (and exaggerations). It's utterly ridiculous! It's as if thousands of guitarists were auditioning at the same time and, in an effort to be heard, each has cranked their amp to 11. The result is a cacophony where even each individual sounds awful because of the distortion. If everyone dialed it down to 5 things would be bearable, but there's always someone willing to nudge it up to 6 or 7...
A nice long list of high profile publications is great to hype when your amp is set to 11. If you have published many papers in high impact factor journals (again, often by inflating the significance of your work), you must be worth funding!
Perhaps scientific funding needs to be awarded in a manner that is more... scientific. Heck, perhaps funding agencies should reserve a certain percentage of their funding specifically for reproduction of results. Currently, if you apply for a grant to check other people's work, people doing original research will win absolutely every single time. Unfortunately, the preference for original research goes right to the very top of governments. Politicians want brilliant Nobel Prize winners, not competent fact-checkers.
One idea to improve the state of things is to require graduate students to verify some number of external studies. In addition to helping with the problem of not enough review, it would make an excellent practical test for doctoral candidates.
It wouldn't work for every field and area, but it could work for a significant subset of research.
> require graduate students to verify some number of external studies.
That's how Reinhart and Rogoff's “Growth in a Time of Debt” spreadsheet error was finally unearthed, unfortunately that was after it had already been instituted internationally as public policy, but hey, you can't have everything.
As a PhD student in CS, this is an incredibly good idea---even if it stops short of verification and stays somewhere at "reproduce". It can be a lot of work to reproduce a single publication, and often requires very careful reading and attention to detail.
I've done it a few times myself (partially because my work requires it and partially because I wanted to convince myself of the work I was using), and it was an incredibly valuable learning experience.
In the case of CS, replication should become a matter of 'git clone' and 'make'. (Or running a prepared virtual machine, etc.) Yes, it can be worthwhile learning by reimplementing from a high-level description -- I've done it a lot -- but that doesn't excuse making replication difficult.
> Yes, it can be worthwhile learning by reimplementing from a high-level description -- I've done it a lot -- but that doesn't excuse making replication difficult.
I don't understand what you're trying to achieve by saying this. Did I somehow imply that people should be excused for not making their work reasonably reproducible?
In my sub-field, it is utterly ridiculous to expect to reproduce work by running `git clone && make` or by getting a prepared VM. Computational biologists are not known for their systems or software skills.
Well, the context here is discussion of the problem of unreplicable research. I do get the impression (including from your post and reply) that computational biologists need to get their act together on that front; see http://ivory.idyll.org/blog/ for one researcher in that field who sometimes blogs on this theme.
Indeed, we do need to get our act together. I'm still just starting, and I have a couple of ideas on proactive things I can do, but it's an uphill battle.
Thanks for that link---it's always nice to see other people in the field concerned with this. There aren't enough of them.
I'm not convinced that's "replication" in the sense that's important in science. Bugs in the original test will be reproduced in the attempted reproduction. It's useful for "this person didn't flat out lie about their results", which is probably also a good thing, and for finding issues when replication fails, so I don't disagree that it should happen. I just disagree with calling it "replication".
Fair enough -- maybe we should find a different word. It's a reasonable bare minimum standard in the computing world, where it's practical: it makes it possible to trace back from questions and problems with the claim.
Sure - it's clearly a good idea, where feasible (and git repo + whatever-special-hardware should allow the same thing, for the one reasonable case I can think of where it wouldn't be feasible).
Motivation is an issue - each verification would require a bunch of man-months of effort, so it won't happen unless there is separate funding for it or it somehow magically becomes as prestigious as putting the same effort into a new experiment/publication.
"Requiring" has the same motivation problem - those who could require it, currently would rather require those students to do something that brings funding or prestige, so they won't.
Graduate students are already required to take courses and various tests that don't bring funding or prestige. This could simply be an additional requirement, or replace an existing requirement.
In time, such a program would bring quite a lot of prestige as flaws are discovered and fixed in existing work. It's really the easiest and most immediate way to address this problem since it only needs to involve single institutions (whose faculty presumably care deeply about this issue).
It doesn't seem likely that journals will suddenly start valuing verification work. Similarly, politicians and funding agencies appear uninterested in actual science; they care only about their careers or immediate application to the politically popular cause of the day.
Graduate students already verify people's work. Because every new graduate project involves building on the work of previous science. Which by definition involves re-verifying the work to show you get the same results.
If the results are hard to replicate, or dependent on other causes, then usually that's when a project shifts or when the knowledge pool expands (for example chemistry is fraught with environmental effect dangers - fluorescent lights provide UV to reactions, your glassware has imperfections, the temperature and humidity of labs varies with climate).
I would say that falsifying an existing publication should easily be at least as prestigious and funding-worthy as having made that publication in the first place.
This is an excellent idea for a graduate course; it could be more collaborative for bigger projects, or one-offs for smaller ones, dependent on the field and type of study, as you've mentioned.
I think students should get course credit for it, instead of adding another requirement for PhDs, just to keep them from taking longer than they already do...
I think this is a horrible idea. The average age at first R01 has risen to over 40! Why add another 2 years by requiring students to reproduce work that is already published? It adds virtually no value to a graduate degree. The most difficult part of science is creating new hypotheses and designing and executing the experiments to test them. It takes years to learn how to do this as is, without adding on time to use a method you have no interest in, to re-demonstrate an experiment that has already been designed, to test a hypothesis that is already in the literature.
I like the idea. Though in my field (mathematics), I think this is actually not at all uncommon already (reviewing and sanity checking unpublished work of your adviser and their colleagues).
Then again, we have the benefit of being able to "make" our own data so to speak, don't have IRBs to worry about, etc. This is trickier in other disciplines.
I’m a scientist and I agree with this article.
Fact is that the way we incentivize science is what is causing these issues (that and tenure, but that is a longer post).
Science, at least in biology, is just like any business. Both are motivated by $.
The good news is that open science and the impact that the internets is having on science can help this problem.
In my opinion, transparency in science will fix many of these issues.
Speaking of econ, this feels like a market failure to me. I'd be in favor of redirecting some portion of government research dollars towards an independent "validation" shop staffed by scientists who attempt to independently replicate submitted findings, and vow to write up all results (even detailing events that lead to the interruption or cancellation of an experiment). Findings that cannot be replicated by the validation shop should be viewed with extreme skepticism. Researchers would quickly learn not to fudge the results, and find more effective ways to control their own unintentional biases.
It wouldn't have to be government. It could be useful as a nonprofit, I just don't think it'd be sexy enough for anyone to support, despite the sort of urgent necessity of something like this.
For subjects with a lot of data analysis, you could largely automate part of the validation, since scientists already use programming languages.
Papers would have to provide access to their raw data and the code used to process it. You could then just run code(data) to generate the figures / results in the paper.
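A minimal sketch of what that automated check could look like, assuming (hypothetically) that the paper ships its raw data, a callable analysis routine, and a file of published summary numbers; the file names and the `run_analysis` callable are placeholders, not a real system:

```python
# Hypothetical sketch: rerun a paper's own analysis on its raw data and diff the
# output against the published numbers. Everything here is a placeholder layout.
import json

def validate(run_analysis, raw_data_path, published_results_path, tolerance=1e-6):
    """Return the published values that the paper's own code fails to regenerate."""
    with open(published_results_path) as f:
        published = json.load(f)               # e.g. {"mean_effect": 0.42, "p_value": 0.03}
    regenerated = run_analysis(raw_data_path)  # the paper's code(data), assumed to return a dict

    mismatches = {}
    for key, reported in published.items():
        value = regenerated.get(key)
        if value is None or abs(reported - value) > tolerance:
            mismatches[key] = (reported, value)
    return mismatches  # empty dict: every reported number regenerates from the raw data
```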
Two problems here:
1) it assumes that the raw data is OK (which is a big assumption).
2) existing scientific code is largely terrible (completely imperative, no abstraction, no documentation, poor typing conventions), and it is very unlikely you can do
figures <- code(data)
to regenerate a paper.
This is what forced validation would aim to change though.
I agree, but this would give you a starting point in that you could see exactly what steps were used to create the figures in a paper.
It might be that their high level description (usually in papers anyway) is correct but their implementation is flawed in some subtle way that peer review doesn't pick up.
Assuming a correct implementation this would be useful for anyone wanting to use the methods of the paper.
For example, I've spent the last 3 weeks coding a numerical solver for some equations in a fluid mechanics paper. Having the code for this available would have saved me figuring out all the quirks of the solution.
"Because of science - not religion or politics - even people like you and me can have possessions that only a hundred years ago kings would have gone to war to own. Scientific method should not be take lightly.
The walls of the ivory tower of science collapsed when bureaucrats realized that there were jobs to be had and money to be made in the administration and promotion of science. Governments began making big investments just prior to World War II...
Science was going to determine the balance of power in the postwar world. Governments went into the science business big time.
Scientists became administrators of programs that had a mission. Probably the most important scientific development of the twentieth century is that economics replaced curiosity as the driving force behind research...
James Buchanan noted thirty years ago - and he is still correct - that as a rule, there is no vested interest in seeing a fair evaluation of a public scientific issue. Very little experimental verification has been done to support important societal issues in the closing years of this century...
People believe these things...because they have faith."
From Kary Mullis, the Nobel Prize in Chemistry winner (and the genius inventor of PCR) in an excellent essay in his book "Dancing Naked in the Mind Field".
Real science depends on room to fail, but starting in middle school science fairs, it's clear that negative results and "failed" experiments aren't what the teachers/judges are looking for.
I made it through to the state science fair in 8th grade. It was based around magnetism, and after months of work, it turned out my tests just weren't sensitive enough to measure any difference in any of the electromagnets I built.
When I mentioned this to my teachers, they encouraged me to fix the results with a wink and a nod. Sure I could have turned in a "failed" project and maybe got a B, but I was an A student and there was no room for failure.
I'd love to see some data on how many high level science fair projects are faked each year.
People love negative results. But proving something doesn't work requires proving you didn't stuff up the implementation. That is amazingly hard, and about 10x more work than showing a positive result.
For example, what you're talking about with magnetism isn't a negative result - you can prove that an effect is bounded by the lower limit of accuracy of your instruments. Physicists do this all the time, because it's the correct way to phrase the result. I won't speak to the quality of your teachers, though.
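To illustrate how a null measurement becomes an upper bound, here is a small sketch with assumed numbers (the instrument noise, the number of measurements, and the Gaussian-noise model are all my assumptions, not details of the poster's project):

```python
# Sketch: a measured difference consistent with zero still yields a publishable
# statement, namely an upper limit on how large the true effect could be.
import math

def upper_bound_95(measured_difference, instrument_sigma, n_measurements):
    """One-sided 95% upper bound on the true effect, assuming Gaussian instrument noise."""
    standard_error = instrument_sigma / math.sqrt(n_measurements)
    return measured_difference + 1.645 * standard_error  # 1.645 = one-sided 95% z-value

bound = upper_bound_95(measured_difference=0.0, instrument_sigma=0.02, n_measurements=10)
print(f"Any real effect is smaller than about {bound:.3f} (in the instrument's units)")
```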
This is not only in medical science, but apparently archaeology too. (Anecdote alert) A very close friend took part in a dig by a very well known professor. Artifacts that were found that disproved the professor's theories were destroyed before my friend's eyes.
It really makes you wonder what percentage of what we "know" is true.
There are a lot of issues flying around that are being inappropriately mixed up for all sorts of political purposes.
The Begley 'study' is impossible to assess, because they didn't report what the studies were, nor any of their methods, or anything. It's BS and hearsay, not science. Moreover, according to the Begley article itself, "The term 'non-reproduced' was assigned on the basis of findings not being sufficiently robust to drive a drug-development programme."
Nobody said that the purpose of every single scientific paper was to enable Amgen to go start a drug-development program.
There are many problems with the science funding situation, the glamour pub game, excess hype, funding getting sucked up by mega-projects, lack of open-access, inability to publish negative results, etc, etc, etc, but in general, it is not true that scientists are making up sexy results to get them into Nature and Science.
This is just way too true. I know someone (anecdote alert) studying at a really prestigious university in a medical field, and she told me multiple times that they intentionally cheat the results to match the expected output.
By cheating I mean... flat out lying. I don't know the implications of this (how far misinformation can get) but it seems like a wrong culture and attitude, especially for science.
Such a thing happened in my graduate school lab. I'll even cop to having a data point in one graph where I just got sick and tired of doing the experiment so there's an N of 4 instead of an N of 5 as it states it is in the methods section (this is actually impressive for the field I was in at the time, which typically didn't even do replicates at all. I am sure my colleagues' results are, at best, cherry-picked).
You know, I did this in my freshman chemistry labs. I mean, sometimes I just made up the numbers for my "observations", varying them by some amount from the predicted numbers.
At the time I definitely wondered how much of that went on in "real" science. I suspected (and still suspect) it's a lot more than is ideal.
"lost its way" suggests that science was once firmly on a sure path of rigorously verified studies, never a thrice-checked statement assumed. That was never the case.
Acknowledging, attempting to quantify, and then (at some institutions) attempting to fix systemic issues in the peer review system is not an emerging crisis. We have to fix incentives, but we aren't about to see fundamental tenets overthrown here.
It isn't clear what the "big cost" is referring to. Certainly money has been spent on poorly-founded studies with fundamentally inconclusive results. If it instead refers to opportunity cost, fortunately we have the entire future of humankind to pick up what we might have figured out earlier.
While I certainly agree with much of the factual content presented both in the article and in the comments, I think that science already has a lot of self-correction mechanisms built in. None are perfect individually, but the big, messy system has a lot of redundancy built in. It's just not always so visible to journalists or science writers, who don't hang around the scene for the years that it often takes for science to find its way again, so to speak.
For example, many of these high-profile, possibly erroneous (or occasionally fraudulent, it seems) Nature or Science articles are high-profile because they seek to address a contentious or long-standing problem in the field. When this happens, there are typically existing alternate hypotheses. It's much easier to get papers published or grants funded by seeking to test competing hypotheses than to simply try to verify an isolated study. It can also be easier to find weaknesses in an individual study by testing it in a different way, or against other models, or whatever, than by simply trying to reproduce it. Often, a single study might be impossible to directly replicate, or the underlying flaws may not be apparent until the problem is approached from a different angle.
Granted, this can take a couple years or even decades, but falsehoods (intentional or not) tend to become more apparent as their context becomes more clear.
Science is working exactly the way it has always worked.
Most papers have always been flawed, wrong, or not reproducible. There has always been pressure to publish--going back even to Newton's battles with Hooke over gravity, or Darwin's rush to publish On the Origin of Species before Wallace.
What has changed are the cultural expectations. Culturally, we've become spoiled by physics. We're used to the precision, speed, and accuracy of physics and engineering. Moore's law, the iPhone, incredible bridges, the 787 and A380 airplanes--they all just work, safely and reliably.
Note that the reproduction problems are most prevalent in chemistry, biology, medicine, etc. These are areas of science that are far more complex, and about which we know far less, than physics. It will take a long time, and a lot of failed research, to even start to approach that level of knowledge. Given the complexity, it might be impossible.
If you are a biologist and you want to keep your lab going, and you want to have RA-ships for your graduate students, you need to get grants. You aren't going to get grants unless you are cranking out publications. The days when, as a biologist, you could work on a problem for several years, being careful, checking your work before you publish... those days are over. I'm confident the system will right itself eventually, hopefully in my lifetime.
It seems like the solution to this is fairly simple. Use some statistical or machine learning method to figure out the probability that a certain thing is true using the information we know about it, like what journal it was published in, the results of replications, maybe even stuff like how crazy the result seems or the experience/reputation of the scientists, etc. There is a ton of data to work with, on top of the actual data itself.
You could predict with decent accuracy how probable a study is to turn out to be true or false. Then you can use that information to decide whether it would be worthwhile to do more studies.
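A toy sketch of that idea, using scikit-learn's logistic regression; every feature, label, and number below is invented purely to show the shape of the approach, not real bibliometric data:

```python
# Toy sketch: map study metadata to an estimated probability that it will replicate.
# Features (all hypothetical): journal impact factor, sample size, reported p-value,
# and the authors' prior replication rate. Labels are equally made up.
from sklearn.linear_model import LogisticRegression

X_train = [
    [30.0,  12, 0.049, 0.40],
    [ 2.5, 300, 0.001, 0.85],
    [25.0,  20, 0.030, 0.55],
    [ 4.0, 150, 0.004, 0.90],
    [35.0,  15, 0.048, 0.35],
    [ 3.0, 500, 0.002, 0.80],
]
y_train = [0, 1, 0, 1, 0, 1]  # 1 = later replicated, 0 = failed to replicate

model = LogisticRegression().fit(X_train, y_train)

new_study = [[28.0, 18, 0.045, 0.50]]
print("Estimated probability of replication:", model.predict_proba(new_study)[0][1])
```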
If you can work out what learning problem you've got here, and what methods you can apply towards a solution, you should go and pitch that as a start-up.
I'm not sure there would be any way to monetize it, but I considered trying it as a personal project. It would be way too much work to manually enter the data of thousands of papers into the computer though. I would also need objective data on which studies actually turned out to be true or false. Or at least which ones could be successfully replicated.
The PubMed Commons initiative[1] by the National Institutes of Health, mentioned in the article kindly submitted here, is a start at addressing the important problems described in the article. One critique[2] of the PubMed Commons effort says that that is a step in the right direction, but includes too few researchers so far. A blog post on PubMed Commons[3] explains a rationale for limiting the number of scientists who can comment on previous research at first, until the system develops more.
Some of the other comments mention studies with data that are just plain made up. Fortunately, most human beings err systematically when they make data up, making it look too good to be true. So an astute statistician who examines a published paper can (as some have done) detect made-up data just by analyzing what data are reported in a paper. A researcher who does this a lot to find made-up data in psychology is Uri Simonsohn, who publishes papers about his methods and how other scientists can apply the same statistical tests to find made-up data.
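One way to see the "too good to be true" idea is a quick simulation; the reported means and standard error below are fabricated for illustration, and a tiny probability is a flag for closer scrutiny, not proof of misconduct:

```python
# Sketch: if reported group means cluster far more tightly than their own standard
# errors allow, honest sampling noise would almost never produce that pattern.
import random
import statistics

reported_means = [10.02, 10.01, 10.03, 10.02]  # hypothetical group means from a paper
reported_se = 0.50                             # hypothetical standard error per group

observed_spread = statistics.stdev(reported_means)

trials = 10_000
as_tight_or_tighter = 0
for _ in range(trials):
    simulated = [random.gauss(10.02, reported_se) for _ in reported_means]
    if statistics.stdev(simulated) <= observed_spread:
        as_tight_or_tighter += 1

print(f"Chance of means this similar under honest noise: {as_tight_or_tighter / trials:.4f}")
```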
From Jelte Wicherts writing in Frontiers of Computational Neuroscience (an open-access journal) comes a set of general suggestions
Jelte M. Wicherts, Rogier A. Kievit, Marjan Bakker and Denny Borsboom. Letting the daylight in: reviewing the reviewers and other ways to maximize transparency in science. Front. Comput. Neurosci., 03 April 2012 doi: 10.3389/fncom.2012.00020
on how to make the peer-review process in scientific publishing more reliable. Wicherts does a lot of research on this issue to try to reduce the number of dubious publications in his main discipline, the psychology of human intelligence.
"With the emergence of online publishing, opportunities to maximize transparency of scientific research have grown considerably. However, these possibilities are still only marginally used. We argue for the implementation of (1) peer-reviewed peer review, (2) transparent editorial hierarchies, and (3) online data publication. First, peer-reviewed peer review entails a community-wide review system in which reviews are published online and rated by peers. This ensures accountability of reviewers, thereby increasing academic quality of reviews. Second, reviewers who write many highly regarded reviews may move to higher editorial positions. Third, online publication of data ensures the possibility of independent verification of inferential claims in published papers. This counters statistical errors and overly positive reporting of statistical results. We illustrate the benefits of these strategies by discussing an example in which the classical publication system has gone awry, namely controversial IQ research. We argue that this case would have likely been avoided using more transparent publication practices. We argue that the proposed system leads to better reviews, meritocratic editorial hierarchies, and a higher degree of replicability of statistical analyses."
Like all other sectors, scientific research can get inbred and peer review corrupted as a mechanism. It's similar to a character/job/performance reference: "We want to hear what other people say about you" becomes problematic when the people talking are untrustworthy, yet they hold credentials that impart trustworthiness. Peer review only worked when the majority of peers were rock-solid scientists, back when there were fewer of them, each with a personal reputation and discoveries to back it up. Wouldn't it be great if Pierre Curie, Darwin, or Tesla were doing peer reviews, or men and women of similar caliber?
At leading research schools (they aren't all universities), falsification of data exists at the student and professorial level.
Funding is definitely being cut at NASA, even as "sexy" research funding is increasing. We need excellent researchers across all areas of expertise. And we need increased accountability and transparency. And more funding.
I argue that what is most needed is increased scientific literacy at the level of political leadership and the general population so that findings can be accessible/understood/evaluated on a more concrete level by all.
Coming pathogen shifts associated with climate change, extreme weather events, and the like alarm people. And people want to be able to trust science and science reporting.
1. Reproducing work is a waste of resources. Use the money and researcher hours to develop tools that are more reliable, cheaper, and easier to use. A major reason why no one reproduces experiments is that the initial work was very difficult. Let's invest in technologies that make science easier. Reproduction (of experiments) should be done in high school biology classes.
2. Technical reproduction is rarely done, but conceptual reproduction is common. Findings in the literature become incorporated into disparate subsequent hypotheses tested by many other labs. If something doesn't add up, this will often increase the impact of the paper and eventually be addressed through experiments to resolve different models of the phenomenon.
3. There is no widespread fraud in science. Your academic career rests on your integrity. When I publish a paper, I do my damn best to make sure it is accurate. My reputation relies on it. When scientists continue to publish results that are false or fraudulent, they become discredited within the community. All graduate students in the life sciences are required to take a class on the ethics of science.
4. Publishing is a bitch and a source of real rot within the community. Fortunately, many researchers and academics recognize this problem and are addressing it. Look at the new journal eLife, or open access journals, or the increasing interest in arXiv.org (moving to a publishing model closer to that found in math and physics, which appear to be healthier research communities than the life sciences). As experiments become more technically advanced, expectations for methods sections have increased, not decreased.
5. People want to commercialize scientific findings that are relatively new - obviously that is risky! Why put the burden back on (under-funded) scientists? Drug companies are the ones that would benefit financially. Or wait until the phenomenon is better understood. Notice they're talking about drugs for complex diseases like cancer, metabolic disorders, etc., not Mendelian diseases. It's as if people complained that they couldn't get their lasers to work using a 1917 understanding of the physics of light. But Einstein demonstrated the fundamentals! Why did it take 40 years to make it work in practice?
Someone do a sentiment plot with "goodness" on the Y-axis and years ago relative to writing on the X-axis. I won't be surprised if there's a positive correlation. Successes, new challenges, and shortcomings become apparent. Whatever worked looks like it was good principle in hindsight. Whatever hasn't panned out due to new challenges looks terrible. Cherry-picking in order to build a case that allows one to write authoritatively doesn't make anyone a saint or cultural leader.
Therefore, when I see an article like this with such a broad, generalizing headline, I just think it's click-bait. Lost its way? I've read some absolutely terrible papers. "Theory of the Origin, Evolution, and Nature of Life" by Erik Andrulis is an excellent example of such unfathomably speculative garbage. I've also read a huge number of well-done papers on topics in aerospace engineering and materials science. It's always on the reader to reproduce experiments if they depend on the result, to understand the paper correctly, and so on. This is what my professors did. If part of the community is circle-jerking, let evolution run its course. We used to treat Aristotle as canon in the western world. Obviously things get better over time.
Skimmed the article. Old news. The fact that someone is raising the flag and saying "there's a lot of low-hanging fruit you can use to establish yourself as a more accurate researcher" just means we will see more of this kind of review activity, which makes the title seem inaccurate. You never know when you might free up an adjunct professor position for yourself in exactly your preferred field of research.
Especially in medical studies, where you've often got cases of n=40 or similar (even in later stages!), this is a huge issue.
In contrast, just think of the size of n you need in physics to be taken seriously!
The major reason for that, however, is that most people in the medical and biological fields lack a solid mathematical education. There are cases where papers get rejected because they are too mathematical.
There are 2 reasons you want a large sample size: (1) to have enough subjects to expect a reasonably representative random sample (typically ~30+ for social science [1]) and (2) to have sufficient statistical power. There's nothing inherently wrong with n=40 and n=40 from an unbiased sample is better than n=400 from a biased sample.
Physicists are generally looking for very, very small effects, hence very high n (higher n = higher power = more sensitive to treatment effects). This doesn't mean lower sample sizes are insufficient for other areas of research (a quick numerical sketch of the power trade-off follows below).
Anyway, the real issue is over-reliance on convenience sampling.
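To put a number on the "higher n = higher power" point above, here is a quick sketch comparing n = 40 and n = 400 per group for a two-sample t-test, assuming a medium effect size of d = 0.5 (the effect size and alpha are my own arbitrary choices):

```python
# Power of a two-sample t-test at n = 40 vs n = 400 per group, for an assumed
# medium effect size (Cohen's d = 0.5). Effect size and alpha are arbitrary.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n in (40, 400):
    power = analysis.power(effect_size=0.5, nobs1=n, alpha=0.05, ratio=1.0)
    print(f"n = {n:3d} per group -> power = {power:.2f}")
```

For a medium effect, n = 40 per group is respectable; for the tiny effects physicists chase, it would be hopeless, which is the whole point.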
There is a fair share of ignorance of stats to go around, but in the biological sciences and in medicine, getting a big enough n is often practically difficult or impossible. Sometimes it's the funding for the study that is to blame. But biology is messier than we often like to admit to ourselves. Sometimes it's just hard to get the materials. I was in a session just this week that showed two kids who were cured of a lysosomal storage disorder by gene therapy. It was incredible. But still an n of 4, I think. The process of growing the cells and transferring the genes is so time-consuming and difficult, that's the best they could do. That's an extreme case to be sure, but even more mundane stuff can suffer. For example, if your study requires a muscle biopsy instead of a blood test, your n is going to be lower because muscle biopsies are a tough sell to patients. People, animals, and cells are all a lot more fickle than the bits and electrons most of us around here are accustomed to working with.
Mathematical biologist here. There are lots of criticisms of the state of mathematical competence in biology I could offer, but power analysis really isn't one of them-- that is typically the one thing that _is_ taught well, if only because data collection can be so difficult. That is not to say that no clinical trials have ever suffered from badly handled statistics, but the issues are usually subtler than "we forgot to determine how many observations we'd need to detect an effect."
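For what it's worth, the "how many observations would we need to detect an effect" step is a one-liner these days. A minimal sketch, again assuming a two-sample t-test, an effect size of d = 0.5, 80% power, and alpha = 0.05 (all assumptions on my part):

```python
# A priori power analysis: subjects per group needed to detect an assumed
# effect of d = 0.5 with 80% power at alpha = 0.05. The effect size is a
# guess that has to be justified from prior work.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"Need about {round(n_per_group)} subjects per group")   # roughly 64
```

The subtle part is exactly what the parent says: the calculation is easy, but picking a defensible effect size and design is not.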
There is also the criterion of "impact factor" for papers and publications. It's very similar to a "karma" system as used by HN, Reddit, etc., and it has many of the same problems. Imagine a system where you have to choose between doing research that might be vital but probably won't pan out, versus something safe and predictable that ensures you get paid next year.
The problem is not so much science as the management and funding of science, which have been infected by the same managerialism that causes so many problems in big government and corporate projects.
Here's a case study of mass-media attention-seeking through pseudo-science: an evaluation of behavior on sinking ships, coincidentally issued five days before the centennial of the Titanic's sinking. POS, but well, ok, surely serious scientists wouldn't take such work seriously. The Proceedings of the National Academy of Sciences of the United States of America (PNAS) received the paper for review on May 2, 2012, approved it on June 29, 2012, and published it on July 30, 2012. Science has lost its way....
For details, see
http://purplemotes.net/2012/04/22/deadly-sex-discrimination-...
While I would say that the drop in rigour is worse in Medicine than Physics, it is clearly still present even there.
The way funding works, in particular, means people publish stuff-that-will-get-references with an attitude similar to web start-ups iterating their code (by which I mean too damn fast and without listening to the peer reviews).
I'd love to see this change, but I don't know how central agencies can easily/affordably work out which research(ers) to fund and which to cut. As others have said, by definition we don't know what work was useful/valid/critical until many years later.
Why do authors and publishers of articles like this, which invariably turn out to be about medical knowledge/studies/research, resort to using the misleading, incredibly broad word "Science" in their titles? All of "Science" has lost its way? Really? Physics? Chemistry? How about this for a headline: "Journalism has lost its way"? The article could then simply be a list of all the sensationalist, purposely misleading crap that's published in major publications. Long list.
This will continue, because the tendency to make excuses for it is directly proportional to the ability to do anything about it, same as with most/all social ills.
Reproduction of results sounds like a good area for AI and automation: thankless, not very gratifying work, largely mechanical rather than creative.
It would be very cool if someone came up with a unit test framework for various fields of science. Then we could make reproduction unit tests a requirement of publishing, so anyone with the proper equipment/framework could sync and run the tests themselves.
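Here is a toy sketch of what such a "reproduction unit test" could look like for the purely computational part of a paper: re-run the published analysis on the deposited data and assert that the reported effect size comes out within tolerance. The file name, column names, reported value, and tolerance are all hypothetical.

```python
# Toy "reproduction unit test" for the computational part of a hypothetical
# paper: re-run the analysis on the deposited data and check the reported
# number. File name, column names, and the reported value are placeholders.
import pandas as pd

REPORTED_COHENS_D = 0.52   # effect size claimed in the (hypothetical) paper
TOLERANCE = 0.02

def cohens_d(a, b):
    pooled_sd = (((len(a) - 1) * a.std(ddof=1) ** 2 +
                  (len(b) - 1) * b.std(ddof=1) ** 2) /
                 (len(a) + len(b) - 2)) ** 0.5
    return (a.mean() - b.mean()) / pooled_sd

def test_reported_effect_size_reproduces():
    df = pd.read_csv("published_data.csv")   # data deposited with the paper
    treatment = df.loc[df["group"] == "treatment", "outcome"]
    control = df.loc[df["group"] == "control", "outcome"]
    assert abs(cohens_d(treatment, control) - REPORTED_COHENS_D) < TOLERANCE
```

Wet-lab work obviously can't be "synced and run" like this, but the statistical pipeline from raw data to reported numbers often could be.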
In the spirit of inquiry, I'm waiting for the other half of this story. These articles seem sensationalist. So what's the catch? Does it have to do with studies being pre-clinical and more likely to be wrong? Or is it that they're being held to a different level of scrutiny? Or the evidence shows correlation, but not statistical significance?
Why not allow comments from everyone? We all know that closed clubs lead to politics. Many life scientists are extremely allergic to feedback, and they go to great lengths to avoid scrutinizing their own or others' results.
When you make publishing a certain number of papers mandatory and tie it to tenure, it is inevitable that quality will suffer. We need to rethink quantitative metrics and move to more quality-oriented ones.