Hacker News
Bias Against Novelty in Science (nber.org)
105 points by wyndham on June 8, 2016 | 44 comments



Having experienced bias against publishing truly novel findings, I can testify that this form of bias is a potential hazard to scientific progress.

Back in the early 2000s, in the course of clinical practice I encountered some unexpected treatment outcomes which correlated with a particular, if unexpected, diagnosis. I tried to find information about this, but to my surprise a thorough search of the literature came up empty.

I felt a duty to report my observations. Putting it into a formal paper was difficult for me to accomplish, but I did get it organized. Believing in the idea of open access, I decided to submit to a newly established open-access publisher.

Peer review did not go smoothly. Because the subject had never been reported before, at least in a peer-reviewed journal, the reviewers expressed doubt about the reality of the data. One "peer" was determined to disparage the report to the extent of making glaringly inaccurate, distorted comments about the nature of the condition in the subjects of my report. After a rigorous defense of data and procedure the editors decided to ignore the negative review and the report was published.

Novelty is not only apparent in terms of methodology or cross-discipline application of ideas, but also arises when attempting to share previously unreported observations. The closed-mindedness of many in the academic community (who are the predominant peer reviewers) is apparently a pervasive problem in the sciences, and possibly endemic in academic culture in general.


It sounds like your review process went very well. Those hardened anti-novelists have probably rejected piles of genuinely crank material. They're needed to keep the literature free of true rubbish. Of course they're not perfect so sometimes rubbish gets through and sometimes good work gets blocked, but if the researcher really believes in their work, they'll go to extra lengths to break through, as you did.

Maybe the world would be a slightly better place if your overly skeptical reviewer had reviewed this http://briandeer.com/mmr/lancet-paper.htm


Keeping "rubbish" out of the journals is a reasonable goal, though judging by what's in the journals I read, you're right that rubbish still leaks through from time to time.

I believe I was the beneficiary of good fortune in overcoming the barriers, but luck is a very thin and unreliable conduit to success. Indeed I know authors not as lucky, and their good work was blocked.

Fields of research are often highly specialized and a small community of interest can become insulated from voices outside that world. It's my impression that was the basis of the difficulty I encountered.

Not that I have a solution to the problem, but I've wondered how many worthwhile novel contributions by "outsiders" not affiliated with a field's main research centers are being lost under current peer review regimes.


Extraordinary claims require extraordinary evidence. It's very good that you pushed back so hard; I see no other way to overcome this problem. The status quo was also built on (usually) a lot of evidence; it would not be good if it were easy to overthrow.


Yep. Professor in molecular biology here; I've been at numerous universities you'd know. I can confirm that very unscientific groupthink is the main driver of scientific progress nowadays. Being a lone wolf or rogue isn't really a viable option given the funding climate. Perfect-sheep behavior extends all the way from college to the professor level. I think this would surprise most lay people, given the conception of scientists as off in their own world.


I think a number of things contribute to the rise(?) of groupthink in the sciences (particularly in biology). One is the increasing age [0] of the peers in the peer review systems. Younger people are typically willing (and able) to perform more risky novel research. However, the biggest single factor is the drop in science funding, which leads to fear and an inherently conservative response (on the part of both grant applicants and reviewers).

[0] http://scienceblogs.com/transcript/2009/12/03/nih-grants-by-...


There are also the NPM-ish (New Public Management) attempts at using things like publication frequency and number of references to gauge the ROI of research, and therefore the performance of the scientists involved.

All this runs smack into Campbell's law.

https://en.wikipedia.org/wiki/Campbell's_law


Wow, thanks, I had never heard of Campbell's law, and it puts into concise form what I believe about a lot of things (from science publishing to interest rate manipulation).


I think I originally ran into a British variant, called Goodhart's Law, while reading about reforms to their National Health Service.

https://en.wikipedia.org/wiki/Goodhart's_law

The particular example was how hospitals were given a negative rating based on how many patients were left waiting on stretchers (the rating system being introduced in an attempt to produce market-like forces within the NHS). To get around this, staff started removing the wheels from the stretchers while the patient was waiting. That way it was a bed, not a stretcher...


You can be a rogue if you've been a famous professor for decades, by which point your ideas are stultified or just state-funded intellectual masturbation... or you might get lucky like K. Barry Sharpless and stumble on an old idea that works really well and was forgotten.


This is the beginning of the end of life sciences research.


Meh, "Science advances one funeral at a time", today is no different than it ever really was. Nice thing about science, though, is that it does advance and one funeral at a time is relatively quick, as compared to religions, governments, and DMV lines.


I feel like if you break down what's being said here, that it's just a consequence of having to communicate ideas to people.

I publish "Gravitational constant is actually (some slightly more specific thing)", with extension of existing methods. Someone sees that and already will be convinced just by the title!

I publish "Gravity is actually caused by micro-gnomes pushing things around with walkie-talkies", and it's pretty novel. I don't really have existing work to base it on.

Why should anyone believe my work has any basis in reality then? The burden of proof is on me to convince people. Fool-proof methodology, simple explanations. Bringing it back to current understanding helps. But I'm the person saying everything is different, the world owes nothing to me just yet.

Not to mention that the header graph is totally confirmation bias. Yes, you're going to cite the original paper describing the effect you're studying. You probably won't cite most incremental research. Definitionally, novel papers that get any traction will collect references.

---

I actually have an example of my own novel research! I'm the first person to post about a bug with Ubuntu's (at the time) new IME manager[0] on Ask Ubuntu. Turns out there was a real thing (and it wasn't me being silly). So now I have 550 points on Ask Ubuntu because of this question, and get notifications about it all the time.

I have cornered the "Ubuntu 13.10 keyboard bug" citation space. Mainly for being first. And I have gotten extra rewarded for it. Novel papers enjoy the same perks when they get any traction.

[0]: http://askubuntu.com/questions/356357/how-to-use-altshift-to...


It's likely not the same phenomenon, but I'm reminded of the story of Millikan's oil drop experiment and how the accepted value slowly changed from Millikan's reported value to the current accepted value.

https://en.wikipedia.org/wiki/Oil_drop_experiment#Millikan.2...


Their full paper is here: http://cepr.org/sites/default/files/news/CEPR_FreeDP_180416.... . If I followed it correctly, they measure "novelty" by:

- bucketing citations into journals

- building a network of journals, with edge weights defined by the number of papers that cite both journals

- using cosine similarity between journals (cf. https://en.wikipedia.org/wiki/Similarity_(network_science)#C... ) in this network to define how banal it is to cite them together

- for a new paper, recomputing this matrix and defining novelty by summing 1 - banality over the new edges

I don't know if their method reduces to a standard thing from network science (eg. betweenness, centrality, ?) but I would not be surprised if it did.
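If I followed the method right, a toy sketch of that novelty score might look like the following. All journal names and co-citation counts here are invented for illustration; this is my reading of the measure, not the authors' code:

```python
import math
from itertools import combinations

# Toy co-citation counts: how many papers cite both journals in a pair.
# (Journal names and numbers are invented for illustration.)
counts = {
    ("Cell", "Nature"): 50,
    ("Cell", "Science"): 40,
    ("Nature", "Science"): 45,
    ("Nature", "JFinance"): 1,
    ("Science", "JFinance"): 2,
    ("Science", "Econometrica"): 1,
    ("Nature", "Econometrica"): 1,
    ("JFinance", "Econometrica"): 30,
}
journals = ["Cell", "Nature", "Science", "JFinance", "Econometrica"]

def cocite(i, j):
    """Symmetric co-citation count; zero on the diagonal."""
    if i == j:
        return 0
    return counts.get((i, j), counts.get((j, i), 0))

def cosine(i, j):
    """Cosine similarity of the journals' co-citation profiles:
    high when i and j tend to be co-cited with the same journals,
    i.e. when citing them together is banal."""
    vi = [cocite(i, k) for k in journals]
    vj = [cocite(j, k) for k in journals]
    dot = sum(a * b for a, b in zip(vi, vj))
    norm = math.sqrt(sum(a * a for a in vi)) * math.sqrt(sum(b * b for b in vj))
    return dot / norm if norm else 0.0

def novelty(cited):
    """Sum 1 - similarity over all journal pairs a new paper cites:
    combining journals that are rarely cited together scores high."""
    return sum(1 - cosine(i, j) for i, j in combinations(cited, 2))

# Citing two journals from the same cluster is banal; bridging the
# life-science cluster and the finance cluster scores as "novel".
print(round(novelty(["Cell", "Nature"]), 3))
print(round(novelty(["Cell", "JFinance"]), 3))
```

The second score comes out higher than the first, which matches the intended behavior: cross-cluster citation pairs are treated as novel.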

A closely-related paper with a totally-different writing style did something like this a few years back:

"Citing for High Impact" https://cs.stanford.edu/people/jure/pubs/citations-jcdl10.pd...


My preconceptions here are the reverse - that mundane studies seeking to replicate existing studies to confirm their findings are boring, and as such are not pursued or funded, despite their importance. I understand that 'novel' is being used in a slightly different context here, though.


I think an interesting dichotomy exists here and you touch on it.

Studies that seek to replicate or confirm work are rarely funded. Similarly, studies that depart too radically from standard paradigms are also neglected.

It seems that incremental novelty that can logically be inferred tends to be the most attractive to funding agencies/ review committees.

In some ways it seems a bit like a bell curve with the probability of being funded (y axis) vs the radical nature of the proposed research (x axis).

This is an oversimplified view but something that struck me.


In the biological sciences it's a well-known secret that most of what you write your grants on is work that you've already done. This is not about being "boring" vs. interesting; it's pragmatism to survive and get funding.


Is it not that way in other sciences? It's funny that I just assumed all sciences did the same thing. In fact, my wife works in a museum, and I was stunned to find out that they write grants for work that isn't even started yet.


> Is it not that way in other sciences?

I can speak to the fact that this is absolutely the case in the physical sciences.


CS is somewhere in the middle of those two extremes. You write grants for things you haven't done yet but also you typically have some initial work (maybe a minor publication, maybe not) to prove the direction is promising enough to fund.


That's interesting. That's how a friend of mine who's finishing up her biology major explained it when she told me about how her professor's lab got grants.


I also think there is a huge bias for publishing "novel" findings... that's how you get into bigger journals.


Status-quo bias is a phenomenon that is well documented in the psychology literature: https://en.wikipedia.org/wiki/Status_quo_bias Bias for established ways of thinking seems especially common (to me) in the physical sciences. I do not think that should be a surprise, since the fundamental view of natural processes does not advance quickly.

Perhaps this is why Einstein remarked that if you really believed in your ideas, you should quit being an academic and get a job like lighthouse keeper. That maybe works in mathematical physics, where all you need is a pen and paper, but in the other sciences it does not seem like a great plan.

What I have observed many do when their ideas fall foul of the times is go get a line-of-business job that does not require a lot of mental energy. That is a plan I have seen work for many. Finance is particularly good for that: it is a field where knowing how to think in a contrarian fashion and knowing how to run with the herd are both valuable. I see a lot of mathematical physics folks do that as a way to reconcile the glacial pace of scientific development with the urge to think and do something new and innovative.

There are quite a few famous mathematicians who paid the bills with finance work. For instance, Gauss wrote insurance contracts and used his distribution to help price the risk. In modern times, Jim Simons runs a hedge fund.



Good! There should be a bias against new things until the new things are proven to be correct.

Optimizing science based on "what do people who want to get tenure want" seems like a great way to be bad at science.


I agree. I have some experience doing medical research, and this seems like a given to me. My friends who work in softer sciences like psychology have different opinions, though, since they often work in degrees of correctness.


That's a good point. Do you feel this is becoming more of the norm in certain realms of scientific research? What is your view?


My aunt has been studying the Sun's magnetic field for decades. She has measured (yes, measured) parts of the Sun's inner magnetic field (quite complex to do, actually). Div(B) is not null there. She can't get it published because of Maxwell's equations and because it's a "new" discovery. Maxwell starts with "in the void"; I think we can agree that the centre of the Sun is not the void. It is incredible that a scientist MEASURING stuff can't get her measurements published in recognized science papers.


> Maxwell starts with "in the void"; I think we can agree that the centre of the Sun is not the void.

Maxwell's Equations apply in the presence of sources, i.e., not just "in the void". That includes the equation Div B = 0. In other words, "sources" are electric charges and currents; there are no magnetic charges and currents. So, as GFK_of_xmaspast commented, most likely there is an error in her measurements somewhere, and she should be trying to find it.
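For reference, in the macroscopic (in-matter) form of Maxwell's equations, the source terms appear only in the divergence of D and the curl of H; the no-monopole equation is unchanged:

```latex
\nabla \cdot \mathbf{D} = \rho_f \qquad
\nabla \cdot \mathbf{B} = 0 \qquad
\nabla \times \mathbf{E} = -\,\frac{\partial \mathbf{B}}{\partial t} \qquad
\nabla \times \mathbf{H} = \mathbf{J}_f + \frac{\partial \mathbf{D}}{\partial t}
```

So Div B = 0 holds inside a medium (the Sun's plasma included), not just in vacuum; a genuinely nonzero divergence of B would amount to evidence for magnetic monopoles.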


My expectations are that if she discovered a magnetic monopole there's probably something wrong with her calculations.


Has she posted on arxiv to get feedback from other experts?


Is this bias, or does it just take time for other researchers to go into areas opened up by novel papers and then publish other papers that cite those novel papers?


Intuitively this sounds like confirmation of the current system. The thing is that it takes time to understand truly novel research, while a measurement like the discovery of the Higgs has an immediate impact, because the Higgs mechanism had been studied for 40 years.

(To be clear, I believe that the focus on citations is stupid for other reasons, but this does not seem to be a failure of the system.)


If anything I've heard the other thing - that it's hard to get published unless something is novel enough. There's not much interest in studies that just confirm or slightly extend, which is why reproducibility becomes an issue.


Somewhat novel research is fine, just as long as it conforms to what is expected. Novel AND counter to the expectations of the field is much harder to publish.


Is this because journal editors like to see their own lines of research expanded on? (Incentive problem?) Or harder to referee novel papers? (Time and attention problem?)


sad but very true


Can you think of a low-impact paper that later turned out to have been too novel to get published in a high-impact journal at the time? How often do you think this happens? I for one don't see this a lot. What I see more often are people who think their work is super novel, and either it's not, or they don't have enough evidence and don't know the field well enough to know that they are wrong.


What if it is an education issue? Maybe by training young scientists for way too long before they start doing science, we "overtrain" them, and their mental models become over-fitted to current science. Naivete can sometimes be a major source of creativity (that's why people who cross disciplines sometimes have great impact).



Obligatory mention of pseudoscience in a topic about real science: check. Might as well hail Deepak Chopra as Einstein reincarnated.

https://en.wikipedia.org/wiki/Rupert_Sheldrake


Obligatory knee-jerk accusation of “pseudoscience” at the mention of Sheldrake: check. It’s a topic about overly conservative attitudes in the scientific community. Thanks for demonstrating that tendency with your insulting comment.


Sheldrake's ideas of "morphic resonance" are the worst kind of pseudoscientific garbage. The man literally believes that insulin molecules inherit a collective memory from all insulin molecules that have existed in the past.

He has constructed a New Age world of fantasy based on a fundamentally new type of physical phenomenon that is completely unsupported by physicists.

Extraordinary claims require extraordinary evidence, and he has not delivered such evidence.



