But there is one feature I notice that is generally missing in Cargo Cult Science. That is the idea that we all hope you have learned in studying science in school—we never explicitly say what this is, but just hope that you catch on by all the examples of scientific investigation. It is interesting, therefore, to bring it out now and speak of it explicitly. It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty—a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid—not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked—to make sure the other fellow can tell they have been eliminated.
That's great in principle. In practice, to keep doing the work of science you need to keep getting funding; and to keep getting funding you need to keep publishing (read: publishing in respectable conferences and journals).
Unfortunately, the most respectable journals have a nasty tendency to refuse to publish negative, or even equivocal, results. That puts a lot of pressure on researchers to find a signal, somehow, anyhow, and to viciously attack any suspicion that the signal was just a spider on the lens or whatever.
Which is not to say Feynman is wrong. On the contrary: reform is sorely needed. But reform can only come from the inside, and the only people "inside" are those who accept that they must play by the rules, rotten as they are.
Bottom line: we all want to Do the Right Thing, For Science. But doing the right thing and still doing science is getting harder and harder as time goes by.
"As far as I can tell, the BICEP2 scientists haven’t suffered much professionally from the fiasco. When David Spergel talked here at Columbia about the subject, he noted that this hadn’t stopped the PI, John Kovac, from getting tenure at Harvard. In the book, Keating mentions some “embarrassment and guilt”, but no negative professional consequences."
In the long run, it is good for society if researchers can make mistakes and keep their careers.
In the long run, it is good for society if journalists are sometimes embarrassed by reporting on a story when they don't have access to the data behind the story.
So in the long run, it is probably for the best that these researchers suffered no adverse effects. If anyone changes their behavior, I'd rather it be the journalists. They should not give too much publicity to teams making big announcements while hiding their data.
> In the long run, it is good for society if researchers can make mistakes and keep their careers.
I'm all for people making honest mistakes without career-ending repercussions. But this wasn't an honest mistake; this was outright deception. They knew there was a good chance their experimental results were faulty (due to dust), and they also knew that a different team had the ability to confirm or deny this uncertainty. But instead of owning up to this, they decided to wilfully mislead the world about the quality of their findings. Stuff like this is exactly why people lose trust in science - why climate-change deniers and anti-vaxxers gain more credibility than they deserve.
The sad part of this whole story is that the next time such an opportunity presents itself, people have no reason to do what's best for society/integrity. Heads I win, tails you lose.
It wasn’t a mistake. These guys made a conscious decision not to disclose critical facts in order to hype up their findings and get on the NYT front page. This is extremely unfortunate because it’s almost becoming a pattern in many scientific fields. Researchers want fame and funding (or, even worse, a get-rich-quick scheme). They put out claims of revolutionary results while sneaking in tiny little caveats that an unsuspecting public might not even notice. They do this with full knowledge that none of it measures up to acceptable scientific ethics. Just getting on the NYT front page means they will have an immediate lineup of less technical folks with funding and/or offers of higher-up jobs. Even if at a later date their claims get busted, it’s a minor embarrassment that doesn’t get the same level of publicity, and everyone moves on saying “more work needed” while they have already cashed out on the hype they created.
Lots of people are playing this game, and new people are encouraged to play it when they see how other incompetent and ethically loose people quickly climbed the ladder. It’s truly a cancer in science.
The arms race between bullshit artists, who find shortcuts to extract resources "for free" using the right spin, and the rest of society is as old as the world.
This is even more pronounced in business: selling dreams has always been the surest way to profit. Who wouldn't want to believe the solution has been found and is easy and cheap?
Note that there's no "fixing" this optimization arms race. Not even in principle—it's a core feature of Nature.
Shortening the path between resource generation and resource consumption helps: the gradients are simpler and quicker to adapt. Essentially, it keeps rights and responsibilities closely tied. Science is certainly out of whack, with its reliance on a super-complicated system of wealth redistribution and taxation (the people who pay have no say).
Yes. If you report a spectacular result, and it is not proven wrong within ~5 years, its effect on your career is almost the same as if the result had been correct. The unfortunate thing for these folks is that their claim was shot down quickly.
> Researchers want fame and funding (or, even worse, a get-rich-quick scheme)
Of course they do. (Even worse?)
Actors want to be mega-famous. Nobody bats an eye... Musicians and basketball players want fame and loads of money. That's perfectly OK. They deserve it for playing a bass or being able to hit a ball. Yes, eeeeverybody wants to get rich quick. The difference with scientists is that scientists get blamed for this all the time.
Is this a reasonable thing to ask of journalists? I can't imagine a reporter telling their editor, "Yes, this team tells us with confidence that their findings have confirmed a huge story, but I disagree with their interpretation of the data, so I don't think we should run it." A reporter shouldn't be embarrassed that a team of scientists lied to them. Obviously the publications should retract/update these stories, but journalists, even ones with science backgrounds just aren't equipped and trained to be able to independently verify every claim they report on.
A journalist should always be embarrassed when they publish some bullshit because a source lied to them. Why should science reporting be any different?
Science reporting should be more than spell-checking university press releases.
It depends on the level of deceit, and who the source was. And we all have good reason to be annoyed if the deceitful source was the team performing the investigation.
> Science reporting should be more than spell-checking university press releases.
This implies a false dichotomy. Many journalists know a lot about the matters they cover, but of course they can be misled, just like the rest of us, by people who are nominally the experts in one corner of the field.
Science reporters should be just as skeptical of their sources as any other reporters. Especially where money, power or honors are involved.
The fact that most reporting on BICEP2 didn't even mention the politics behind it and the competition with the Planck experiment tells you a lot about the quality of coverage.
I think it is quite clear that the original reporting of BICEP2 was over the top, and that talking with some of the group's competitors would have pointed to problems in the experiment and analysis.
The original reporting sounded to me more like PR than journalism.
Even pointing out the old joke that 5 sigma discoveries tend to be wrong half the time would have been helpful.
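For context on the joke: a quick sanity check of what "5 sigma" nominally means, assuming a Gaussian one-sided tail (the standard particle-physics convention). The punchline is that the nominal false-alarm probability is tiny, yet many 5-sigma claims still fail because systematic errors, like BICEP2's dust foreground, are not captured by the statistical error bar.

```python
import math

def one_sided_p(sigma):
    """P(Z > sigma) for a standard normal variable Z."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

# Nominal chance of a 5-sigma statistical fluke: about 3 in 10 million.
print(f"5 sigma: p = {one_sided_p(5):.2e}")  # ~2.87e-07
```

So if half of 5-sigma discoveries were really wrong, the failures would be dominated by mismodeled systematics, not by statistical bad luck.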
> but of course they can be misled, just like the rest of us, by people who are nominally the experts in one corner of the field.
And they should learn their lesson, and do better the next time. Unfortunately, this does not seem to happen (with a few notable exceptions).
The best science reporters earn their keep by balancing "wow!" and "really?" at moments like this. Sometimes it's a matter of spotting an outright hoax (e.g. the Piltdown Man fossils in the early 20th century).
Other times, what's at stake is an overly decisive attempt to interpret exciting data that might not be so exciting after all. The cold-fusion "discoveries" of 1989 fall into that camp, along with this gravitational-wave claim.
It's hard for journalists to referee everything in the first few hours. But by talking to other experts in the field, and by being willing to comb through the methodology underlying the claims, it's possible to get a sense within a few days (weeks?) of what's holding together very, very well, and what's starting to seem a little shaky. There's nothing wrong -- and a lot right -- with writing post-announcement articles that acknowledge such uncertainties.
It’s not hard for a journalist to pick up their phone and get some independent scientist with knowledge in the field say what they think about the findings. And then not hide the quote about inconclusive data and looking forward to further confirmation in the last paragraph of the article.
It's not really mistakes that are the issue, though; it's more the fact that they got reeeeelly close to outright fabrication.
They had a very shaky estimate of an important background (the foreground) and instead of being cautious they put out a video like this as part of the announcement: https://www.youtube.com/watch?v=ZlfIVEy_YOA
Wow. I don't see how one can exonerate the researchers and condemn the messengers.
> If anyone changes their behavior, I'd rather it be the journalists
> it is good for society if researchers can make mistakes and keep their careers
According to the article, it was fraud, not "some mistakes". How is it good for society that frauds remain unpunished? Why would we want the behavior of frauds to continue unchanged?
Even if it isn't fraud, should there be no consequences for our actions? If my software fails miserably, would it be best that I suffer "no adverse effects", or perhaps should the market and investors allocate resources to people who can do a better job?
Why does everything have to involve some rant against journalists? This was an extremely well credentialed collaboration claiming to have made a major discovery. It took the physics community itself months to unravel it. And the articles did actually mention that verification was still required.
Researchers frequently seek publicity just as much as journalists look for stories. I don't think too many journalists would complain if researchers were required to deliver to them all the data they used in their research. Don't think that's a sustainable model though.
Go read Hossenfelder's review instead; it's a better teardown. The fourth paragraph is:
> “Losing the Nobel Prize” is well written and engaging and has a lot of figures and, ah, here I run out of nice things to say. But we all know you didn’t come for the nice things anyway, so let’s get to the beef.
The linked article makes some very pertinent additional points (as well as offering a counter to Keating's claim that Hossenfelder was unfairly reviewing an uncorrected proof, though Hossenfelder herself makes a good case for that being a non-sequitur.) For example, there is the part about how they needed data from the Planck team, and tried to get it without saying what they were doing, which is how they ended up scraping data, that they apparently did not fully understand, off a presentation slide.
The article seems one-sided and, from the outside, I don't know what I don't know. Can someone comment on other substantive points of view? And was BICEP2 really as reckless/dishonest as is portrayed?
EDIT: Some other perspectives in the comments:
> ... part of what gets overlooked in some of this is that BICEP2 had done lovely work — they had great data at far higher sensitivity than any previous polarisation experiment, if I am recalling things correctly; the analysis was flawed but the data was good.
> PRL referees saved the BICEP2 people from themselves by getting them to remove the worst of their mistakes. If the referees had quickly let the paper through, likely the paper would have had to have been formally retracted.
> I did some searching around for examples of any apology from the BICEP2 people or negative effects on any of their careers, could find nothing. A quite remarkable fact is that Kovac’s Wikipedia entry still has the following as description of his career: / “He is the principal investigator of the BICEP2 telescope, which is part of the BICEP and Keck Array series of experiments.[3][4][5] Measurements announced on 17 March 2014 from the BICEP2 telescope give support to the idea of cosmic inflation, by reporting the first evidence for a primordial B-Mode pattern in the polarization of the CMB.[6][7][8] If confirmed, this measurement provides a direct image of primordial gravitational waves and the quantization of gravity.”
> If Keating is complaining about how a single prize determines research (ehm… does it really?) then he’d better complain about tycoons determining research by handing out money to people from failed experiments.
...
The latter is a troubling sign to me. It was predictable (I predicted it, at least) that as governments cut taxes and cut funding to universities and science, more power would concentrate in the hands of a wealthy few (private and corporate) and research would become far more dependent on pleasing them. What value is research without independence? Arguably it does more harm than good, by creating disinformation and undermining legitimate research.
There seems to be plenty of evidence that almost everyone in the community thought the claims made by BICEP2 were reckless and hyped for the purpose of generating news headlines. They didn’t even clearly disclose the possibility that the claims could be wrong, even though they full well knew they could be. A reviewer forced them to add that possibility to the paper, which they did only grudgingly. If it weren’t for the reviewer, the paper would now be looked at as an outright lie. This was one of the most sought-after experimental results, and it was expected that the team would be very careful in making any claims. So when big claims were indeed made, it didn’t take too long for an independent review to refute them.
I don't know how to estimate their recklessness from the outside, but it is a simple fact that, however sensitive, they only had one waveband and so no real way of estimating the foreground contamination. I think it is also well established that the foreground estimate was made by looking closely at pictures from a presentation showing preliminary Planck skymaps of the dust emission, and then building a foreground model from that.
I’ve often wondered: should the number of Nobel Prizes awarded scale with the population? There are far more scientists working today than 50 or 100 years ago, and capping the number of winners makes them more prestigious over time. Perhaps they will become TOO prestigious...
The fact that they can’t be awarded to a collective rather detracts from the prestige, because for big experiments it is really the collaboration as a whole that made the discovery, and not some guy at the top.
That is true for any discovery after the stone hand ax.
A "true" discovery that can be ascribed 100% to one person, instead of to a network of people working jointly in space and time (the dead are involved too!), would require that person to work without using any knowledge or tools created by others (in space as well as in time).
Humanity is more like a brain - the secret of our success is in the connections between us (again, in space as well as in time).
My own conclusion, but I found this in support:
https://youtu.be/jaoQh6BoH3c ("Henrich is professor of human evolutionary biology at Harvard University.")
> The number of individuals who know how to make a can of Coke is zero. The number of individual nations that could produce a can of Coke is zero. ... Invention and creation is something we are all in together. Modern tool chains are so long and complex that they bind us into one people and one planet. They are not only chains of tools, they are also chains of minds: local and foreign, ancient and modern, living and dead — the result of disparate invention and intelligence distributed over time and space.
----
Same in business. Each time you hear about an extremely well-paid executive or a "self-made(!!?) billionaire" remind yourself that those people's main "accomplishment" is to live today. How much of the network effects (of the human network in space and in time(!)) does one person "deserve"?
I would tell those "self-made" people complaining about taxation that if they go into the deep forest and stop using the human network - which is everybody else (let's even give them access to the dead people's achievements in the form of vast knowledge) - they can keep 100% of the grass and mud huts they manage to create.
Humanity's wealth and, more importantly, its ability to create more of it are almost all network effects. If all the people alive worked a) in isolation and b) without anything they got from previous generations, what would we accomplish?
Einstein was a genius, so was Newton. What would they have accomplished a) without the exchange with their peers (network in space), b) without building on prior knowledge (network in time)?
So, who deserves that Nobel Prize? That billion dollar empire?
I don't mind any of those existing - I'm not an "evil communist". I do mind when people get a bit too excited about "deserving" and "stealing from me" (taxes). Relax and remember that you are one lucky bastard, able to benefit from network effects that you did next to nothing to help create (because most of it was and is created by everybody else).
Hey, thanks for writing all that out. I shared your comment with my family and it really made a positive difference in a long-standing disagreement we’ve had. I hope you continue to post your thoughts here.
Derek Price's "law of exponential increase" from "Science Since Babylon" (1961) is an observation that there has been an exponential growth of science over a long period of time.
The Nobel Prize hasn't had a corresponding growth.
The population has also grown exponentially, so there is a definite correlation. Should we some day reach zero population growth, we'll learn if Price's law breaks down.
Science as an institution has definitely grown exponentially by any metric: the number of journals, papers, citations, academic positions, administrative positions, PhDs awarded, PhD students.
This does not necessarily correspond to growth in real rigorous knowledge about the world.
The former is easy to measure, the latter is hard to measure, but is what we should care about.
Isn't the error rate and the disagreement rate in the latter so high that any attempt to quantify it would end up in flame-wars and bicycle sheds?
That is, do you have any point other than to express a sort of personal cynicism?
To get back to the topic, do you believe there is less growth in "real rigorous knowledge about the world" being done now than 100 years ago when the Nobel Prize was established? Because I would disagree with that assessment.
> Isn't the error rate and the disagreement rate in the latter so high that any attempt to quantify it would end up in flame-wars and bicycle sheds?
Yes. Still much better than counting papers/journals/conferences/citation/degrees/students/PhDs.
> That is, do you have any point other than to express a sort of personal cynicism?
I believe that measuring science quantitatively is damaging. It creates a situation where those who game the system do much better than those who do good science.
> do you believe there is less growth in "real rigorous knowledge about the world" being done now than 100 years ago when the Nobel Prize was established? Because I would disagree with that assessment.
As a lay observer, I'd say that theoretical and particle physics is not advancing very quickly compared to 100 years ago. Late 19th century and early 20th century were magical times - 1905 alone brought special relativity, the photoelectric effect, and E=mc^2 (and that's just Albert Einstein).
Other fields, such as the more "mundane" areas of physics saw and still see a lot of progress, but Nobel prizes are rarely awarded there. Medicine had a great period in the first few decades after the war. Nutrition has gone backwards in recent decades.
Math, computer science, software, a lot of electrical engineering also saw a lot of progress, but it's irrelevant to the discussion. There's no Nobel prize in those fields.
BTW, Einstein never got a Nobel for special or general relativity.
I don't think it's fair to single out particle physics. The Standard Model is one of the best tested and most accurate theories we have. While there are certainly problems with it, it's a bit like saying that explorers these days aren't advancing as quickly as they did in the 1600s because we aren't discovering new islands at anywhere near the same rate.
Modern genetics is amazing. The new genome sequencing techniques have (among many things) helped us understand evolutionary history in a way that people didn't even dream of 75 years ago.
Astronomy has also undergone explosive growth in the last few decades. The discovery of new exoplanets doesn't even make the news any more.
The Nobel Prize for Physics in 2017 and 2011 were astronomy projects.
Some of the recent EE-related Physics Prizes were 2014 ("efficient blue light-emitting diodes"), and 2009 (fiber optics); at least one person in each case was an EE.
Ugh, I always hate hearing the griping from the people who didn't win the Nobel Prize. Get over it; it's not an objective listing of the best scientists of all time. It's a human process driven as much by circumstance as by talent. It's like politicians griping about losing elections, or middle managers complaining about being skipped over for a promotion.
Richard Feynman