The job market right now in the US forces grad students to attempt innovative research with remarkable conclusions. As a consequence, everyone's trying to prove more outlandish things while few are bothering with replications.
If the field were sane, you would train all the apprentices on replication studies. Once they demonstrated dispassionate expertise with the tools, only then would they be allowed to try to use those tools to test their own ideas, where they would have a strong emotional preference for how the study comes out.
If universities hired grad students based on their replication work, not on their eye-popping original research, we'd have better science and better scientists.
That is nearly the opposite of what the PhD was designed to do. Modern academic training grew out of the church and the old guilds of central Europe, and the same model is still in use today in many trades (chefs and plumbers, to name a few).
The Bachelor's degree is loosely similar to an apprentice's role. The young boy (they were almost exclusively male) worked in a shop or with a priest for some time. He learned the trade, the tools, and gained some experience from 'level 0'. When the apprenticeship was done, he was 'cleared' to work in other shops and was known not to be a total moron, break tools, or burn down shops.
The master's degree is just that. You are considered a master of the craft (like plumbing or printing) or the discipline (like The Book of Mark or Crusader History). As such, you typically have a master's-level project: something that is 'new' or shows that you know your stuff. That might be a very decorative silver bowl or a thesis.
The Doctorate means you are 'world class.' Not just a master of a field, but a paragon of it. Today, that means that you are the expert in your little niche of underwater basket weaving. There should be no one better than you. This means you MUST have produced something new, or a novel way of thinking about God, or something. This has always been the idea, if not the practice.
To change that and say that the doctorate should work like the bachelor's is a very big shift. To suggest that PhDs should just replicate experiments is anathema to the idea of graduate education and would be a tremendous waste of time and energy. When you enter the PhD, you are assumed to already know how to do all the replication and to know the facts of the field. Granted, fields are exponentially larger than they were in the 1600s, but you still should know stats and biology if your PhD is in cancer biology.
I think you are totally wrong about this. What you are suggesting should be covered in undergrad and I think it largely is.
The problem is that nobody's doing replication work in undergrad, either, nor does it happen in Master's programs.
I do think it makes sense to stick with the PhD meaning you're a world class expert in some area, but if so then we need to adjust our expectations for what Master's level work means in the sciences. Right now it seems to just represent a hurdle you need to whiz past on your way to the PhD.
A good Bachelors degree will prepare a good student for replicating science, and a good Masters degree will definitely leave a motivated and skilled student with a good advisor a master of his or her specific field.
The problem is that grade inflation means the majority of students will fall short of these goalposts. I agree with your assessment that undergrad degrees represent hurdles, regardless of whether a student is planning to stay in academia or not.
My experience with getting a Masters degree was that it was really tough work that required my full dedication for two years. But I had a world-class scientist as an advisor breathing down my neck the whole time and expecting results, and my experience doesn't seem to match that of many other MScs I know. Some departments seem to be "degree factories"; it takes an unreasonable amount of effort to follow up students in the classical "apprenticeship" tradition described by GP. It would be very strange if every department at every university managed this level of dedication, with student numbers being what they are.
> a good Masters degree will definitely leave a motivated and skilled student with a good advisor a master of his or her specific field.
I'm having trouble believing this.
In the UK, most Masters degrees last 1 year, and there are several degrees considered good, such as Imperial's MSc in Machine Learning, Cambridge's MPhil in Machine Learning, Speech and Language Technology, Edinburgh's MSc in Cognitive Science, and others. Is it really possible to become a "master" of machine learning in one year?
Also, at least in Computer Science, most of the Bachelors degrees considered good in the UK do not seem to focus at all on replicating science. In fact, for my final year undergraduate project, I was encouraged to find something novel, and at no point did my supervisor hint towards focusing on replicability.
An MSc is not an MSc. It can commonly vary from 1-3 years, depending on institution. I was thinking of the two-year variety.
Of course, you can define the term "master" to mean pretty much whatever you like. But I'd say that two years of additional, focused study when you are already proficient in your field should be more than enough to have a mastery of the specific skills and knowledge that is at least at a high national level. I'm from Norway, so the US picture is unknown to me.
It's really dependent on the department, but yeah, the MS is now the HS degree for a lot of places. Especially in Engineering. Good luck trying to get hired with only a BS. Credential creep is real. More here: https://en.wikipedia.org/wiki/Credentialism_and_educational_...
Everyone's doing replication work in undergrad, it just happens to be replication of the "highlights" or most important results. Farming out the replication of every result to undergrads would, one, be a practical/logistical nightmare, and two, lead to less reliability in what a degree means. One student could spend their time working on an important result, gaining tons of insight and expertise, while another could be stuck replicating a task that turned out to be worthless bullshit and have very little to show for it.
> Farming out the replication of every result to undergrads would, one, be a practical/logistical nightmare, and two, lead to less reliability in what a degree means.
Actually I think farming out replication to undergrads would be an excellent approach. Your final year undergrad project should be to choose an under-replicated study and repeat it, publishing your findings. Each individual study might be less reliable if done by an undergrad than done by a seasoned researcher, but if each study is repeated by say 5 undergrads and 2+ of them fail to replicate the results, that would be enough to indicate that the study warrants further attention.
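To put rough numbers on that intuition, here is a toy back-of-the-envelope sketch (the per-replication error rates are pure assumptions, chosen only for illustration). The point is that even if individual undergrad replications are noisy, requiring agreement among several of them can separate sound studies from flawed ones reasonably well:

    from math import comb

    def prob_flagged(p_fail_each, n=5, threshold=2):
        # P(at least `threshold` of `n` independent replication attempts fail),
        # with each attempt failing independently with probability `p_fail_each`.
        return sum(comb(n, k) * p_fail_each**k * (1 - p_fail_each)**(n - k)
                   for k in range(threshold, n + 1))

    # Assumed error rates, purely for illustration:
    #  - a sound study still "fails" a noisy undergrad replication 10% of the time
    #  - a genuinely flawed study fails an undergrad replication 60% of the time
    print(prob_flagged(0.10))   # ~0.08: few sound studies get flagged
    print(prob_flagged(0.60))   # ~0.91: most flawed studies get flagged

Under those assumed numbers, the "2 or more of 5 fail" rule rarely flags sound studies but flags most flawed ones, which is all the signal you need to say the study warrants further attention.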
> One student could spend their time working on an important result, gaining tons of insight and expertise, while another could be stuck replicating a task that turned out to be worthless bullshit and have very little to show for it.
The whole point of science is that we don't know what will turn out to be an important result and what will turn out to be worthless bullshit. No study is worth a damn unless it's been replicated but everyone is too busy trying to land-grab the next little piece of unexplored territory to actually validate anything that comes before.
If nothing else, we need to regain the perception that a negative result is just as important as a positive result - to paraphrase Edison, discovering 100 things that don't work is just as important as discovering one thing that does.
However, PhD students on average are very far from world class. And in just about every case they are simply looking at a problem so unimportant that nobody considered it before, and most likely nobody will ever look at it again.
It's almost always a waste of time for both the student and everyone else involved.
PS: There are plenty of counterexamples where PhD research happened to be valuable, but that's a tiny minority of cases.
You could require both reproduction & novel work for the PhD. Or you could require reproduction for the MS, as a stepping-stone to novel work.
Today the MS & PhD are both supposed to prepare you to do novel research & science. Since reproducibility is a core part of research & science, it would make sense for you to reproduce another study as part of your learning process.
History isn't a useful guide when our current system is broken; it only reiterates what we've been doing wrong all along. If you really think replication is a waste of time, though, then you won't understand why journals are filled with junk science.
We used to think placebo controls and double blinds were a waste of time. One great thing about the history of science your version excludes is that the way we do science is subject to review too, and we continually throw out what doesn't work in favor of methods that do.
Academia has always forced grad students to attempt innovative research. The core requirement of a PhD dissertation is "novel findings"—i.e., something completely new.
Apprentices work on papers and research for their professors. No one is incentivized to do replication work: either you replicate it successfully (great) or you find flaws. In the latter case, you'll probably be seen as a nit-picker anyway—also not really additive. On the off-chance that you refute a high-profile study (e.g., some of the outright instances of fraud), you might get some recognition, but now your name is associated with something negative (e.g., fraud: "X is not true") vs. positive ("X is true").
Finally, this is some of the role of review panels in journals: to pass the burden of proof.
A lot of the time those same students are doing the bulk of the legwork for those findings then attributed to a professor, who is under greater pressure to advance (not confirm) science, and who may or may not have even participated heavily in the grunt work of that research. The professor then publishes the work under his/her name, with no credit to the work of the students.
> A lot of the time those same students are doing the bulk of the legwork for those findings then attributed to a professor, who is under greater pressure to advance (not confirm) science, and who may or may not have even participated heavily in the grunt work of that research. The professor then publishes the work under his/her name, with no credit to the work of the students.
It's important to note that though this seems to happen in all fields, it is far less common in some. I rarely hear of such things in astronomy; it does happen, but more often I hear about faculty explicitly working to ensure they can protect projects for their students so their students can get the credit deserved for doing the project. I hear of professors appropriating student research in biology and chemistry more frequently, however.
The practice of professors taking and publishing student research is certainly a problem, is unethical, and should be stopped. But the way in which it is usually discussed ("in science") implies that it's a systemic issue across all science, which (from my experience) isn't true. This topic of credit and attribution for research deserves more nuanced discussion and fewer blanket statements.
I think it does vary a lot by discipline. In my area of mathematics I don't recall ever hearing of it. For whatever it is worth, I even published papers without my supervisor being on them at all, when he hadn't been involved in that work.
How does attribution and second/third authorship work if you "just" bounced ideas off your supervisor or fellow students in verbal discussions? I'm trying to gauge how strict attribution rules are in what I would consider a gray area.
In mathematics, multiple authors are always listed alphabetically by family name; i.e., there is no "first" or "second" author.
In the situation you describe, the paper would probably be singly authored, and the author would write something like "Thanks to my advisor _____ and to my colleagues ____ for many helpful discussions" in an acknowledgments section.
If the contributions were more serious, then possibly the author would invite the others to co-author with him/her, and the others would then either accept (and then help write the paper) or politely decline and say "just mention me in the acknowledgments".
At least in my experience, co-authorship carries responsibilities: to help with the writing, the figures, references, dealing with editors and with submission to journals, speaking about the research at seminars/conferences, etc.
In life sciences, the last author is almost always the person whose grants paid for most of the research. Usually, they also helped supervise the research, but the grant aspect is more important.
For example: Mike Snyder, a brilliant biologist, 'supervises' 36 postdocs, 13 research assistants, 11 research scientists, 9 visiting scientists, and 8 graduate students (http://snyderlab.stanford.edu/members3.html - thanks to Lior Pachter for noticing it).
In 2014, he had 42 published papers. How much scientific input do you think he had on each one?
> What are the differences between postdocs, research assistants, research scientists and visiting scientists?
These are some working definitions, based on my experiences in astronomy:
Postdocs: researchers with a PhD who generally have fixed-term contracts. Generally these positions are full-time research, though some may include teaching components.
Research assistants: researchers, typically without PhDs, working in a group/lab. This is frequently a synonym for undergraduate or graduate students, or interns. It can also mean people with technical skills who are working in a group/lab but not working towards a degree.
Research scientists: Non-tenure track researchers, often with PhDs. Their positions may be fixed-term or indefinite.
Visiting scientists: Researchers whose primary affiliation is with another institute. They may be fixed-term visitors (e.g., faculty on sabbatical at another University) or frequent but non-constant visitors of an institution. Their salary is often paid by their primary institute, unless the host institute has provided funding.
>In mathematics, multiple authors are always listed alphabetically by family name; i.e., there is no "first" or "second" author.
Is there any research showing this standard to be fair? People pay more attention to the first item of a list than to the middle ones, making me think that one's position on such a list could have a small benefit. This matters less if there are fewer multi-author papers overall, which seems to be the case from the rest of your comment, but it is still a factor as long as papers with multiple names do happen.
> Is there any research showing this standard to be fair?
I don't know of any.
> This matters less if there are fewer multi-author papers overall, which seems to be the case from the rest of your comment,
Actually, I think multiple authors are far more common than single authors, though I don't know the numbers to back that up. I was responding to a very specific "what-if" scenario of bouncing ideas off someone else, and describing what would happen in that case.
I was thinking the same thing. I think math lends itself to this because the work is so esoteric that often the advisor doesn't even completely understand the work, much less can claim credit for it.
I think the size of labs is more important. Bio and Chem labs -- especially the best ones -- are often freaking huge. There's no way the PI actually has time to meaningfully contribute to every paper when they are bringing in enough money to support dozens of full-time positions.
In math, the group sizes are way smaller. A PI might only have 1 or 2 students.
If the field were sane, we would not expect breakthrough results, or any results, all of the time. We would happily say to scientists, thanks for your continued efforts... and allow them the money and time to do real science. But this is not how the crazy Capitalist world works. You have sociopaths who run these institutions, always barking orders and demanding results. So the poor employee gives them what they ask for, and they say good boy, and give them a treat.
If the work you are doing isn't reproducible, then it won't be cited as the building blocks for future work. The system cleans itself out as a side effect. Therefore I prefer students to work on cutting-edge research.
Yes that may be true. Instead of trying to reproduce it, they attempt to build upon it. The projects which successfully build on previous projects become original research articles and the cycle continues.
tl;dr: A false technique can be described, and it can be hard or impossible to detect that the technique is flawed by using it.
For example the paper at http://www.jstor.org/stable/222500 describes a method of using the stationary bootstrap to eliminate data snooping bias in studies of "technical analysis" in finance.
Published in the Journal of Finance in 1999; by the time I worked with it in 2010, it had been cited over 500 times.
The proof in the paper is inscrutable. I could find no one who could explain or verify the proofs at my institution.
I attempted to reproduce the results in the paper which is where the problems started. The authors did not give enough information to do this in all cases, but in some cases I was able to reconstruct the algorithms.
They did not perform as described. Of the five algorithms I could reproduce (from memory) one of them worked roughly as described and two others were not completely hopeless.
Looking more closely, I realised that the original authors completely disregarded an important factor in the implementation of the techniques described by the algorithms: transaction costs. Allowing for transaction costs (a difficult but not impossible task), the effects noticed by the authors disappeared completely.
Looking even closer, I examined the assumptions behind "White's Reality Check", which the paper relied on: the Stationary Bootstrap by Politis and Romano from 1994 assumes the data are stationary. But financial returns are not stationary. Not at all.
So dodgy logic, misuse of statistics, irreproducible experiments, ignoring important aspects of the data and I suspect wishful thinking add up to a paper that is comprehensively false. Cited hundreds of times and used many times to verify other results.
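For readers unfamiliar with the resampling scheme at issue, here is a minimal sketch of the stationary bootstrap of Politis and Romano (1994) - not the paper's code, just an illustration of the mechanism whose stationarity assumption I'm objecting to. The block parameter p is an arbitrary illustrative choice:

    import numpy as np

    def stationary_bootstrap(returns, p=0.1, rng=None):
        # One resample of `returns` using geometric (mean 1/p) block lengths,
        # wrapping around at the end of the series.
        rng = rng or np.random.default_rng()
        n = len(returns)
        out = np.empty(n)
        i = rng.integers(n)            # random starting index
        for t in range(n):
            out[t] = returns[i]
            if rng.random() < p:       # with probability p, start a new block
                i = rng.integers(n)
            else:                      # otherwise continue the current block
                i = (i + 1) % n
        return out

    # The resampled series preserves (approximately) the dependence structure of
    # the original only if the original series is stationary. If volatility
    # drifts or regimes change, the bootstrap distribution of a trading-rule
    # statistic - and any "data-snooping-robust" p-value built on it - can be
    # badly misleading.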
Which proof exactly are you talking about? I briefly looked at the paper (I've seen it before, but it's been quite a while...), but it seems that they pretty much use a previously known approach, and the only proof in it is simply "replicated for convenience of the reader". Also, this would certainly be neither the first nor the last paper that ignores transaction costs, and their omission does not really invalidate the argument (even if you cannot profitably trade on an anomaly, why is it there in the first place?), so I don't think you can accuse them of bullshit just based on that.
Non-stationarity is a problem though. Still, you need a bit more to call complete bullshit on this imo -- e.g. say something like "after switching to a different bootstrap method that works in presence of stochastic volatility the result suddenly disappears". Perhaps that is what you do in your paper :) I should take a look.
All that being said, not many people believe in technical indicators working in equities these days, or anywhere really (FX was a bit of a holdout -- not sure if it still is?), so perhaps science has kinda sorted itself out in this case :) I've seen far worse cases of snooping and non-replicability though, and some are still going strong.
Non-stationarity is enough to call bullshit. Really! How can a stationary bootstrap be used on data (financial returns) that are so prone to non-stationarity?
Irreproducibility is also enough to call it very bad; it should not have been published in that form. They talked a lot about their algorithms without properly describing them.
Ignoring transaction costs is also enough to call bullshit. It is a mistake that should only be made by rank amateurs, and it is the most common mistake made by amateurs in the Technical Analysis field IMO.
It is a very, very bad paper, but because it gives a technique that can be used to show that TA is possible, it is much beloved by researchers in the field.
My own conclusion is that (generally) TA is not possible to do profitably at these time scales.
"Non-stationarity" is really an umbrella term; the truth is, there is no single authoritative model of asset returns, and really there will never be one. This should not preclude all statistical analysis though, and it is done by making simplifying assumptions, just like in every other case, and in every other field. Your claim is that they are too strong in this case, but it's a claim that can be fairly easily shown empirically or in simulations, and it really should be IMO, particularly since other bootstrap methods exist.
I agree about algorithms; rather unfortunately, this is true in more than this one paper. There certainly is movement towards requiring people to make their code fully available, but we are not quite there yet. But if you describe your failure to replicate, this is definitely a strong argument that the authors would IMO need to address.
Ignoring transaction costs would be a major problem if the paper's main point was "we found a strategy returning X% above market, it's awesome and people should give us money" -- but this is written for a very different purpose and audience. That being said, today it would not be published without a transaction cost analysis -- but I would have no problem with them saying "with such-and-such costs, profits are not there any more", it would not invalidate the paper at all. But at the time it was written, TC analysis was not as standard in academic literature as it is now.
I agree with you about TA, and TBH most serious researchers are of the same opinion, and have been for a long time -- even at the time of publication, it was a bit of an outlier, and this is not a particularly popular area of research (how many of those citations are in recent top journal articles?). Forex was a bit of an open question last time I checked, but it's been a few years, not sure if it still is.
> Yes that may be true. Instead of trying to reproduce it, they attempt to build upon it. The projects which successfully build on previous projects become original research articles and the cycle continues.
Whether this "works" or not depends on how the previous projects enter into it. If the results of previous projects are used as assumptions to justify the methods, data, etc. of the subsequent project, there is no check and we risk the research becoming a house of cards which could collapse due to faultly, untested assumptions that were used.
If the subsequent projects are performed in such a way that they also test the previous results/assumptions, this can be avoided. I can't tell from your wording which you are suggesting, though it seems to lean towards the former.
And then there is the bullshit that permeates from science research into bullshit blogs that people later use to guide their decisions (and that editors/writers use to drive ad impressions): what to eat or not according to recent research, how to raise children or not according to recent research, how long a workout, how to be happy, how to this and how not to that. The bullshit then jumps from blogs into everyday conversations and daily life, into arguments over coffee (how much coffee is good, by the way? According to quotable recent research, that is). The most recent bullshit that really makes me lose hope is the don't-vaccinate-children bullshit.
I see an entire civilization confused by a ubiquitous mass communication tool which they invented to do exactly the opposite: enlighten them.
"I see an entire civilization confused by a ubiquous mass communication tool which they invented to do exactly the opposite: enlight them."
There was never a stream of pure, refined truth available that just somehow lacked a distribution method. All there ever was was a confusing mixture of uncountable varieties of different possibilities, theories, and facts of varying veracity. Now you can have more direct access to that confusing mixture, with all the attendant privileges and responsibilities.
There were things that claimed to be streams of pure, refined truth. Those claims didn't become false. They always were false. Now you can tell that better. It may not feel like progress, but it is.
> There was never a stream of pure, refined truth available that just somehow lacked a distribution method
Academic educational programs have exactly been the source-without-a-scalable-distribution-method for at least the past century.
You're unlikely to find a treatment of mathematics or physics or even Computer Science that is as well-curated, free of coercive bias, and well-presented as it often is in the undergraduate programs at universities that care deeply about their educational programs. (Such universities and colleges do exist; they unfortunately also tend to be expensive, selective, and not always well-represented in top N lists -- especially in the US.)
The reason the distribution method of university education is lacking has more to do with economics than anything else. Hiring truly high-quality people to teach small groups of people difficult content in a rigorous way is expensive. If you skimp on any of those features, the quality of the end result goes way down (cf. typical MOOCs and the university courses to which they are purportedly equivalent, in pretty much any dimension).
> There were things that claimed to be streams of pure, refined truth. Those claims didn't become false. They always were false.
Again, I fail to see how this critique applies in any meaningful way to high-quality undergraduate education programs in hard sciences. Nothing is perfect -- and in fact I doubt any of those programs ever claimed to be "streams of pure, refined truth". But they come far closer than your comment seems to suggest.
Now, going back to the contents of the article, universities certainly aren't "streams of pure, refined, and new truth". Such streams very likely don't exist.
They are talking about how the internet, which was once seen as a conduit of 'truth', is just a conduit which also spreads bullshit, and that there is no 'truth'. You are talking about education and scientific rigor as a means of talking about truth. While what you say may be true, it doesn't seem applicable to their arguments.
> They are talking about how the internet, which was once seen a conduit of 'truth', is just a conduit which also spreads bullshit
That's what bikamonki was saying.
jerf's comment was tangential to the central thesis of bikamonki's post, and stated "There were things that claimed to be streams of pure, refined truth. Those claims didn't become false. They always were false"
> You are talking about education and scientific rigor as a means of talking about truth
No, I'm stating directly that there exist educational institutions that, through educational programs, transfer truth from one person to another.
> Instead, “It’s about a methodology for investigation, which includes, at its core, a relentless drive towards questioning that which came before.” You can both “love science,” he concludes, “and question it.”
I've noticed that questioning some scientific finding often makes people think I am either anti-science, anti-intellectual, conservative, religious, or any combination thereof (I am none of these). To state quite the opposite, I think not questioning science makes you religious - you are putting faith in the findings, rather than disputing them or scrutinizing them with the scientific method (not that I think there is anything wrong with faith or religion, within their own realm).
I consider pseudo-intellectual leftist dogmatism equivalent to the religious right - but it doesn't mean I treat every bit of questioning/skepticism as equally valid or productive.
When you're discussing anything nontrivial, it takes a lot of effort/knowledge to dispute a flawed argument (disproportionately more than making one, IMO) - disregarding someone based on their biases/agenda is a decent heuristic.
If the science has "social" before it, it always needs to be questioned - too much political interference. The entire field is low in actual results, high in political tooling..
True. But social science is where the uncertain standards and political gains are. No one can, or cares to, taint research into fundamental particles. Studies into any kind of demographic are a different case...
I find it vaguely amusing and ironic that he cites rationalwiki on the gish gallop, given that my impression is they do this themselves a bunch.
(Note that running a gish gallop doesn't mean you're wrong, it just means you're intellectually dishonest.)
(I try not to pay too much attention to them, and I'm not super interested in refuting their bullshit, so this comment is going to be pretty unsatisfying. Sorry.)
You can't make a comment like this with absolutely zero explanation other than that it was 'your impression'. I understand the whole refuting bullshit is a lot of work thing, but it's equally irresponsible, if not more so, to make a comment like this with no evidence for a claim. At least give us something!
Never heard of rationalwiki until now, but particularly in context of this discussion, it would be nice to see actual examples of what you are talking about (it seems that you are talking about them selectively withholding specific and important evidence -- I think it should be quite easy to give examples of that, no?) :)
I haven't read much on rationalwiki, but nothing I did read struck me as gish gallop. They might have a lot of angles of attack on some bunk concept, but they expand upon all of them.
I haven't found them guilty of gish gallop, but some of their articles are very biased and one-sided. They seem to be a reliable source on most issues of philosophy, science, and "woo", but anything political, societal, or remotely controversial usually isn't given a balanced view.
It's worth asking how much you already know about the subject. It's easy to make an article that looks convincing and devastating to someone with no prior knowledge, but laughably incomplete to someone in the field.
For example, a cited claim could be contradicted by the citation, or missing valuable context provided by the citation, and you'd need to look up the citation to discover this. Or the citation could be bogus, and you'd need to do further research. Or the claim could be a really trivial objection and its reply omitted, and again you'd need to do further research.
I'll accept that not all of these things necessarily make for a Gish gallop as such. But they do all seem to be in the broad category of "intellectual dishonesty through weak argumentation which is easier to perpetuate than refute". I'm not going to quibble about how to subcategorise that.
I'm reluctant to give specific examples, because that risks starting an argument about those examples. Hopefully this thread is now old enough to avoid that.
---
So for example, we can look at http://rationalwiki.org/wiki/Cryonics (permalink in case of edits: http://rationalwiki.org/w/index.php?title=Cryonics&oldid=159...). Just skimming it, the "engineering problems" section keeps referring to "freezing". Trivial objection: "freezing damages the cells!" Omitted reply: "yes, so we moved on from freezing". It does also talk about vitrification, but it makes no particular effort to distinguish between the two or to tell the reader that freezing is no longer current practice.
Elsewhere, "Alcor Corporation calls cryonics "a scientific approach to extending human life" and compares it to heart surgery.[8] This is a gross misrepresentation of the state of both the science and technology and verges on both pseudoscience and quackery."
What the citation actually says is: "Cryonics, like heart surgery, is a scientific approach to extending human life that does not violate any religious beliefs or their principles." They aren't making the comparison that RW wants to paint them as. They're sort of hinting in that direction, but they're not making any actual scientific claims here. (They do make scientific claims elsewhere. If they ever say that cryonics has the same chance of success as heart surgery, RW should feel free to call them on it. I'm pretty sure they never say that.)
Their citation for "Some advocates literally propose a magic-equivalent future artificial superintelligence that will make everything better" is not someone proposing that (later in the thread, he says nanobots would be sufficient but not necessary). You might want to read the citation for more replies to RW-level objections to cryonics.
I can tell you that "Belief in cryonics is pretty much required on LessWrong to be accepted as "rational."" is simply false, and probably wasn't true when it was written. The citation leads not to a survey of LWers opinions on cryonics, but to the opinion of the founder of LW.
Note that these objections are true, and reflect badly on RW, even if cryonics is complete bunk.
And also note that at no point did I follow a citation on that article and then decide "no, this seems fair". I picked things to follow according to what I expected to see, but I was never pleasantly surprised.
(In the interests of fairness, I'll say that I am pleasantly surprised they don't call cryonics a scam. RationalWiki: not quite as bad as it could be.)
And I've spent over an hour on this now, when I should have gone to bed. So I'm done.
"In the case of Lord Voldemort, the trick is to unleash so many fallacies, misrepresentations of evidence, and other misleading or erroneous statements — at such a pace, and with such little regard for the norms of careful scholarship and/or charitable academic discourse — that your opponents, who do, perhaps, feel bound by such norms, and who have better things to do with their time than to write rebuttals to each of your papers, face a dilemma. Either they can ignore you, or they can put their own research priorities on hold to try to combat the worst of your offenses."
So the scientific debate equivalent of the Trump campaign.
I was just coming here to cite the Trump campaign as an example. The unfortunate thing is that the amount of effort to counter this bullshit is significantly higher than the initial utterance, and most likely it will not reach the same audience.
For instance, Trump utters something completely off the wall. Retweets galore. Everybody gets spun up. Let's say it reaches 1 million people, some portion of whom now repeat the soundbite. Someone like PolitiFact comes along after the fact and points out the discrepancy after fact-checking. It reaches a much smaller follow-up audience.
This is the sort of informational asymmetry that I find infuriating...
> I'm a bit tired of seeing critics of various hypotheses being dismissed out of hand simply "Because Science".
While this is certainly an issue, more often than not I see "critics" claiming broad statements without even trying to verify/falsify their claims scientifically.
And if I have to choose between two sides, where one tries hard to fulfill scientific requirements while the other doesn't care, I'm certainly on the scientific side - even though our current academic landscape is far from perfect.
I would like to point out that you are missing one of the most important stances: one of probability. Sometimes you don't know, but factors indicate a percentage probability, which should be updated as the data changes.
So few understand that most "real life" things involve inductive logic (i.e., probability), because real life is simply too complex and variable to give absolute deductive proofs of most things. You can still come to solid conclusions, but those conclusions are still "most probably true" rather than "absolutely true."
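As a toy illustration of what "update the probability as the data changes" looks like in practice, here is a minimal Bayesian-updating sketch; the likelihood numbers are made up purely for illustration:

    def update(prior, p_evidence_if_true, p_evidence_if_false):
        # Posterior P(claim | new evidence) via Bayes' rule.
        numerator = p_evidence_if_true * prior
        return numerator / (numerator + p_evidence_if_false * (1 - prior))

    belief = 0.5   # start agnostic about the claim
    # Each pair: P(this observation | claim true), P(this observation | claim false).
    for p_if_true, p_if_false in [(0.8, 0.3), (0.7, 0.4), (0.2, 0.6)]:
        belief = update(belief, p_if_true, p_if_false)
        print(round(belief, 2))
    # The conclusion stays "most probably true/false", never "absolutely true".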
That's a kinda weird thing, really. In science, the probability of a given theory being true is usually not very important. Usually you operate with the best guess that you have, or the best guesses that you have. Until shit gets proven. You could argue that "probability" is a name for this best guess, but scientific probability is a mathematical term that requires you to be able to calculate some sort of numerical value for your probability. You usually can't do that for the "most viable alternative". For science this is not a problem, because science only cares about "truth" and there are no deadlines for finding it, or necessary decisions that have to be made before 2030. Allocating probabilities would serve no purpose other than to bias researchers.
In politics "probable" is fuzzier concept and often just a cop out. You can justify anything with that idea, but you have no responsibility afterwards (damn, we went to the 5% range after all, sorry!). And you don't need any kind of real sources, just authority or lack of imagination suits just fine.
When we wander into personal world views or casual conversations, assigning probabilities becomes a really good idea. But then it's often more about whose authority you trust most than about what data you can actually consider.
And at the end of the day, most people still gravitate towards either "extremely likely" or "incredibly unlikely" most of the time. You just use likelihood as a way to show that you actually considered that your statement might be false, but you don't actually consider it to be.
This psychological tendency for people to make up their minds, and then not change their minds, is actually at the heart of the problem. These "critics" could not have a mass following for their bullshit ideas without it. But currently lots of people fail to understand that just providing half-decent criticism against an idea does not prove the opposing idea true. Because human beings are naturally bad at not making up their minds.
Actually the critics should be dismissed as soon as they criticize scientific findings without demonstrating their knowledge and valid and effective use of the scientific method.
Somebody's ignorance of the background of some scientific claims is absolutely not a reason to even spend the energy considering their claims of "wrongness." The only thing we should consider is how we can educate as many people as we can, but we certainly should not treat them as having any contribution to the understanding of the topic.
The modern concept of "journalistic balance" (in reality, trying to win the eyeballs by creating "conflict") by representing 99% percent of all world scientists with one person and some group with silly claims with another person and giving these two then the same air time or coverage space is exactly one of the things that produce this effect:
"The amount of energy necessary to refute bullshit is an order of magnitude bigger than to produce it."
Also never ignore the "accidental" fact when the "silly" group represents the interests of the people with immense amount of wealth and/or power, or potential financial or power gain in maintaining the "controversy." That's where the events really get nasty.
For all of its flaws, the scientific method is clearly the correct way to perform such criticism. It isn't perfect (it isn't close), but it is orders of magnitude more effective than any other approach humans have ever tried - assuming applicable area of study.
So that leaves us with the critics. If you can't be bothered to do the work and are taking potshots from the sidelines there really isn't likely to be much value in your criticism.
It's not impossible that there is value there, but the odds aren't very good at all, so you can't really blame people for not engaging with it easily. I agree it is unfortunate when people are quickly dismissive of an idea with "Because Science". But it remains true that by far the best answer to "Because Science" is, "Not so fast, what about Science?"
Picking at potential flaws in a study rarely adds much signal. Suggesting improvements and helping make those happen has potentially immense value.
Arguing about it on the internet usually has negative value.
Yes but in fairness there is a fair amount of BS in any area of human endeavor. There are BS movies, BS politicians, BS music, BS programming languages; it's certainly not unique to science.
Science is a movement that's about simplifying reality to discover principles. (Examples: Galileo ignored friction and air resistance to get at principles of motion. And if something's too complicated, the physicist hands it to the chemist, who hands it to the biologist...)
There's a link between that and critical thinking; one of the first theoretical physicists wrote: "The duty of the person who investigates the writings of ancients [scientists?], if learning the truth is their goal, is to make themself an enemy of all that they read, and ... attack it from every side. They should also suspect themself as they perform their critical examination of it, so that they may avoid falling into either prejudice or leniency." — Alhazen, ~1000, Cairo
However, since science is a social enterprise, it's vulnerable to institutional bullshit. Certainly, Alhazen had to pretend madness to avoid angering the local Powers That Be. Nowadays, scientists must contend with all sorts of brokenness at universities; managerialism/bureaucracy, class/sexism/racism, etc.
>... He Who Shall Not Be Named predictably rejects all of the studies that do not support his position as being “fatally flawed,” or as having been “refuted by experts”—namely, by himself and his close collaborators ...
This speaks to part of the problem - the undue weight that non-scientists place on expert opinion. Trained scientists see appeal to authority arguments for what they are: bullshit.
I see this most frequently in areas for which few controlled studies are available to light the way. Human nutrition and toxicology come to mind. Oddly enough, these are the areas that are most likely to be of interest to non-scientists, setting up a vicious cycle of guru-ism complete with economic incentive to continue spouting nonsense.
>>the undue weight that non-scientists place on expert opinion
Speaking as a non-scientist, I can recognize an appeal to authority probably just as well as a scientist. But having recognized one, what do I do? I lack the training, knowledge, and time necessary to evaluate the research directly. I can choose to only trust studies that are peer reviewed, or in major journals, or backed by whatever relevant government body there might be, or that my friend who knows about this thinks are right. And maybe that's a good idea, but it's still just appealing to different kinds of authority.
Most of the time laypeople have no realistic alternative to expert opinion.
The alternative is consensus. As part of my research I found that a particular method of estimating disease prevalence in small geographic areas performed better than this other method when tested on real data from schools.
The tendency of the general public would be to look at that study and say the first method is better than the second method (assuming the general public would care at all, which they don't). That's incorrect. I would only conclude that method is actually better if several other people found similar results for similar methods in separate studies.
People tend to place far too much importance on one paper or one study. In theoretical research this can be okay sometimes, but in applied research this is almost always the wrong way to go.
Consensus seems to be the best available alternative, but it still doesn't seem to be very good in many cases. As the article pointed out it is not difficult to manufacture enough bullshit that it looks like a "competing consensus." Prime examples of that being the anti-vaccination movement, climate change deniers, and the pro-smoking studies from cigarette companies.
Counting up and evaluating all the studies is itself a time-consuming task, hence the existence of meta-analyses. But then you get into duelling meta-analyses, and you have to choose which one(s) to trust, and you're back to expert opinion and appeals to authority.
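For what it's worth, the mechanical core of a meta-analysis is simple; here is a minimal sketch of fixed-effect, inverse-variance pooling with made-up numbers. The duelling comes from everything around this step: which studies to include, how to weight them, fixed vs. random effects, and so on.

    import numpy as np

    # Hypothetical per-study effect estimates and their sampling variances.
    effects   = np.array([0.30, 0.10, 0.45, 0.22])
    variances = np.array([0.04, 0.02, 0.09, 0.03])

    weights   = 1.0 / variances                      # inverse-variance weights
    pooled    = np.sum(weights * effects) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))
    print(pooled, 1.96 * pooled_se)                  # pooled effect, 95% CI half-width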
And yet otherwise reasonable people embrace the concept of "scientific consensus", most notably in the man-made global warming discussion. This is nothing but an appeal to authority. While it is true that good ideas are commonly accepted by a majority of researchers in the relevant field, it is not 100% guaranteed. It is such a shame that we reach for the "scientific consensus" argument when there is no need, given the overwhelming evidence for man-made global warming. It's lazy, sloppy, and encourages groupthink over rational inquiry.
Except the case for anthropogenically induced global warming is just not nearly so airtight as those constantly talking about the "overwhelming scientific consensus" would have you believe.
Note; not "climate change" in general, or even just "global warming" but the specific hypothesis that it's all down to co2 emissions by humans, invariably followed up with a proposed solution to the problem consisting of an increase in state power and interference into the markets in order to avert certain catastrophe, that has the neat side effect of allowing the political authorities of the world to impose yet another tax on almost all economic activity on the planet.
And the harder that expanded problem/solution combination is critically evaluated, the more all of the above gets neatly rolled up into a label like "climate change".
It's not "undue". It's a heuristic. It lessens the amount of effort one needs to get to a semblance of truth rather than to do all of the heavy lifting oneself for more precision. As mentioned by another commenter.
Part of the way through earning a degree in social science I found I had acquired an amazing power. I can take almost any research paper in a social science and poke enough holes in it that anyone who doesn't like the results will no longer accept it.
For anyone thinking, "The words in the title sound sort of familiar, but I'm not sure why", it's a reference to "The Unbearable Lightness of Being", a great book by Milan Kundera which translated into Daniel Day-Lewis' worst film.
I believe this is the primary issue, and it stems from one of two causes:
1) secret sauce in research -- details are lacking because there is a push to commercialize things that come out of academia
2) insufficient experimental design -- small sample size, poor controls, etc.
I would like to see an open publication where, as a part of publication, the result must be reproduced in a separate independent lab or two. This would almost double the required funding (maybe less, because you eliminate false starts). Maybe just a few institutions could handle many reproductions.
There is a bit of a self-healing aspect in that the non-reproducible and non-interesting/advancing studies just get dropped on the floor. However, it would lend a lot of credibility to a journal that required an independent research confirmation.
Another tactic I've seen from the Lord Voldemorts I know is to cite loads of references that aren't readily available online. Such a citation can be used to prop up _any_ argument, whether or not the citation actually supports the argument, or even has any bearing on it at all. It's the same trap, though. To dispute the citation, you have to wait months for an inter-library loan, read the cited work in detail, and then decide what it really has to say about the argument.
I personally fell into this trap, not because I was trying to refute something, but because I was trying to back up one of my own assumptions and I found that Lord Voldemort was citing Obscure Reference X to back up the same assumption. The joke was on me when I actually tracked down Obscure Reference X in the 30-years-out-of-print proceedings of a symposium on Y. Obscure Reference X had nothing at all to say about my assumption! Needless to say, I no longer trust Lord Voldemort or anyone who publishes with him.
The problem is that saying "Professor Lord Voldemort is a liar and a fraud" puts an end to scientific discussion and forces people to assume nakedly partisan positions. Publicly at least, we have to assume good faith in our counterparts, even if we know in our hearts that they're self-inflated gasbags.
The tactics I and the original article were describing allow Lord Voldemort to clothe any assertion he likes in the robes of science. The root of this problem is a broken incentive system for publication. We're required to publish a lot to show productivity, and we're trained to put in lots of citations to back up our work. This creates an unmanageable avalanche of worthless papers and makes it easy to build a false trail of scammy citations.
Compare this to the situation 50 years ago, before publication inflation had set in. John Nash wrote a 30 page dissertation, and cited two works at the end of it. Simon's classic "Behavioral Model of Rational Choice" cited 5 works. The entire Cowles commission report on Activity Analysis devoted only 4.5 of its 418 pages to citations, and that included a detailed lit review in its introduction. Nothing makes it to press these days without five to ten times as many citations.
I heard a well-known neuroscientist suggest that studies should be required to be replicated by at least one other lab before being published/established. That may sound impractical for some fields, but especially in biology/neuroscience most projects are small enough to make this feasible. Is it worth the money? I would say absolutely, not only for the validation of the science, but for the shift in culture, to finally stop designing studies chasing minute (but statistically significant) effects just to make another publication.
I think it's also about time we had post-publication peer review, and about time scientists got off their high horses and started responding to it.
But why would that other lab care enough to replicate? I suspect this would only exacerbate the "rich getting richer" effect whereby well-known labs/professors would have an easier time getting replicated (and thus published).
But there is a cost problem with that, and behind the cost problem is a funding problem. "The servant is the one who takes the money" - Lawrence of Arabia.
The immune system of science is the peer review system and other mechanisms to reduce faulty science. This author points out published papers that made it past that immune system, and in some cases became part of it (e.g. as peer reviewers). What analogous structure in the immune system could repair problems with the system itself?
Yes, the article describes how certain "viruses" have learned how to beat the current immune system. That just means the immune system needs to evolve to address the new threats.
In nature, immune systems evolve based on natural selection. We need to keep trying new things and see what works. Maybe "peer review" needs to extend not only to individual papers, but to scientists and institutions themselves. Maybe their reputation needs to be evaluated over time so that a scientist who has been "infected" then has a standing presumption against their work until they can overwhelmingly demonstrate they are "healed."
This idea might be good or might be terrible, but I believe that we should be trying things like this to see if any of them stick and cause more good than harm.
Well, it is still a variation on "On Bullshit" (Harry Frankfurt's excellent article). If someone in science is more interested in a position than in whether the presented arguments individually hold up or not, that person is unscientifically invested and we should have some kind of mental allergic reaction to that.
The "Publish or Perish" has made Gish Gallops much harder to catch and almost impossible to punish.
There's also the entertaining "On the phenomenon of bullshit jobs" by David Graeber, although it has less to say specifically about science: http://strikemag.org/bullshit-jobs/
There is indeed a lot of bullshit in the practice of science. But that doesn't mean science is flawed, it means people are flawed. Remember Sturgeon's Law: 90% of everything is bullshit. [1]
It would be great if we could fund science purely for science's sake, and if scientists didn't have egos or careers or reputations or children, but despite the objections, I expect a certain amount of bullshit to continue unabated. In the meantime, the author's most important point, IMO, is "if you love science, you had better question it, and question it well, so it can live up to its potential." This is true, and always will be, regardless of how much bullshit is involved!
Bullshit is a byproduct of a highly incentivized research market. This is what respected journals are supposed to help curate. There are opportunities to create better forms of curation with less bullshit and I would bet people will pay for it.
We routinely say things that aren't quite true -- sometimes with the best intentions or out of necessity. The truth can be a very complicated thing.
What we are calling "bullshit" may go under more serious names (and serious discussions) if we look at specific cases - e.g. finance, medicine, game theory, biology, etc... My back-of-the-envlope definition is that of an "approximation to the truth".
That's the reason why theory should be backed by hard evidence or a product that can be experienced. Without this, a theory of any sort whether it turns out to be right or if it is pure bullshit should not be given undue credit.
In other professional disciplines, particularly ones that require some sort of certification, there are ethics tests and ethics committees. For example, if you're trying to get a CFA, half of the test is ethics. Likewise for becoming and being a lawyer.
Given the importance of academia and its impact on policy in general perhaps something similar should exist?
I'm not sure if ethics tests and committees curb bad behavior, but they could be a start, or at least improve awareness. Maybe there is even something like the above already in place for academia that I'm unaware of?
I take issue with Brandolini's maxim: "The amount of energy necessary to refute bullshit is an order of magnitude bigger than to produce it."
This assumes that we know, a priori, what is bullshit and what is not. Sometimes bullshitters know they are bullshitting, but most often they do not.
What I think is really going on here is that the scientific method is crap for this sort of thing, there is no such thing as "empirical truth" that exists in the real world, and subjective debate, reasoning, and so on, is hard and requires enormous effort on all parts.
Let's consider another hypothetical work of bullshit by one Maleficent. I, an oblivious third party, come across her published work. How am I to know that this work is bullshit?
The traditional response is that we use the scientific method, verifiability and empiricism, to test the bounds of a proposed model against observation. "You said Planet X should be at y, but it is actually at z, therefore bullshit." To quote Laurence Laurentz: 'Would that it were so simple.'
The problem here is that observation is fraught, and is usually based on its own assumptions. For example, let's say Maleficent is studying treatments for depression; to do so she must observe whether an individual is "depressed" or "not depressed". How the fuck should she do this? Frequently people use questionnaire measures like the Beck Depression Inventory (BDI). Is this a valid tool? I have been heavily depressed when I scored low on the BDI, so I would say not. But what IS a valid tool? Is there any objective criterion we can bring to bear, here? What is it? Does "depression" even exist as a thing?
This problem, that observations are themselves laden with assumptions and based on pre-existing models, is a mire that all science is forced to wade through. Before we make decisions about anything, we must have a lens with which to view the world - but that, itself, is a decision!
More broadly speaking this is a problem with deductive reasoning, and because empiricism claims to be based on deductive reasoning it falls into error as a result. Because it is impossible to begin with truth, any scientific observation must be riddled through with approximations. And usually, we are unaware of the approximations that are blinding us when we build flawed models on top of them.
This is the main reason we get bullshit: there is no good way to do science.
Brandolini's quote is brilliant. Going through evidence and conducting studies and doing research and looking through it with a critical eye is orders of magnitude more difficult than spouting some BS like "vaccines cause autism!!11!!111".
Instead of a Sisyphean refutation of every attack, wouldn't a better strategy be to research and prove the attacker's conflicts of interest (and thus discredit them)?
Frankly, in some cases that's likely to backfire. Point out that a global warming sceptic accepts huge amounts of money from an oil-company linked foundation and they'll take great pleasure in pointing out which green advocacy groups fund an awful lot of research on the other side of the debate. If the net effect is less trust in any climate science, they win.
And as everyone knows, the green advocacy groups are making a TON of money from reselling non-polluted air, and they are making a killing buying inland beaches!
There doesn't seem to be any equivalence there. I'm sure that they'd try to pretend there is, and indeed they already do make vague claims about a "climate change lobby" making millions, and it all being a front for "lefties", but that doesn't really alter the fact that it isn't there.
Equivalence doesn't need to exist for FUD to work.
(and to be fair, research whose funding appears to be predicated on the study being designed to support the funding foundation's official position on the issue is open to question regardless of whether the foundation has ulterior motives or not)
Indeed. IMHO, the only antidote to this is teaching people (and scientists) to be very, very sceptical whenever someone doesn't care whether one of their arguments is refuted.
In an ideal world where everyone was a rational, disinterested superhuman who devoted themselves to a scientific pursuit of truth, only ever focusing on arguments would be the perfect approach.
Unfortunately there do exist people who deliberately and knowingly bullshit other people in order to get what they want. Such people absolutely win from a policy of "attack the argument, not the person" because their goal is not to further understanding or even win arguments, it's to confuse people into acting a certain way ... often paralysing them into inaction by creating the appearance of an unending debate.
Thus refuting one bullshit argument simply results in two more popping up to replace it. Even if some people remember the first argument that was refuted, this doesn't help, because:
• Lots of other people won't remember the names of who was involved, or won't be aware of the previous arguments at all.
• Of the people who do remember, the fact that a debate was happening at all may be taken as evidence that the people involved must be "experts", and thus the fact that they lost the argument doesn't necessarily reduce their credibility.
• If someone was a good enough bullshitter to require a response in the first place, they will probably be good enough at it a second time to ensure that if they get no response, some people will start to assume they must be correct.
This can rapidly turn into complete defeat for the people who are actually making reasonable points, because they simply become exhausted and burn out when faced with an unending wall of plausible-sounding nonsense, which then eventually replaces reality with itself.
I've seen this problem play out in brutally sharp detail not so long ago. The people involved knew they were bullshitting, but didn't care because in their eyes it was all for the greater good.
The only solution to this is, in fact, to attack the credibility of the people doing it once they have repeatedly made absurd or invalid arguments, because otherwise it's much harder for people to learn to tune them out.
Although, in the case of a "Gish Gallop" you can very well argue against the method, which is not so much about presenting evidence as it is about keeping you from presenting your side of the issue.
If I present my argument in a manner that is intellectually dishonest and/or distorts your position, you can call me out on that without going ad hominem.
Arguing ad hominem is an easy way of getting out of the actual argument. If you watched any Republican primary debate lately, you can see it works really well in distracting your (weaker-debating) opponent and creating confusion among the viewers about the candidates' actual standpoints.
It serves the debater rather than the public, as it avoids the actual argument, with a side dish of confusion for the public.
It is, however, notable for often being used by those without actual arguments, and thus should always be countered by an argument for your cause combined with one against theirs. But if left uncountered, it might get a life of its own, and while this might be less true in science, it is killing in politics (see the electability 'arguments').
It can be a viable argument if bias is overly dominant in the research, but it should always be a supporting argument, never the sole one.
note: This is an opinion, like most 'arguments' are.
Once it has become clear that someone is a "Lord Voldemort", it seems to me there's little value in continued careful consideration of their positions. At that point, if it's possible, why not discredit them and allow the rest of the world to start ignoring them and spend time on more promising endeavours?
What conflict of interest does a Young Earth Creationist have? Not money. If you say religious belief, you'd be right, but it's not a terribly effective criticism because the majority of U.S. scientists are religious, too.
>What conflict of interest does a Young Earth Creationist have? Not money.
Why not money? Most of them built a career that is celebrated, while being lousy scientists, by playing the Creationist card and appealing to particular political/religious publics. And working for similarly minded organizations and "research" institutes.
>If you say religious belief, you'd be right, but it's not a terribly effective criticism because the majority of U.S. scientists are religious, too.
That would be relevant only if they let their religion influence their science. Which a computer scientist or a chemist doesn't have to, or at least not as much as an evolutionary biologist.
> Why not money? Most of them built a career that is celebrated, while being lousy scientists, by playing the Creationist card and appealing to particular political/religious publics. And working for similarly minded organizations and "research" institutes.
I am not saying this is the case for YEC, but usually when you hold and defend some fringe belief, you often have books, YouTube videos, and DVDs to sell, conferences to give, and in some cases museums [1]. (See conspiracy theorist Alex Jones' films [2].)
So yes, for some people there is some financial interest. But I would say this is not the main factor. If you have defended an idea for a long time, maybe you have sacrificed some part of your life, like family or friends, to your passion, and at some point it becomes so tied to your identity that it must be impossible to realize how wrong you were.
The latter is true for any kind of belief, religious, political or even scientific.
Young Earth Creationism is something that people with lots of free-floating anxiety attach that anxiety to. It's a belligerent, fear-based reactionary position, but what bothers them is the perceived hubris and arrogance of scientists.
Plus, some people simply can't suspend disbelief of self-organizing systems. I have to wonder if there was trauma in science class for many of them at one point.
When I mentioned, as a 5th-grade student myself, to another 5th-grade student that evolution doesn't rule out Creation, he burst into tears.
In general, highly charismatic religion seems somehow related to the general changes of the 1960s, coupled with some more sinister uses of mass media. I have to wonder if decades of television have made people addicted to willing suspension of disbelief of a particularly unexamined kind.
Conflict of interest does not at all imply bullshit of this nature -- it may imply a biased presentation, but a biased presentation is not the same as deliberately repeating arguments one knows are bad. Also, more often than not the source of the conflict of interest is simply the desire to protect one's reputation -- there are not many authors who do not suffer from that :)
The best way to deal with biased bullshit is open discussion where the biased parties make their arguments, and others evaluate them on quality and completeness. Competition over who is better at hiding biases and conflicts of interest, as opposed to who has the strongest argument, is not the best way of getting at the truth.
Now, knowingly and repeatedly making the same questionable arguments without even mentioning criticism, is quite another matter and should indeed count against one's reputation. Too bad we so often see all sides guilty of that one :(
I think the number one reason for science bullshit (and it's mentioned in the article) is not conflicts of interest in the sense of "money from evil big corps" but rather the "publish or die" system in the academic world.
Or you could say the scientist's interest is to keep his job, which requires a certain volume of publications, and that conflicts with publishing only results that are both sincere and worthwhile.
Of course, that would be the ideal way of combating this. If you're talking about financial stakes in certain findings however, these might be hard to prove (and stalking colleagues to prove they're playing dirty is not and shouldn't be in the spirit of the academic debate).
I think the procedure itself entails a conflict of interest: If you dispute certain findings by certain researchers (and you have a clear agenda), how can you be trusted to write an objective year-end summary of relevant findings in the field? I think these kinds of articles are the root of the problem. Of course, it would be far from easy to find an objective voice interested in writing these without having an 'ulterior opinion'. Still, I think editors should at least bar researchers from summarizing what they have a stake in (or summarizing a debate that they have taken part in during the last couple of months).
And who said that valid theories can't have scientists with conflicts of interest behind them? Especially if the interest isn't "is paid by big corp to lie", but something ideological.
I read this as a thinly-veiled attack on prominent climate-change activists. I thought the author ended a bit abruptly; was expecting a comparison of the tactics used by the fictitious "Voldemort" to the real-life actions of Bill McKibben, Michael Mann, etc.
Notice how a couple of Voldemorts have gone through the comments and down-voted anyone who questioned catastrophic global warming predictions. As Voldemorts are wont to do.