You Should Downvote Contrarian Anecdotes (thobbs.github.com)
219 points by tylerhobbs on June 18, 2012 | 152 comments



In a world where 47 of 53 landmark cancer research papers were found to not be replicable... where one scientist admitted (after the meta-researchers tried 50 times to replicate the result and failed) that they had run the experiment 6 times and got the result once and reported that because it made a better "story"... where the act of suppressing refutations of others' bogus research is "culturally commonplace"...

Something is rotten in the core of science. These days I give the research the same weight I give the anecdote... and that is no weight at all.

Science has been supplanted by money and politics... At least anecdotes admit they're anecdotes!

http://news.yahoo.com/cancer-science-many-discoveries-dont-h...


This is utter hogwash.

I'm as critical as anyone (probably more so, check my comment history) of academic biology because of my background in it. There are certainly things wrong with it. And due to the nature of biology, replicating results is really hard. It's a fact of life when you deal with systems that are not perfect, not identical and very opaque.

But to say that "Science has been supplanted by money and politics" is stretching the problems of biology into a mountain of conspiracy.

Furthermore, I'm reading your "source" and it reads loudly as "I'm an underfunded big-pharma researcher who has neither the time nor the resources to properly replicate studies". Did you know that most big pharma labs do not have access to the academic literature? They mostly read abstracts because there is little budget to actually purchase the required papers.

How much do you trust labs that are A) only trying to recreate data so they can make a drug out of it and B) not even reading the original data? While academic labs can have grad students toil away on hard experiments for literally years before they perfect them...how long do you think Pfizer or Merck or GlaxoSmithKline is going to let their paid researchers fiddle away on a project that is probably low priority anyway?

Because, of course, the high-priority projects are the reformulations of penis-enlarging drugs or cholesterol medication...you know, the ones that actually make money.

If you are looking for snake oil and shady research, I dare you to read any research paper that comes out of big pharma labs. We would routinely read them just for laughs because they are (often) downright terrible.


While I agree that "Science has been supplanted by money and politics" is going way overboard, I'm going to take the other side of the argument because I think you are way off base. I worked in both an academic lab and a "big pharma" lab (4 years and 6 years respectively).

To say "most big pharma labs" do not have access to the literature is laughable. We had better access than most academic institutions. If we needed a paper we didn't have access to, it took a few hours to get it. The company was more than willing to pay the $50 to get a copy of whatever paper, since we would often blow $50 running one experiment. Many of the smaller biotech might have poor access to journals, but even then, if you could justify the cost, you could get it.

Second of all, yes, I trust labs that are trying to recreate data to make a drug out of it. You have to remember that these attempts to recreate data were a very important data point on a potential multi-million (billion?) dollar investment in a new target; these are NOT low-priority projects. They WANT the data to be true. They have zero incentive for the data to not be reproducible.

Having worked in both academic and commercial labs, I would say the incentive to "tweak" results is much greater in academic labs for the following reasons:

1) Often results are never double-checked in an academic lab unless the work is used in a later project. Contrast this with a pharma lab, where if the data is positive, you'll have to prove it again and again.

2) Academics (both profs and students) live and die by papers, not so in industry (in fact, in the company I worked in, they preferred if you didn't publish).

3) Work in academia is often performed by relatively inexperienced undergrad and grad students, while big pharma scientists often have years of experience.


Fair points. My thoughts:

>To say "most big pharma labs" do not have access to the literature is laughable. We had better access than most academic institutions. If we needed a paper we didn't have access to, it took a few hours to get it. The company was more than willing to pay the $50 to get a copy of whatever paper, since we would often blow $50 running one experiment.

I'll admit that my knowledge of big pharma journal access is colored by those in big pharma that I've talked to (anecdotal evidence, oh the irony). Perhaps they just had poor departments or bad access, I don't know.

However, every university that I've been at has instant access to journals. I never had to wait hours for a paper...we had free rein of just about every journal. Even at my relatively small and poor undergraduate institution.

>1) Often results are never double-checked in an academic lab unless the work is used in a later project.

99% of projects in academia are building off some previous grad student or post-doc's work. Sure, there are projects which are nearly impossible to replicate (I should know, I spent 1.5 years of my life trying to replicate a previous grad's project). But it's equally laughable to say that data is never double-checked - a professor's career is a long string of projects building on previous projects.

>2) Academics (both profs and students) live and die by papers, not so in [industry]

I'll concede that there is often pressure to publish positive results in an academic setting. However, as you rightly mentioned, academics live and die by their papers. It just takes one lab refuting your paper to have a burned career. While I agree that many academics prefer to just ignore papers they can't recreate, there is still a lot riding on publishing replicable data.

>3) Work in academia is often performed by relatively inexperienced undergrad and grad students, while big pharma scientists often have years of experience.

This is a pretty baseless statement. I know plenty of techs working at big pharma that just graduated with an undergrad degree and have zero wet-bench experience (just like I know of plenty who did the same in academia). Conversely, I can't even count the number of post-docs and senior scientists that work at various universities, with literally centuries of experience between them.


To address your points:

1. The big pharma guys have instant access to journals. When I say we had to wait a couple hours, it was because I was looking for a paper from "The Russian Journal of Chemistry" from 1912. We had a vendor who could track down anything. For any of the big journals, we had the same access as academia.

2. We agree on this point. If a lab experiment is used in a later project, it HAS to work or else the future work can't occur. However, lots of projects have "arms", where the experiment is an interesting observation that is never pursued. These are often "one-off" experiments that are published, but never repeated in the same lab.

3. I am by no means painting academics with a broad brush here. I think most academic research is done on the up-and-up and the results are valid, even if hard to replicate (this is research!). I think one issue is the one pointed out in the parent comment. You run 5 reactions, two fail and the three that work produce yields of 50%, 70% and 80%. What gets published? 80%. The devil is in the details. In big pharma, you are trying to make a drug and the science better work or else you can't bring it to market. Much higher standards for reproducibility.

4. I guess my thought here is based on the fact that big pharma typically hires from academic labs. All those post-docs and senior scientists with years of experience? That's who big pharma hires. So overall, I would imagine that the level of experience in big pharma is greater than the average you would see in academia (which makes sense since academia is training for working in places like big pharma).

Once again, I always shy away from descriptions that put all "big pharma" or "academic" researchers into one pile. There are brilliant people on both sides and crappy people on both sides.


Ok, I'm with you on all your points. I suppose I over-reacted to the grandparent post - it felt like useless sensationalism and conspiracy-mongering.

Thanks for the useful counter-points...I'm now armed with some more anecdotes (hah!) on the other end of the "big pharma" spectrum.

=)


The incentives are great on the first test. But there is a lot of money to be made from tweaking the final study from inconclusive to slightly positive.


Did you read the original article in Nature before devising your ad hominem?

http://www.nature.com/nature/journal/v483/n7391/full/483531a...

They certainly had access to the original data. To quote:

> To address these concerns, when findings could not be reproduced, an attempt was made to contact the original authors, discuss the discrepant findings, exchange reagents and repeat experiments under the authors' direction, occasionally even in the laboratory of the original investigator.

There are quite a few other studies which raise similar statistical questions about medical research; e.g.:

http://www.ncbi.nlm.nih.gov/pubmed/11600885 http://www.nature.com/neuro/journal/v14/n9/full/nn.2886.html http://www.sciencedirect.com/science/article/pii/S0895435605... http://dx.doi.org/10.1097/EDE.0b013e31818131e7


Well the one nice thing about scientific research is that it's open (cue conspiracy theories), and that you can usually read the paper/draft. No one is asking you to "just believe" research; that would be stupid, and it's not how the scientific process works.

The fact that you are dismissing all research outright is ridiculous, and willfully ignorant. Your sources are shabby, and you talk of "science" and "scientists" as some great collective, which is frankly impossible to do. The idea that "Something is rotten in the core of science" is a wild extrapolation; there are problems with certain fields' practices (and may always be), medicine is probably a good example.

However, saying that you give research no weight is stupid, and you discredit yourself further when you say that "Science has been supplanted by money and politics"; you sound like a conspiracy theorist, and I cannot believe you actually know many academics who will back you up on that claim. For whatever the system's flaws, there is _no_ reason to say that all research cannot be trusted.


> No one is asking you to "just believe" research;

Did you read the study the parent post is talking about? A well-funded laboratory that was not willing to "just believe" research (as everyone else apparently does) tried to replicate these results. If the science was good, all it should have taken is time and money (both of which they had enough of). And yet, 47 out of 53 celebrated results published in peer-reviewed papers of the highest caliber could not be replicated. Let that sink in for a minute before you reply.

> there is _no_ reason to say that all research cannot be trusted.

Ok. Your reason to state that research can be trusted is that it is eventually replicated (thus confirmed), or thrown out (thus shown false), is that right? (You didn't state that as your reason, so perhaps you have other ideas -- but that's a common one, so I'll reply to it).

Assuming that's the case -- do you have any idea what percentage of results are replicated? And how much time after official publication?

Because if it takes e.g. 30 years until a bad publication is discredited, and (as the data point given by the parent shows) there are areas in which 90% of the data apparently can be discredited when you try to replicate it -- then, there actually might be reason to distrust research in general, because at any given point in time, more than 90% of non-discredited published results are wrong.

See also http://saveyourself.ca/articles/ioannidis.php (and the paper it references). This situation is not science fiction. A 90% failure-to-replicate rate is probably limited to very few subjects, but 50% overall in medicine and biology is totally believable.

Which is not to say science (the abstract idea / discipline / method) is wrong - it's right. It's just that the thing we humans practice and often call "science" is very, very far from the ideal of science. Ignore that at your own peril.


This is a terrible perspective.

I would argue that if even those research papers could not be replicated, an anecdote is all but worthless.


An anecdote, if true, is a contrary example which can be useful. It can even disprove an absolute claim - it only takes one counterexample. In mathematics this is done all the time.

Statistics are themselves misleading - there are whole books on the subject (oh no! an anecdote! better close your mind now). They are highly contextual, but the popular press excels at stripping that context and proclaiming absurd extremes. Anecdotes are excellent context, putting statistics into perspective.


> oh no! an anecdote! better close your mind now

Another idiotic strawman argument.


Forgot whimsy doesn't play on HN - too many literal thinkers. Sorry, I'll refrain.


Jokes such as those are really bad for a good discussion. They serve to prove to people on your "side" (an idiotic notion, but I digress), how stupid the people on the other side are. I would downvote if I could.


Sorry again; I commented while in an excited emotional state. Though it embarrasses me how many times that gives better karma than a reasoned paragraph putting forth a cogent argument.


If you stand for nothing, you'll fall for everything...or something like that. Science and especially statistics have never been (and sadly never will be) perfect or free of human bias, but not believing anything outright leaves you open to manipulators who make you "feel" what's right, instead of having you think about what's right. We might think we're super rational, but it just ain't so. Unrelated example: female students told before a standardized math test that it's genetic that women are worse at math did significantly worse than males and control females. They were (probably) all rational people, yet this seemingly unimportant event changed their rational performance.

If science and anecdotes are equally bull to you, how do you make up your mind about things? Magic?


> Something is rotten in the core of science.

It's not science that is the problem. It's that biology considers a 95% confidence level sufficient. Considering how many studies are done each year, this virtually guarantees incorrect results.

The reason they do that is that it's impossible to get better results, they just can not do enough trials. So they are stuck.
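A back-of-the-envelope sketch of why a 5% false-positive cutoff plus a flood of studies guarantees some wrong results. Only the alpha = 0.05 cutoff comes from the comment above; the base rate of true hypotheses and the statistical power are made-up assumptions:

    # Rough sketch: what fraction of "significant" findings are false under made-up rates?
    hypotheses = 1000   # hypotheses tested in a year (assumed)
    true_frac = 0.10    # fraction of hypotheses that are actually true (assumed)
    power = 0.80        # chance a real effect is detected (assumed)
    alpha = 0.05        # the 95% confidence cutoff
    true_hits = hypotheses * true_frac * power          # real effects detected: 80
    false_hits = hypotheses * (1 - true_frac) * alpha   # noise clearing p < 0.05: 45
    print(false_hits / (true_hits + false_hits))        # ~0.36 of "findings" are wrong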


95% confidence would be fine (or very nearly fine) if a requirement for funding/publication would be that all data, including negative results, be published as well.


That would only be true if

a) all data, everywhere in the world, including negative results, was published regardless of funding/publication.

b) someone actually looked at that data, normalized it, and used it to assess the real significance of every result, in a sane manner (e.g. by using a bayesian inference with some reasonably behaving universal prior).

Neither (a) nor (b) will ever happen, and both are essential.

(note: publication of all data is not a sufficient requirement: if 20 independent labs each run the same experiment on a non-existent effect, one of them is expected to clear the 95% confidence threshold by chance, and when they all publish their data, the record still contains that one experiment that seems legit. This _will_ and _already does_ happen by chance)
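For what it's worth, the quick arithmetic on that 20-lab scenario, assuming each lab independently tests a non-existent effect at the 95% confidence level:

    labs = 20
    alpha = 0.05                     # false-positive rate per lab
    print(labs * alpha)              # 1.0 -- about one lab is expected to "confirm" the effect
    print(1 - (1 - alpha) ** labs)   # ~0.64 -- chance that at least one lab does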


The problem with much of modern medicine is that much of it is based on flawed and biased statistical studies. Whether this is done because medical personnel don't have training in statistics, or because such studies generate funding, I don't know, but something is definitely rotten.

Let's take anything involving nutrition. Some challenges are: (1) people lie, (2) such studies can't be double-blind so placebo kicks in, (3) the statistical significance of short-term studies is zero, (4) you can't control all the variables, unless you lock those people in a cage and (5) most conclusions of such studies have the potential to confuse the cause and the effect.

But not all of science is like that. Just medicine.


When you cannot control all the variables, it's important to have a large enough sample that the randomness in each direction for the different variables essentially cancels itself out.

Also what does "the statistical significance of short-term studies is zero" mean? I don't think it means what you think it means.

I would argue that short-term studies (for nutrition anyway) have little clinical significance, despite their statistical significance. I'm in medicine, and I read papers all the time detecting a statistical difference between control and experimental groups, but the difference is so tiny that it's meaningless. This is the balance you have to strike with large sample sizes. With a large enough sample, small differences are likely to be statistically significant but the key is determining if the difference is worthwhile.

I blame bad science reporting for a lot of the anger you are feeling. Reporters don't seem to understand what they are reporting, and often the scientists themselves are (accidentally or on purpose) making it worse.


> When you cannot control all the variables, it's important to have a large enough sample that the randomness in each direction for the different variables essentially cancels itself out.

That's nice in theory, but does not happen for most published research.

> I'm in medicine, and I read papers all the time detecting a statistical difference between control and experimental groups, but the difference is so tiny that it's meaningless.

I'm trained in statistics. My ex was an MD. I used to read the NEJM for fun for a couple of years. Most of the results published are barely statistically significant for the small group they tested ("our sample included 40 caucasian females between the ages of 37 and 48, and we have a p value of 0.03" with no mention of the context which might make that p value meaningless - but let's assume they got that part right). And then, a couple of years later, some other study takes that result as absolute truth, but assumes it applies to any woman aged >30. And a couple of years later, it is assumed to be universal and speculated to apply to males as well.

Is your experience different?

> I blame bad science reporting for a lot of the anger you are feeling.

I blame tenure publishing requirements. While bad reporting certainly deserves its share of contempt, people these days do everything in order to meet the publishing requirements for tenure. Most stay away from outright fabrication, but otherwise every manipulation of the data that would make it fit for a higher caliber publication is being done as long as it is not outright fraudulent -- including dropping the background context so nicely exemplified by this xkcd comic http://xkcd.com/882/ . It often is the researchers doing the bad reporting with no outside help.


Speaking only to the middle part, about modest research creeping up in significance: in my experience I see that sometimes, but not regularly. Given, as you say, the tenure publishing requirements, I feel that I often see a flood of similar studies after a "proof of concept" that actually helps to flesh out the issue.


Not even a large sample will help against systematic errors.


Good point; upvoted. However I was just trying to address the idea that there are ways of minimizing problems in study design. No study is ever perfect, but many of them are sufficient.


Surely you could find some scientists or scientific institutions you can trust?

Or maybe you can trust those experiments that do get replicated successfully?


Given that the context is "downvote anything that Reddit or HN disagree with," it seems that the point is "don't follow flocks while voting; make your own decisions or don't vote."


Stop using anecdotes to support extremely broad statements about all of science.


So from your example, ~10% contained really good information, and this was found using a meta study, conducted by scientists. And you are using that as an example of how you should give scientific research no weight at all? Why do you give weight to the scientific meta study if you think it means that you should not give weight to scientific studies in general? This position is pretty ridiculous on the face of it.


No. Contrarian anecdotes are good. They may turn out to be without merit, but then again so may the article itself. Having a real discussion is a good thing.

Also, I would like to propose a logarithmic scale for weighting such things. Say, if the article in question found something extraordinarily significant with 100 out of 100 samples resulting in A, then it's still not rational to weigh a contrarian viewpoint resulting in B with 1/101 - it should maybe be closer to 1/3 or something.

Consensus culture and worship of authority are not desirable in my opinion. Arguments should be weighed on their merits and it's appropriate to explore other viewpoints or explanations even if they turn out to be dead ends most of the time.


Let's take an example, since they seem to work best: vaccines.

Vaccines protect you from the risk of contracting particular diseases, some of which are crippling, lethal, or incurable. Plus, most are extremely effective: once you take your shots, you are effectively immune. That's good.

There is a downside however. Sometimes, vaccines have side effects. Most side effects are quite benign, but if you're unlucky enough, they can be crippling, lethal, or incurable. That's bad.

From a medical point of view, vaccines are a net good (let's leave aside logistic considerations, or the effort required to go to the doctor). When you look at the stats, you stand a much better chance at life and health if you take the shot. Even for relatively minor illnesses like the flu.

Now, let's say someone posts a heartbreaking comment about how her 9-year-old daughter died of a vaccine shot, with all the gory details about the suffering, how she couldn't participate in her school's festival, the size of the coffin… I'm quite sure there are stories of the kind. Given the sheer number of readers here, maybe one of you will more or less directly relate to that. My apologies to those who do.

Nevertheless, what makes a good story doesn't necessarily make good evidence. When you know of reliable statistics, and you read a contrarian anecdote, you should shift your belief in the direction of the anecdote by a precise amount, which is almost always tiny. What your brain will actually do behind your back, however, is shift your belief by a significant amount, often crossing the "reasonable doubt" line. That's not rational, but that's what will happen. Nameless statistics feel abstract, remote. An anecdote on the other hand feels concrete, real, close. Worse, you can spend far more time reading about the salient anecdotes than learning about the end results of reliable, but boring, scientific studies.
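To make "a precise amount, which is almost always tiny" concrete, here is a rough Bayesian sketch. Every number is made up for illustration: a prior of 0.99 from the boring statistics, a serious-reaction rate of one in a million if vaccines really are a net good, one in ten thousand if they are not, and a million people behind the pool of potential commenters:

    prior = 0.99                        # belief that vaccines are a net good (assumed, from the statistics)
    rate_good, rate_bad = 1e-6, 1e-4    # serious-reaction rates under each hypothesis (assumed)
    n = 1_000_000                       # people the anecdote could have come from (assumed)
    # Evidence: at least one heartbreaking adverse-event story exists somewhere in that pool.
    p_story_good = 1 - (1 - rate_good) ** n    # ~0.63
    p_story_bad = 1 - (1 - rate_bad) ** n      # ~1.00
    odds = (prior / (1 - prior)) * (p_story_good / p_story_bad)
    print(odds / (1 + odds))            # ~0.984: the belief barely moves from 0.99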

Another example: you don't win the lottery. Period. You don't know of any close family or friend that ever did. But maybe one of you readers do. Maybe that one could comment and say "Hey, but my cousin did win the lottery!". Would that prove me wrong? Not at all. It's just that when the sample size is huge enough, even the tiniest chance can actualize.


Are you actually arguing against giving anecdotes weight by using... anecdotes?


Yes, I'm evil.

Here is the "good" version: Bayesianism is correct. Those who don't believe me may want to read E. T. Jaynes[1] or Eliezer Yudkowsky[2] (long, and may feel abstract and dull). But countless studies about biases showed that we humans are poor at correctly assessing evidence at our disposal. Some of those studies showed that some failure modes come from anecdotes. Downvoting seems harsh, but it's the best we currently have to combat those failure modes.

Now we don't want to overdo it. I suggest we put a comment citing which reliable statistics contradict the downvoted anecdote. Maybe that'll help avoid groupthink. We may also want to allow people to just say they have anecdotal evidence to the contrary of whatever.

[1]: http://bayes.wustl.edu/etj/prob/book.pdf

[2]: http://yudkowsky.net/rational/technical


Those are examples, not anecdotes.


Yes, but I did make use of strong emotional story-like elements to make my point.


Those are hypotheticals or thought experiments, not anecdotes... they would be anecdotes if they were claimed to have actually happened.

I'm not arguing against anecdotes, but there is an important distinction there


Those aren't anecdotes, they are hypotheticals. They are useful, but statistically they occur even more infrequently than anecdotes.


I didn't see any anecdotes in the post.


So you're suggesting that because some people are bad at weighing information, anything based on personal experience should be downvoted? Successful entrepreneurs should no longer give advice? Programmers shouldn't weigh in on programming issues unless they're citing third-party research? Marketers can't suggest tips for how to promote a startup because it's possible their methods aren't universally successful?


It's not just some people. It's everyone. Including you, me, and those who know about the various failure modes.

And please don't straw man me. Personal experience is mostly great. Successful entrepreneurs may have better decision making processes, not just more luck. Programming issues should be weighed in on, since there are so few reliable studies here, and the field is so young. Etc. I was just talking about the cases when the evidence that contradicts the anecdote is solid and definite.


Disagree. Successful entrepreneurs are lottery winners with attitude, by and large. The programming field is no longer young, we should give up that old excuse. And no, you are not being straw-manned, the argument is. Good to not take things personally here.

Studies are necessarily narrow and context-laden, even 'solid and definite' ones. The suggestion to automatically downvote anecdotes is too broad, and should be refuted.


> Successful entrepreneurs are lottery winners with attitude, by and large.

Possibly. Actually I don't know. Anyone knowledgeable should disregard my opinion.

> The programming field is no longer young, we should give up that old excuse.

Right. However, I don't feel like we're anywhere near clearing the chaos around the psychology of programming. I still don't know for instance why so many people cannot understand functional programming, which I personally find simpler than procedural programming in most cases I deal with. Or why technical debt doesn't seem to be taken seriously. Programming is several decades old, but it still feels young to me.

> And no you are not being straw-manned, the argument is. Good to not take things personal here.

Hmm, yes, I was too aggressive here. Sorry.

> Studies are necessarily narrow and context-laden, even 'solid and definite' ones.

Ah, I didn't think of this danger. You're right, we at the very least need safeguards. Like, tying downvotes to reasons why they happen, so we (high karma users, moderators?) may be able to nullify those which turn out to be bogus. But that's complicated.

Or, maybe we could just not downvote, but point out in a reply that this is contrarian anecdotal evidence?


Yes; it would be wonderful if comments could be categorized. Something like meta tags, created by the community or automatically even. I would like that; it would be like an extra conscience telling me "This guy was making an observation, not an argument; target your response correctly".


A vaccine is a virus in latent form, forcing your B-cells to produce antibodies for it. So yeah, personally I never take a vaccine that hasn't been in circulation for some time.

Also, considering that medicine is at the stage of alchemy and that doctors simply have no idea what long-term effects these vaccines have on our immune system, some questions do have to be asked.

Like, isn't it possible that with the prevalence of vaccines, our own capacity for generating antibodies gets affected?

And remember here that an exaggerated response of the immune system may be even worse than a lazier response. Such an exaggerated response may even kill you (e.g. influenza). So either way, the long-term effects of over-reliance on vaccines may be quite bad.


>Also, considering that medicine is at the stage of alchemy and that doctors simply have no idea what long-term effects these vaccines have on our immune system, some questions do have to be asked.

What the hell are you talking about? There is probably no single more life-saving intervention in medicine than vaccines. It is true that a small number of people have a bad reaction to them, but more people have a bad reaction to tetanus.

Those who do not vaccinate are risking re-emergence of preventable epidemics: http://www.sciencebasedmedicine.org/index.php/whooping-cough...


If some people have bad, even fatal, reactions to vaccines, shouldn't the physician's first duty (do no harm) mean that they should separate the people who would suffer side effects from those who wouldn't?


No, not if 1 in 10,000 people suffers that reaction, while there are equally negative consequences for 2 or more of those 10,000 people if they don't get the vaccine, and there is no known way to detect who will have the bad reaction.

And the ratio is much worse for actual vaccinations. You don't want to see what not vaccinating kids against polio results in...


I have seen the pictures, and the interviews.

But those are hardly data, and the kids who got the live version (due to a fuck up) are hardly better off.


In the US, we do a screening for risk factors for reactions, and review information needed to give informed consent.


> A vaccine is a virus in latent form

Not exactly, it's not a virus in latent form, it's either a killed virus, a piece of a virus, or a different virus that is weak, but provokes the same reaction as the more important one.

(Do you know what latent means? It means that it shows up later, which vaccines do not do.)

> So yeah, personally I never take a vaccine that hasn't been in circulation for some time.

Yah, me too, but let's not overreact with nonsense.

> Like, isn't it possible that with the prevalence of vaccines, our own capacity for generating antibodies gets affected?

No, it's not possible. That's completely ridiculous. Do you know anything about vaccines at all? Seriously, that really makes no sense whatsoever. A vaccine does not do anything at all to our capacity to generate antibodies. All it does is take the exact same virus you would get if you got sick, and expose you to it in advance, that's all. It gives you a head start in making antibodies, but does not affect the generation of them in any way.

> And remember here that an exaggerated response of the immune system may be even worse than a lazier response. Such an exaggerated response may even kill you (e.g. Influenza).

And a vaccine creates a muted response, quite the opposite. Compared to a simple cold a vaccine consists of a minuscule number of virus particles. The entire trouble with making a vaccine is trying to get enough of a response, most of the time the body ignores it.

> So either way, the long-term effects of over-reliance of vaccines may be quite bad.

And how do you figure that? I'm not following your logic at all. Unless your logic is that the vaccine somehow changes the body's response, which it doesn't. So hopefully now that I've cleared that up you will no longer claim this.


On "latent" ... English is not my primary language and this was just a bad translation.

     Do you know anything about vaccines at all?
I guess not, but I can't help worrying about the rise of autoimmune disorders, and I haven't yet heard a plausible explanation for the phenomenon.

     Unless your logic is that the vaccine somehow changes
     the body's response, which it doesn't.
And how in the world would you know that?


> but I can't help to not get worried about the rise of autoimmune disorders and I haven't heard yet a plausible explanation for the phenomenon.

The most plausible explanation is the http://en.wikipedia.org/wiki/Hygiene_hypothesis see also http://blogs.scientificamerican.com/disease-prone/2012/02/15...

> And how in the world would you know that?

How could it? If a vaccine could cause such a change so could any illness. A vaccine is just a piece of virus put where your body can notice it. Everything after that is entirely from the body.

For example rabies: Lethal right? But the body can actually clear the rabies virus with no trouble - almost. The trouble is that by the time the body gets rids of the virus it's too late.

So what do you do? You give the body the rabies virus ahead of time, and you do it in a way that prevents the person from actually getting sick. Then next time the body encounters rabies it's ready.

All vaccines work exactly this way: You let the person encounter the illness ahead of time. You make no change whatsoever in the person - all you are doing is making them slightly sick, but in a way that doesn't kill them.

Whatever change the vaccine causes, the illness also does - except the illness also causes damage as the virus replicates.


I'd say that contrarian anecdotes are good if -- for instance -- the discussion spawned by the anecdote leads to a better understanding of the subject at hand. However, because anecdotes are stories we tend to remember them vividly and they affect our judgment disproportionally. Human brains are very flawed, and it may very well be the case that the negative effect of contrarian anecdotes vastly outweighs the benefit we get from the discussion it spawns. I don't know the answer. But I do know that we can't state with certainty that "having a real discussion is a good thing".

As for how we should weigh new evidence, this is essentially a solved problem: use Bayes' rule. Suppose that 100 out of 100 studies indicate that smoking is a leading cause of cancer and a contrarian viewpoint ("My grandfather smoked his whole life and lived to 123!") indicates otherwise. Then that anecdotal viewpoint should get approximately 0 weight. Zero. Nil. Zilch. Nada.
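The arithmetic behind "approximately 0 weight", with made-up numbers (tens of millions of lifelong smokers, and a deliberately pessimistic chance for any one of them to reach extreme old age):

    smokers = 50_000_000       # lifelong heavy smokers (assumed)
    p_very_old = 1e-4          # chance one of them reaches extreme old age anyway (assumed, pessimistic)
    print(smokers * p_very_old)    # 5000.0 -- such grandfathers are expected to exist
                                   # whether or not smoking causes cancer, so meeting
                                   # one tells you essentially nothing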

We're all experts in a few subjects at best. In those subjects we can easily explore different viewpoints, balance different arguments and keep track of the different schools of thought. We can even confidently diverge from expert consensus if needed. But in most subjects we're enthusiastic laymen at best. I don't think debates and exploration of different viewpoints then lead to much greater understanding. Just look at any forum on the internet (including this one). Debates aplenty and the few knowledgeable people get drowned out in a sea of contrarian musings.

Expert consensus is just the aggregate opinion of those who have the best understanding. So when a layman disagrees with the experts he's almost certainly wrong. What I see is the opposite of consensus culture. I see a willingness to disagree with the experts before understanding the subject material in depth.


> ... a contrarian viewpoint resulting in B with 1/101

Well, see, that's part of the problem. The original sample of 100 was, hopefully, selected at random. Whereas the anecdote was selected by the person telling it because he or she thought it was apropos. With a large enough population of potential commenters, the chances of someone doing so get really high.


The simplification in the title is dangerous without a 'when' attached. Mindlessly downvoting contrarian anecdotes will lead to a hive-mind. Beware.

Opinions overwhelm all other forms of material in a discussion. Anecdotes are actually one up from opinions as they are concrete. You should weight them like so:

Statistical Evidence + Logic > Statistical Evidence > "Common Knowledge/Wisdom" > Anecdote + Logic > Anecdote > Opinion + Logic > Opinion


In my experience, some of the best advice I've ever received has been from anecdotes that run counter to the text of an article.

...I kid. It's actually an interesting idea. I don't know about scrapping them entirely, but I think a lot of sites (say, Reddit) could benefit from moving the anecdotes elsewhere. Partially because of this, and partially because anecdotes in any form tend to derail the discussion pretty darn quick.


This is what makes r/AskScience so good. "Free of anecdotes" is a rule for posts, and it is vigorously enforced.


The idea that there was an impending credit crisis was a "contrarian anecdote" in 2008.

The idea that the US Government conducted illegal eavesdropping/wiretapping operations against US Citizens was a "contrarian anecdote" in 2003.

This blog post condescendingly claims that HN readers are not sophisticated enough to balance out the sources that are input to their rational decision making process.

In reality, contrarian opinions sometimes turn out to be correct, and mainstream opinions sometimes turn out to be wrong. Often, the benefit of a contrarian opinion is that it causes people to ask more questions, which is rarely a bad thing.

People are sheeplike enough without having to be encouraged to follow the herd!

I suggest everyone find a few contrarian theories and imagine what it would be like if you rearranged your life as if you expected them to be totally true. Most people are unwilling to go that far, and yes, in that way contrarian stories can weaken rational processes.


The idea that there was an impending credit crisis was a "contrarian anecdote" in 2008.

As the saying goes: they laughed at Galileo. They laughed at Einstein.

And they laughed at Bozo the Clown.


Contrarian opinion is often correct, but contrarian anecdote in the face of statistical evidence seldom is. The article was about the latter, not the former.


You shouldn't downvote contrarian anecdotes. You should just take them at face value for what they are.

His notion that in the field of medicine you should disregard contrarian anecdotes because there's statistical evidence is horrible advice. If you actually looked at said statistical evidence, you'd realize it is very rarely strong, and often only relevant to 70% or so of the population (which is great for 2 out of 3, but useless for the third one).

"Best practices" often aren't, and common sense is not at all common in medicine. A significant number of published results are plain wrong (see http://saveyourself.ca/articles/ioannidis.php and the paper it references). A lot of medical advice is wrong, harmful or useless; The archives of seth roberts' blog are an enlightening read.


>You should just take them at face value for what they are.

The problem is that this is impossible for most people.


That's true, and they also get to vote (both on HN, and in real world elections). Imagine the consequences! Or just observe them in the real world....


If the comments below are too laborious for you to read, here is the TL;DR of them:

(1) Use your personal discretion. Some anecdotes are funny. Others can be perfectly cogent rebuttals, especially when people make overbroad statements.[a]

(2) The author is right in some respects: you need to be aware of the ways that stupid stories can bias you. Even being aware of this fact is sometimes not sufficient, so use downvoting to try to protect others.

(3) Lots of people believe in anecdotal responses to anecdotal original-claims. [b]

(4) A lot of people tried to be funny by offering anecdotes to be downvoted. Unfortunately, people haven't consistently taken the advice above, so they are not all in one place (at the bottom). That is a pity -- this would have been a fun and interesting use of downvotes.

[a] Actually, the discussion is full of overbroad statements of this form like "no universal truth" and someone presenting "nothing should be above questioning" as above question.

[b] I would like to formally respond that this is generally stupid -- you don't clean up a house by flinging crap at the crap. Your mileage may vary.


I had to vote this up because of the meta-nature of the article. If anybody makes a comment disagreeing with it while sharing an anecdote they only play into the author's theme! It's like accusing somebody of being in denial: there's nothing they can say in return that doesn't somehow support the idea that they are in denial. Love it.

Having said that, this is another really bad article in what seems like an endless series of bad articles on HN. Here are a few of the more obvious flaws:

- There is no universal truth as the author seems to imply. The mere fact that some publication or source you may like has performed some sort of statistical study doesn't mean a lot on its own. Yes, 100% of the people who eat bananas are dead within 120 years of their consumption. No, that does not mean bananas are bad. The study or scientific reporting is simply the beginning of a much longer conversation society has over many decades that leads us to higher-fidelity models.

- The purpose of a social site is to behave socially. While places like HN have (or used to have) a lot of different guidelines for the types of behaviors that are encouraged or not, being social means sharing stories, anecdotes. We are not robots.

- The idea that people are unable to sort out personal anecdotes from other forms of information. The follow-up idea that since they are not able to do this, we should prevent ourselves from sharing such stories. This is bad, bad, bad, bad, bad, bad .... bad(n). We are humans. We share stories. Anybody who says "people are so stupid" can justify just about damn near anything as long as they keep emphasizing the stupidity and danger some people's actions represent.

Every now and then, gasp(!), published research is either wrong or doesn't show anything near what the reporter claims. Anecdotes don't help with this, of course, but they serve to remind us that even well-known scientists working at the highest quality standards available are still just sharing with us a very specialized form of anecdote. We did these things in this way and this is what we observed. Here is how you also can observe this. The really "good" part of the story they are sharing is talking about hidden assumptions, population variance, reproducibility, and so forth. Anecdotes don't do this, but they help us brainstorm ways in which we can improve the discussion, take the next experiment to an even better place.

I'm very uncomfortable with the line of reasoning that goes somewhat like this: people are broken in some way, therefore we must somehow control what they read, say, or think for their own good. To me the beauty of western civilization is that really broken people can do these amazing and awesome things. The fact that we're deeply flawed is the magic. Science and human advancement work because of our flaws, not in spite of them. This is a very important thing to understand! Setting up some ideal of perfection, no matter how well-intended, and then mucking around with the way societal interaction works in some effort to improve on things is heading down a very dark path that has a very unhappy ending. This attitude seems rife in the technology community, however, perhaps because we are such analytical people.

I don't want my fellow man to be irrational and distrustful of science and knowledge. But I'll take that any day over silencing contrarian articles and dissent. We've done the math on this: wrong people who share emotional stories and persuade crowds about all sorts of illogical things are a price that a dynamic community pays for progress.


I generally disagree with the OP and agree with your dissent, but do you not think that the OP could be more charitably read, not as "suppress certain kinds of replies", but as "don't exploit known cognitive biases, and punish people who seek to exploit them"? I'm quite comfortable downvoting naked appeals to emotion ("please won't someone think of the children?"), because I don't think they add anything to the conversation, and are actively harmful to it. I think the OP is making a similar proposal for anecdotal contradiction. Both his examples are chosen well in this light: non-medical replies to a study that found a weak correlation. To me, the contradictions look more like a strawman: the argument they attack seems to be that coffee consumption will, in 100% of cases, cure Alzheimer's, a claim the study never made. I'm not sure we should privilege (or penalize) anecdotal contradiction on anything other than the merits of its argumentation, but in many cases I find that to be extremely weak.


But how do you distinguish between people who simply share their stories because they think it's moderately relevant to the thread, on the one hand, and people who "exploit known cognitive biases", on the other? Exploitation seems to imply deliberate manipulation, which most people who share medical anecdotes don't intend at all. Yet, humans have so many cognitive biases that even the most mundane thing can trigger one or another bias.


"Trigger" is a better word than "exploit", in some ways, but it makes it sound like it's an accident: "sorry, I just fell over your cognitive bias, I had no idea it was there!". In fact I think people have a weak responsibility in good discourse to be aware of known cognitive biases and not to appeal to them. I'm comfortable downvoting people who appeal to emotion even if they're not setting out with the intent of doing it, simply because I think it leads to a worse discussion.

You distinguish between people who are sharing anecdotes and people who are making strawmen arguments in the usual way that you'd distinguish them - by tone, follow-ups, etc. But if your sole contribution to a thread is "me too" (or "actually, not me too"), maybe that contribution isn't particularly valuable in itself.


I'm comfortable downvoting people who appeal to emotion even if they're not setting out with the intent of doing it, simply because I think it leads to a worse discussion.

And I'm comfortable making a "corrective upvote" because I think downvotes should be reserved for obvious spam, completely OT comments and comments that add nothing to the discussion at hand.


Thank you. I was writing a large diatribe of my own, but refreshed in another window and saw yours echoing what I wanted to say, plus a lot that I wouldn't have thought of.

The one other thing that strikes me is that, for the sake of argument, I will often convert more dependable facts into anecdotal form to ease understanding. I've found, through trial and error, that just stating the hard facts tends to lead into a circle of explanation, but stating that same information in more relatable terms is, simply put, more relatable.


I'm not carrying water for the original poster.

"There is no universal truth"? Are you sure? Because if that's true then there is no standard to judge whether one model is "higher-fidelity" and in fact there is nothing for science to do at all. Do you really believe that?

Do you really need to give up the idea that anything is actually true in order to dispute this blog post?

I don't understand how you figure that there is a choice between distrusting science and knowledge and silencing dissent. You seem to think that science and knowledge are just some form of political orthodoxy.


> Because if that's true then there is no standard to judge whether one model is "higher-fidelity" and in fact there is nothing for science to do at all. Do you really believe that?

Theoretically, there is a "universal truth", but for all intents and purposes, there isn't outside the realm of Math.

We judge science's fidelity by how well it correlates with repeatable experiments - which may be characterized by some "universal truth", but that's beside the point. In Newton's day and age, Newtonian mechanics seemed to describe essentially everything. And then it turned out to be a crude approximation that only works at large scales.

In 1900, there was a physics convention at which the tone was basically: we have everything worked out, except for three minor things - the Michelson-Morley experiment (explaining its null result required developing the theory of relativity), black-body radiation (solving this required developing quantum theory), and the photoelectric effect (which also requires quantum theory to explain properly).

> Do you really need to give up the idea that anything is actually true in order to dispute this blog post?

No. But you do need to give up the idea that you have certainty of knowledge about how true things are.

> You seem to think that science and knowledge are just some form of political orthodoxy.

In math, they aren't. In physics, they aren't.

In biology, it's not so clear.

In medicine, and nutrition, there's a ridiculous amount of political orthodoxy and "religious" beliefs -- and last I heard, they were considered sciences.


I never made any claim regarding my own certainty of knowledge, that is a straw man. Please refrain from giving me personal advice when you know nothing about me.

There is a huge difference between saying "something is true, but I don't know what (yet)" and "there is no such thing as truth"; between "a lot of people try to commandeer medicine to sell things" and "there is no actual truth of anything to discover in the field of medicine".


> Please refrain from giving me personal advice when you know nothing about me.

I was not giving you any personal advice. I was taking your "you" as a general statement to the reader, and replying with the same language pattern (e.g. if I said "you can bring a horse to water", I would actually mean "one can bring a horse to water".)

> There is a huge difference between saying "something is true, but I don't know what (yet)" and "there is no such thing as truth"; between "a lot of people try to commandeer medicine to sell things" and "there is no actual truth of anything to discover in the field of medicine".

Indeed, there is a huge difference, I don't think anyone is disputing that.

What some people (me included) are arguing is that what is considered "the state of the art" in the many sciences (other than math and physics) is actually not the result of the rigorous scientific study it is assumed to be, and that therefore well-reasoned and supported contrarian explanations, data and opinions should be welcome (they aren't; there's active suppression).


It's one thing when learned people dig into the science and find issues to disagree about; it is completely different when some ignorant reporter or blogger disagrees just based on anecdotes or how truthy it feels to them.


Someone who is good with statistics is more learned than the prominent experts in many areas when it comes to disagreement. See e.g. this: http://blog.sethroberts.net/2012/04/08/gene-linked-to-autism... - Roberts is a professor of psychology, but he wears a statistician's hat in this post. He does this often and in many fields, and more often than not, his review (though well argued) is discarded as "a blogger who disagrees just based on anecdotes and truthiness".

Yes, most criticism is useless, but ...

No, most research is NOT as sound as the researchers themselves believe.


You have provided no support for the idea that MOST scientists don't understand statistics. You link to criticism made about one reporter. I am much more prepared to believe that reporters don't understand statistics than scientists. Still, you have provided an anecdote in support of broad sweeping statements.


> You have provided no support for the idea that MOST scientists don't understand statistics.

That's true, but neither did you (or anyone else ever, for that matter) provide support for the idea that MOST scientists do understand statistics. See how easy it is to discard anything you disagree with?

> I am much more prepared to believe that reporters don't understand statistics than scientists.

That's fine, but (a) it doesn't say anything about how bad scientists are with statistics (only that they are slightly better than reporters, which I tend to agree with), and (b) this is an argument from bias/faith/religion/prejudice, not from science or data. You are just as guilty as anyone you criticize. You might be more right or less right, but you* don't have the moral high ground. (* general you).

> Still, you have provided an anecdote in support of broad sweeping statements.

What was that statement of yours about learned people digging into science? So now it is not enough for those people to know what they are talking about, they have to do it in a format you approve of.

I can provide tens more valid criticisms. I charge $200-$1000/hour for my line of work, and I'd be happy to take as much to work for you finding them, when I have some free time.

But I'll throw in a freebie: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1182327/ - though I suspect it will stop at most people's "no true scotsman/scientist" filter...


> There is no universal truth as the author seems to imply.

Indeed not, but to use the coined term, some things are truthier than others. By and large, we can make more informed and better decisions on the "truthiest" truths without mucking up the argument with individual, pointless anecdotes, especially when those anecdotes aren't backed by anything but what simply "happened" to them.


Another thing worth pointing out: how common is it that an actual valid scientific study is posted and someone says "Well this isn't true in my experience"?

More often than not, it's a case where some other article summarizes (most likely in an incorrect way) another study or someone on HN mentions a study, etc.


The author was referring to contrarian anecdotes in reply to statistical data, not someone's opinion.

- He is only arguing that there is universal truth by proxy, by arguing that the scientific method is more valuable than anecdotal evidence. If you don't agree with that, you probably disagree with most of HN (pure conjecture, does anyone else here think anecdotal evidence is more valuable than statistical analysis?)

- I agree with you on this point. I don't think we should downvote comments because a fun discussion is what the comments are for, not to try to prove or disprove a study.

- Statistical analysis is not a specialized form of anecdote. That's a stretch.


> He is only arguing that there is universal truth in proxy by arguing that the scientific method is more valuable than anecdotal evidence.

The strawman here is in equating (published) statistical analysis with the scientific method. Of course the scientific method is more valuable, but that's not necessarily relevant.

Please have a look at http://xkcd.com/882/ if you haven't already - what this comic describes is a very valid statistical analysis, according to the "scientific method", (only neglecting base rates like 99% of published papers do).

This is (unfortunately) very commonly practiced in the life sciences, including medicine -- sometimes knowingly but mostly unknowingly. Bad reporting not required for a horrible, long lasting effect on the future.


Scientists' use of statistics is often problematic, but that doesn't mean that it's valid to counter statistics with anecdote unless you have some larger argument.


Right. But it is also not valid to bring up statistics unless you can properly qualify their relevance, which is almost never the case. This sets a much higher bar for anecdotes than for published statistics, when the latter rarely deserves that high bar.

As a result, most arguments about science are invalid from a scientific-method point of view. But the claims brought up -- including anecdotes -- are often interesting and informative.


does anyone else here think anecdotal evidence is more valuable than statistical analysis?

To a certain degree I do. I would even go so far as to say that the provider of the contrarian perspective is making a scientific contribution, by pointing out the lacking external validity of the original study. I personally believe that the more interesting phenomena in science are those incidents when things act differently than expected. Contrarian anecdotes are most often the best starting places for these phenomena.


By this logic, HN'ers should downvote any anecdote describing successfully exiting a startup because statistics show that the vast majority of startups fail and these anecdotes describing success unduly influence the thinking of entrepreneurs.


Only if it's offered as a counterexample to the proposition that most startups fail.


One might suggest that such a proposition is an underlying premise of HN.


You should question everything. This includes studies published and labeled "scientific".

Contrarian anecdotes are important (but they too should be questioned).

Nothing should be above questioning; even prevailing wisdom.


>even prevailing wisdom.

Especially prevailing wisdom, since it is least likely to be questioned normally.


Are people joking when they defend anecdotal stories with "once this anecdotal story proved to be true and the study was faulty"?

The proposition that the article makes is not that anecdotal stories must be false, but that they might influence the reader more than they should. Thus they should not be encouraged.


Yeah, thanks for deciding how gullible I am, and please do sanitize what I get to read for my own good. That's how open dialog works, right?


What do you think the voting system on HN does? It promotes good content, thus also demotes bad content. The author is saying "Here is a reason that content X is bad. If you agree with me, downvote it".


Doesn't make sense. If the content is 'bad', a moderator could expunge it. Or the writer could be counseled on making better posts. Or any number of things.

The voting system on HN sometimes demotes bad content. Far more often (as discussed widely elsewhere) it's used to assert agreement or disagreement. When used in this sense, it's a sort of instant poll.

In the case of anecdotal argument, it forms an ad-hoc 'scientific' experiment. The HN population weighs in using their own experience. Those whose memory (or, OK, memory-triggered emotional response) aligns with the author's may upvote, etc.

If 'bad content' is uniformly suppressed instead, this social experiment is lost, and the community loses. The merit of instant polls is debatable, but other such polls are supported here. In fact, the test group on HN may exceed the original 'scientific paper' group by an order of magnitude. Its statistical significance may exceed a graduate student's narrow study.


> Its statistical significance may exceed a graduate student's narrow study.

And thanks to selection bias, its external validity is likely zero. "Our HN poll found that 1 in every 10 people has founded and sold a company."
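
As a toy illustration of why (completely made-up response rates, purely for the shape of the effect):

  # Toy sketch: founders are rare in the population but heavily
  # over-represented among people who see and answer an HN poll.
  import random

  random.seed(1)
  population = 1_000_000
  true_founder_rate = 0.001        # assumed: 0.1% of people have founded and sold a company
  p_answer_founder = 0.02          # assumed chance a founder sees and answers the poll
  p_answer_other = 0.0002          # assumed chance for everyone else

  founders_answering = sum(
      random.random() < p_answer_founder
      for _ in range(int(population * true_founder_rate))
  )
  others_answering = sum(
      random.random() < p_answer_other
      for _ in range(int(population * (1 - true_founder_rate)))
  )

  poll_estimate = founders_answering / (founders_answering + others_answering)
  print(f"true rate: {true_founder_rate:.3%}, poll says: {poll_estimate:.1%}")
  # With these made-up rates the poll reports roughly 10%, about a 100x overestimate.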


Rebuttals to articles (ones that are voted up, anyway) in the comment section are almost always well-reasoned refutations. This article is addressing a non-problem.


Contrarian anecdotes are an ill-conceived method of contributing to a conversation, and of doing so in some sort of attention-getting way. They appear to all but undermine the entire premise of the original content, and yet are based on absolutely nothing but one's individual experience with the matter.

I see this kind of stuff all the time on Reddit, and it's become so numbing that you know, no matter how well you construct an argument or how many facts you cite, someone will always come out of the woodwork and tell you how you're wrong because it didn't happen that way for them. It wouldn't be so bad if people didn't take that as a genuine counter-argument.


Contrarian anecdotes can be very useful (and logically sound) when they are contrary to a generalization. If someone asserts "f(x) = 5 for all x in A", "f(a) = 3 and a in A" is a logical contradiction proving the assertion false.

And, frankly, if we're talking about public discussion of scientific papers, inappropriate generalization is as big a problem as contrarian anecdotes, if not bigger. Scientific papers often cover highly specific observations that are useful primarily to other researchers, and often even then not for many years. People then try to apply that specific knowledge to practical day to day situations.
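
To put the first point in code (a trivial sketch, with a made-up f and A): one counterexample settles a universal claim.

  # One counterexample refutes "f(x) = 5 for all x in A".
  A = range(1, 11)

  def f(x):
      return 5 if x != 7 else 3    # hypothetical function with a single exception

  universal_claim = all(f(x) == 5 for x in A)
  counterexamples = [x for x in A if f(x) != 5]

  print(universal_claim)    # False
  print(counterexamples)    # [7] -- a single element is enough to falsify the claim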


Have you ever seen that sort of generalization in an article on Hacker News? Even when you have situations where, e.g., nobody has ever recorded a case of a person with a certain genotype being infected by a norovirus, you don't see papers claiming that it's impossible.


Yes. Realize that often the generalizations are a lot more subtle than your example, and of course anecdotes aren't always the best or only counter-arguments you'll see, even if they are valid. Start looking and I am sure you will find examples.


I'm all in when it comes to calling bullshit on something. But because we (hackers) are used to boolean decisions, we look for counter-examples.

But these are just non-arguments when the finding is "In 9 out of 10 cases, ...". The anecdote is the 1 out of 10 case.

Let's question everything, and when something is suspicious, by all means say so! But I guess what the author is trying to say (beyond the whole downvote discussion) is that we should try to reason and make a proper argument, especially one that is not already addressed in the research.


> But these are just non-arguments when the finding is "In 9 out of 10 cases, ...". The anecdote is the 1 out of 10 case.

The whole point of the article is that people won't judge that anecdote as only 0.1 relevant, but as much, much more (I'd say close to 0.9 if they personally prefer what the anecdote says to what the research says). It's a general human flaw that's very visible in day-to-day interactions with people.
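
To put rough numbers on that, a back-of-the-envelope sketch (my own figures, with a flat Beta(1,1) prior assumed): how much should one contrary anecdote move a belief backed by a study reporting 90 successes in 100 cases?

  # Beta-Binomial sketch: the posterior barely moves after one extra failure.
  study_successes, study_failures = 90, 10
  a, b = 1 + study_successes, 1 + study_failures   # Beta posterior after the study

  mean_after_study = a / (a + b)                   # ~0.892
  mean_after_anecdote = a / (a + b + 1)            # ~0.883 (one more observed failure)

  print(round(mean_after_study, 3), round(mean_after_anecdote, 3))
  # A rational update shifts the estimate by about 0.01, not from ~0.9 down to ~0.1.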


"But in the presence of statistical evidence, don’t tolerate contrarian anecdotes, and don’t make them yourself, knowing the exaggerated impact they can have."

When you start to design a statistical experiment, you have already made an important methodological choice. See

http://en.wikipedia.org/wiki/Positivism

http://en.wikipedia.org/wiki/Grounded_Theory

I've noticed that in my own field, which is education, there appears to be a fondness for sophisticated statistics, even though no manager ever allocated students to teachers on a double-blind, random basis. An excellent example is the way the UK Education ministry has decided that 'phonics' is the way to teach reading.

http://www.bbc.co.uk/news/education-18462214

I think this general tendency might be an example of 'white coat syndrome' in action: the belief that using formal statistical techniques might increase the meaningfulness of the results. I suppose that is a form of cargo cult.

This is hacker news, a forum aimed at people with novel business proposals and new software to try out. Should you be trying to find 'the Truth' or should you be building some grounded theory that tells you what to do next, provisionally, now, today?


I guess it depends on the story. If HN is supposed to be about startup culture and technology, the anecdote is essential to being vibrant. What is a startup if not a story whose premise is unproven?

Now if you're telling me that your sister stuck a magnet in her ear and cured her cancer, I'm not going to give that credence without some real data.

Most things on HN are not that, and a good story often compels us to think. So FWIW, I'm not downvoting contrarians, and I'm not downvoting anecdotes for being anecdotes.


I totally get where the author is coming from, but I am not reeeally sure if that is good advice. Anecdotal evidence has its place, e.g. proof by counterexample, challenging false but widely held beliefs, or alerting people to new circumstances.

A: "Murrumbidgee River is fun and safe for children to swim in! No one came to any harm there for the last 100 years!" B: "Dunno... my dog was eaten by a crocodile there last Saturday..."

B's dog is a sample of one, but maybe worth paying attention to.


In your example, A is not relating a statistical or experimental finding, but instead dispensing commonsense knowledge which happens to be objectively false. It's not what the OP is talking about.


Common sense, like "ulcers are caused by diet and anxiety", or that salt raises blood pressure?

I think a good point is made: statistical evidence is also misleading - it deliberately ignores (averages out) the extreme cases. The results are a distribution; statistics folds that into one number. Anecdotes fill out the distribution.


Ulcers are not generally caused by diet and anxiety, but rather by bacterial infection. Anecdote is a much more compelling refutation of "common sense" than it is of empirical statistical evidence.

Statistical evidence is not misleading; it's simply the case that if you are seeking outliers, as in your example, then looking at measures of central tendency won't contain what you're looking for. Anecdote has no role in "fill[ing] out the distribution."
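
A tiny illustration (made-up numbers): the mean and the tail of the same data answer different questions, and if you care about the extremes you should ask for tail statistics rather than anecdotes.

  # The same hypothetical measurements, summarized two ways.
  data = sorted([1, 1, 1, 1, 2, 2, 2, 3, 3, 50])

  mean = sum(data) / len(data)            # 6.6 -- the "headline" number a study reports
  tail = data[int(0.95 * len(data))]      # 50  -- a crude 95th-percentile, the case anecdotes come from

  print(mean, tail)
  # Both are properties of the distribution; anecdotes are just a noisy, biased
  # way of sampling the tail, whereas a study could report the tail directly.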


You don't challenge "false, but widely held beliefs" with anecdotes - those beliefs are exactly what an anecdote becomes when it starts being repeated.

Alerting people to new circumstances is a good use, but that's pretty much all. Proof by counterexample works well only in mathematical logic, which is not how the real world works[1]. In reality you enter the domain of probability theory, and there counterexamples work exactly as the article's author says: as evidence that needs to be properly weighted. And it is exactly these weights that people vastly overestimate in the case of anecdotes.

[1] - by this I mean that mathematical logic is not how we model, comprehend, and operate on reality.


A contrarian anecdote may help expose an article that is itself false or misleading. For example: a review written by the manufacturer itself, or astroturfing.

First we must determine the validity of the article itself and any motivations or biases in it. For this, contrarian anecdotes are useful. This article seems to miss that point.

Once we can accept the article at face value, then we may hit the biases described.


I would say it is far easier to astroturf a contrarian anecdote.


Or maybe just don't upvote? Good argumentation can still get in the way of why HN is here. It's not just a place for formal argumentation, but also for exchanging experiences or getting opinions from others. In some cases the opinion is wanted from experts (how can I get funded, etc.); in others it is about how a product is liked by general users. So please take a grain of salt in applying that ;)


This is an extremely toxic position, which will lead to novices (who outnumber the experienced significantly) mathematically abusing voting systems to promote whatever this month's fad tool golden hammer is.

The herd is rarely correct.


Well I think Stalin summed up the perils of totally ignoring anecdotes pretty well:

"One death is a tragedy; one million is a statistic."

This topic sort of came up the other day in the thread about the girl losing the iPad software she needs to talk (Silencing Maya). http://news.ycombinator.com/item?id=4103344

Some commenters thought the story should be ignored as a data point about the societal value of patents. I disagree in that case, because I don't believe economics or the social sciences have anywhere near the amount of rigor and theory to make a good claim on whether the patent system is a net benefit to society. Deciding such a thing is a very old, classic problem in philosophy. The OP seems to be implicitly relying on a variant of utilitarianism, which IMO is woefully inadequate to rely on for moral decisions.

Since science is so hard, there is a lot of bad science out there. What gets reported in the wider media has all sorts of weird selection bias, never mind what gets picked for publication in journals. Anecdotes are human stories, and they are deeply connected to why we care. Statistical tools can be used in ways that justify harming people in the name of the greater good. I agree that scientific medicine provides tools for "real" medicine that other methods don't. I just think we should remember real people's stories, especially if they are true. Anyone reading this probably gets that one can't exactly counterbalance the effect of anecdotes within themselves. But it can still be countered somewhat, and I think it's worth it to hear stories (that are relevant to the topic). Now what is the basis of my beliefs? Mostly intuition, not rationality. But I don't think it's possible to undermine my ideas with some kind of experimentally based argument. There are just way too many variables!


I don't understand the part about Stalin's advice. What was Stalin saying would happen if you ignored anecdotes? How do you figure he was talking about the value of anecdotes?


He was saying that large undertakings (war, for example) that kill hundreds of thousands of people can be tolerated politically by a populace in a way that a single murder, which might cause an outrage, cannot. The large numbers make the horror abstract. It's easier for people to relate to a story about a specific individual.

The quote (and its variations) is quite famous; it might be apocryphal, but I think it is true. Hearing about hundreds of thousands of people massacred in a foreign land doesn't hit home when you are reading about it from far away.

That's why reporters (New York Times style) try to weave in illustrations and stories about individuals even when discussing a larger trend.


Funny thing, I always took this quote as Stalin cynically remarking on exactly OP's point: people overweight anecdotes by thousands to millions of times their actual worth as Bayesian evidence.


As a caveat, you should only do this in response to data-based articles. Articles that are themselves anecdotal should receive contrarian anecdotes. I think the more general idea is that comments should be held to standards of evidence similar to those of the article itself.


If a post is about a topic that can and has been thoroughly researched and a reasonable conclusion has been drawn, then sure, a contrarian anecdote should probably be disregarded. But honestly, how often does that apply?

Most of the conversations on HackerNews aren't about things that have right and wrong answers. As a matter of fact, many of the most popular posts are nothing more than anecdotes themselves. So why exactly should an anecdote in the comments be downvoted?


As someone whose ability to work was saved thanks to anecdotal evidence, I could not disagree more with the OP. Well, sure, if someone has anecdotal evidence that the theory of gravity is wrong, or that 1 + 1 = 4, by all means downvote them, but not all scientific evidence is on such solid ground.

E.g., I've been prescribed medication and ended up with a side effect not listed in the drug literature. The doctor told me that the side effect must be psychosomatic. Later scientific evidence revealed that about 50% of patients given the drug experienced the same side effect, but that it had previously been underreported. Who's to say that researchers would have even bothered to research the side effects more thoroughly if they hadn't paid attention to the fact that the anecdotal evidence contradicted the scientific evidence?

Or remember when the scientific evidence seemed to indicate that a high carbohydrate, low fat diet was the healthiest choice? Should I have ignored my anecdotal evidence that that diet made me feel like crap?

Science sometimes gets itself into harmful orthodoxies. See Thomas Kuhn for more info on this if you are unfamiliar. One example of this is Behaviorism in Psychology. In this field, it was scientific orthodoxy for many decades that Behaviorism was scientific and Cognitive Psychology was not because Behaviorism was based on only quantifiable, measurable data. It took Chomsky to point out the idiocy of this orthodoxy, thereby breaking the orthodoxy, allowing science to progress.

Re the anecdotal evidence that saved my career: I've read here that there is no scientific evidence that ergonomic keyboards can help prevent or ameliorate RSI. I am 100% sure, however, that the Kinesis Contour keyboard saved my ability to type. All I have to do to know this for sure is listen to my body. Furthermore, I personally know about a dozen programmers who feel similarly. I'm sure that someone will pipe in that this is almost certainly the placebo effect. If that's the case, then Kinesis makes the world's very best placebo, as placebo-like things have done precious little for me in any other area. In any case, even if it were the placebo effect, what does it hurt anyone to ignore the putative scientific evidence and try out a Kinesis keyboard for themselves to see if it provides them with relief?

The idea that posts should be downvoted for recommending such ergonomic keyboards is insane.


If the best contrarian comments are weak, I assume the original article is probably correct.

If I don't see any contrarian comments, then I suspect groupthink in the discussion.


It's true, anecdotes are not data and have no statistical weight whatsoever.

However, anecdotes can and do (and should) play a large role in influencing how we think. They can humanize a problem and create food for thought in a way that no amount of statistics ever could.

For example, it's one thing to cite statistics showing that children of gay parents do just as well as those of straight parents (I have no idea if it's true, but it's an interesting contemporary question). But that's not likely to change the mind of a homophobe on the issue of gay adoption. On the flip side, a lucid, heartfelt anecdote from a person who had gay parents might actually help someone understand what it's like to grow up in that environment and therefore become sympathetic.

Of course, it has no bearing on the actual statistics at all, but sometimes statistics aren't the most important thing.


That is an excellent point, and well expressed. I do see how anecdotes and stories can be used in a very positive way to shift opinion; there are definitely some grey areas. (How do you know you're on the right side?)


Please forgive me if I have fundamentally misunderstood this, but is this article suggesting that anecdotes are somehow comparable to scientific research data?

If one wants to "attack" an anecdote, then a contrarian anecdote is the weapon.

If one wants to attack scientific data, you need contrarian scientific data.

At least, I hope that is right. Therefore, to mix the two is like attacking a tank with a wooden stick.

Surely, anecdotes are used as the premise of scientific research. Lots of people tell stories. There seems to be something interesting going on. Then you do the research and produce the data. If the data is conclusive, then you're past the anecdote. If later contrarian anecdotes appear, and they seem significant, off you go to scientific research again.

I know I'm wrong somewhere. But where?


I've noticed that many of my biases, typically for or against some name brand or technology, even if not anecdotal, are old. I now try to remember not only how I came to this opinion, but when. I guess that's one of the consequences of aging.


My dad once received an inappropriate form letter from Texaco in the 1970s and has never gone to a Texaco station since.


I'm curious what some of your (the community's) thoughts are on articles like this. While the article itself has a large amount of personal bias and unsubstantiated claims, the conversation it has broached has been rather rich and enlightening. However, that is largely due to its controversial and contrarian (ironically) nature. That said, while the conversation has been thoughtful and looks to have largely refuted the article, because so much of the commentary here has served to downplay the original article, my question is: "Was this article worth our time, or just an elaborate waste of time since the premise was bunk?"


I downvoted some contrarian anecdotes once, and it ruined the thread.


I downvoted some contrarian anecdotes once, and my "downvote" button was disabled.


Anecdotes have their place, especially in complex domains. For example, I have a friend who uses medical marijuana successfully to control MS symptoms. That is a sample size of one, but it certainly colors my view of the debate. I have a much deeper view into the decision making process of a MS sufferer now than I would otherwise, one that goes beyond spasticity, pain, and cognitive scores.


This is an interesting first submission by a Hacker News participant who joined the community 551 days ago. The core idea in the submitted blog post (by the submitter here) is

"Contrarian anecdotes like these are particularly common

http://news.ycombinator.com/item?id=4076643

http://news.ycombinator.com/item?id=4076066

in medical discussions, even in fairly rational communities like HN. I find this particularly insidious (though the commenters mean no harm), because it can ultimately sway readers from taking advantage of statistically backed evidence for or against medical cures. Most topics aren’t as serious as medicine, but the type of harm done is the same, only on a lesser scale."

The basic problem, as the interesting comments here illustrate, is that human thinking has biases that ratchet discussions in certain directions even if disagreement and debate are vigorous. The general issue of human cognitive biases was well discussed in Keith R. Stanovich's book What Intelligence Tests Miss: The Psychology of Rational Thought.

http://yalepress.yale.edu/yupbooks/book.asp?isbn=97803001646...

http://www.amazon.com/What-Intelligence-Tests-Miss-Psycholog...

The author is an experienced cognitive science researcher and the author of a previous book, How to Think Straight about Psychology. He writes about aspects of human cognition that are not tapped by IQ tests. He is part of the mainstream of psychology in feeling comfortable with calling what is estimated by IQ tests "intelligence," but he disagrees that there are no other important aspects of human cognition. Rather, Stanovich says, there are many aspects of human cognition that can be summed up as "rationality," which explain why high-IQ people (he would say "intelligent people") do stupid things. Stanovich names a new concept, "dysrationalia," and explores the boundaries of that concept at the beginning of his book. His book shows a welcome convergence in the point of view of the best writers on IQ testing, as James R. Flynn's recent book What Is Intelligence? supports these conclusions from a different direction with different evidence.

Stanovich develops a theoretical framework, based on the latest cognitive science, and illustrated by diagrams in his book, of the autonomous mind (rapid problem-solving modules with simple procedures evolutionarily developed or developed by practice), the algorithmic mind (roughly what IQ tests probe, characterized by fluid intelligence), and the reflective mind (habits of thinking and tools for rational cognition). He uses this framework to show how cognition tapped by IQ tests ("intelligence") interacts with various cognitive errors to produce dysrationalia. He describes several kinds of dysrationalia in detailed chapters in his book, referring to cases of human thinkers performing as cognitive misers, which is the default for all human beings, and posing many interesting problems that have been used in research to demonstrate cognitive errors.

For many kinds of errors in cognition, as Stanovich points out with multiple citations to peer-reviewed published research, the performance of high-IQ individuals is no better at all than the performance of low-IQ individuals. The default behavior of being a cognitive miser applies to everyone, as it is strongly selected for by evolution. In some cases, an experimenter can prompt a test subject on effective strategies to minimize cognitive errors, and in some of those cases prompted high-IQ individuals perform better than control groups. Stanovich concludes with dismay in a sentence he writes in bold print: "Intelligent people perform better only when you tell them what to do!"

Stanovich gives you the reader the chance to put your own cognition to the test. Many famous cognitive tests that have been presented to thousands of subjects in dozens of studies are included in the book. Read along, and try those cognitive tests on yourself. Stanovich comments that if the many cognitive tasks found in cognitive research were included in the item content of IQ tests, we would change the rank-ordering of many test-takers, and some persons now called intelligent would be called average, while some other people who are now called average would be called highly intelligent.

Stanovich then goes on to discuss the term "mindware" coined by David Perkins and illustrates two kinds of "mindware" problems. Some--most--people have little knowledge of correct reasoning processes, which Stanovich calls having "mindware gaps," and thus make many errors of reasoning. And most people have quite a lot of "contaminated mindware," ideas and beliefs that lead to repeated irrational behavior. High IQ does nothing to protect thinkers from contaminated mindware. Indeed, some forms of contaminated mindware appeal to high-IQ individuals by the complicated structure of the false belief system. He includes information about a survey of a high-IQ society that found widespread belief in false concepts from pseudoscience among the society members.

Near the end of the book, Stanovich revises his diagram of a cognitive model of the relationship between intelligence and rationality, and mentions the problem of serial associative cognition with focal bias, a form of thinking that requires fluid intelligence but that nonetheless is irrational. So there are some errors of cognition that are not helped at all by higher IQ.

In his last chapter, Stanovich raises the question of how different college admission procedures might be if they explicitly favored rationality, rather than IQ proxies such as high SAT scores, and lists some of social costs of widespread irrationality. He mentions some aspects of sound cognition that are learnable, and I encouraged my teenage son to read that section. He also makes the intriguing observation, "It is an interesting open question, for example, whether race and social class differences on measures of rationality would be found to be as large as those displayed on intelligence tests."

Applying these concepts to my observation of Hacker News discussions after 1309 days since joining the community, I notice that indeed most Hacker News participants (I don't claim to be an exception) enter into discussions supposing that their own comments are rational and based on sound evidence and logic. Discussions of medical treatment issues, the main concern of the submitted blog post, are highly emotional (many of us know of sad examples of close relatives who have suffered from long illnesses or who have died young despite heroic treatment), and thus personal anecdotes have strong saliency in such discussions. The process of rationally evaluating medical treatments is the subject of entire group blogs with daily posts

http://www.sciencebasedmedicine.org/index.php/about-science-...

and has huge implications for public policy. Not only is safe and effective medical treatment and prevention a matter of life and death, it is a matter of hundreds of billions of dollars of personal and tax-subsidized spending around the world, so it is important to get right.

Blog post author and submitter here tylerhobbs suggests disregarding an individual contrary anecdote, or a group of contrary anecdotes, as a response to a general statement about effective treatment or risk reduction established by a scientifically valid

http://norvig.com/experiment-design.html

study. With that suggestion I must agree. Even medical practitioners themselves do have difficulty sticking to the evidence,

http://www.sciencebasedmedicine.org/index.php/how-do-you-fee...

and it doesn't advance the discussion here to bring up a few heart-wrenching personal stories if the weight of the evidence is contrary to the cognitive miser's easy conclusion from such a story.

That said, I see that the submitter here has developed an empirical understanding of what gets us going in a Hacker News discussion. Making a definite statement about what ought to be downvoted works much better in gaining comments and karma than asking an open-ended question about what should be upvoted, and I'm still curious about what kinds of comments most deserve to be upvoted. I'd like to learn from other people's advice on that issue how to promote more rational thinking here and how all of us can learn from one another about evaluating evidence for controversial claims.


You shouldn't downvote to censor relevant information, period. You should only downvote if the material violates site guidelines. Downvoting to disagree is lazy, substituting easy censorship for thoughtful response. Instead, post your counter-argument or upvote an existing opinion that already expresses your position.


Censoring relevant information is, of course, never good, but the author's contention was that the information isn't actually relevant. It's always good that people who are wrong get a cogent reply, but beyond that, adding a "me too" post doesn't have any more benefit than a downvote does. I used to just upvote posts I agreed with, but since those are no longer public, the only way to clue readers in to the consensus is to downvote.


I'm not sure I'd agree he was arguing for the irrelevance of anecdotal evidence, only that it has a disproportionate impact when considered alongside statistical evidence. Rather than censoring the anecdotes, the better solution is to simply point out the danger of overweighting them.

The purpose of a site like HN is not to arrive at some kind of imaginary consensus, it's to inform and engage people in meaningful discussion. There is no winning side to be on, and no ultimate arbiter of truth. By downvoting as you do, you rob the site of content in exchange for an illusory sense of victory.


  For example, if a close friend goes on and on about how the Ford he bought 
  was a piece of crap, detailing how the transmission failed at 30k miles 
  and the rear-view mirror fell off, you’ll be wary about buying a Ford in 
  the future, even if Consumer Reports rates them highly.
Personal anecdote: I've got a '93 Ford Explorer. It has 237,000 miles on it, and has yet to have a single major component failure. I've been waiting for the goddamn thing to die for the last 70,000 miles, so I can buy a car that does better than 13 miles to the gallon, but no dice.

Don't buy Ford trucks. They're too reliable.


Have there been stories with seemingly solid statistical data where anecdotal evidence showed that the statistical evidence was wrong?


Without contrary viewpoints there is always a danger of falling into overconfidence and group-think.


I disagree. In my experience, this article is wrong.


The author didn't pick good examples of contrarian anecdotes found on HN. They are contrary to a study on coffee consumption and dementia. That study, however, is based on a sample of only 124 people from Tampa and Miami. The study is just as likely to have found an isolated effect as the posters with family members who consumed a lot of coffee and still developed Alzheimer's.

You would be better served by this:

https://sites.google.com/site/mccormickphilosophy/home/criti...

The author must be a frequent coffee drinker who didn't like that some people had a different experience with coffee than others, and felt compelled to write that post. It's not the contrarian anecdotes that left me with the sense that the research findings weren't conclusive. It's the fact that a population of 124 people in two cities is not at all representative of the target population, which numbers over 40 million according to the Census Bureau.

http://quickfacts.census.gov/qfd/states/00000.html

Maybe when the study researchers conduct a larger study I'll believe them.


Great comment. I am most certainly not claiming that you shouldn't question research findings on other grounds, such as sampling bias or statistical significance, as you have done here; those types of points are extremely valuable.


N=124 can definitely give statistically significant results. Consider the following thought experiment: you have a coin that you think might be loaded, and you flip it 124 times. If you get 119 tails, it is almost certainly loaded. If you get, say, 66 tails, that could easily be due to chance. Moral: the larger the effect you study, the smaller the sample size you can get away with. It might be that people from Tampa and Miami differ in some important way from other people; however, if you want to level this criticism, you should give a reason for suspecting so. "The study is just as likely to have found an isolated effect as the posters with family members who consumed a lot of coffee and still developed Alzheimer's." No, no, triple no.
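
If you want to check those coin numbers yourself, here's a quick sketch (exact binomial tail probabilities, my own code):

  # Exact tail probabilities for 124 flips of a fair coin.
  from math import comb

  def p_at_least(k, n=124, p=0.5):
      """P(X >= k) for X ~ Binomial(n, p)."""
      return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

  print(p_at_least(119))   # ~1e-29: 119 tails is overwhelming evidence of a loaded coin
  print(p_at_least(66))    # ~0.26: 66 tails is entirely consistent with a fair coin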


But .... but .... anecdotes are what politicians use to get elected !!!! (e.g. Joe the Plumber, et al).

If we downvote anecdotes, we won't have a justification to vote for a scumbag politician with a heartfelt story to tell!!

edit: here's a terrific example!!! http://www.washingtonpost.com/local/alabama-law-drives-out-i...


I'm going to downvote this article because it contradicts everything said here.


Forget what you and your friends experience. If we manage to get a piece of research sponsored, you'd better believe it!

Sorry, I'll go with the experiences of myself and people I trust over research/article spin.


If my friends and I have problems with your product, it doesn't matter how reliable your research spins it as; I won't believe you. Forget research: make something my friends and I, or people I hear about, don't have problems with. That's all I pay attention to, and it works a lot better than the alternative. (Paying attention to whatever you "prove" at a statistically significant level - never mind how many commissioned studies you don't publish, thereby completely invalidating that statistical significance - and ignoring anecdotes simply doesn't work, for me or anyone else.)

This article is literally asking for the right to lie (under the guise of 'research') and asking us to mod down anyone who calls them out on it. It really takes some nerve to say "Ignore what you experience - and vote down the experiences of others - and trust our data instead."

Next you'll sell me the most reliable cloud on the planet. All the responses on the article say they've had nothing but problems and downtime. But, I should just ignore these, right?


Isn't this based on the assumption that the article to which the anecdotes are offered as replies is omniscient? Which may in and of itself be fallacious.


The assumption is only that the article is based on a scientific study, not that it is omniscient.


A study that rigidly follows scientific principles but is intellectually dishonest (i.e., sponsored by an interested party in the studied subject, etc.) is often not useful to the reader of the study's results. You can follow the letter of the principles and still flout the spirit.

One should, when evaluating a study, question the funding. Likewise, dissenting opinions must also be examined for what interested parties had a hand in their creation.


Sponsorship does not imply intellectual dishonesty. Intellectual dishonesty is a matter of how the argument is made. Sponsorship is a heuristic you could use to quickly filter papers but used as a counter-argument, it is simply ad hominem.

You are describing a method for judging arguments without thinking critically about the arguments themselves or examining their basis in evidence and that is not any kind of science. One does not find any measure of objectivity by averaging between opinions, only by holding arguments to the yardstick of rationality and evidence.


Yes, examining the biases and methodologies of a study is the correct way to present a contrarian opinion. Providing an anecdote is much less useful.


Thanks, you were great too.



