Hacker News
What the Dunning-Kruger effect is and isn’t (2010) (talyarkoni.org)
81 points by damienkatz on Aug 31, 2015 | 25 comments



Looks like a good article overall. I have just one qualm:

> The plural of anecdote is not data.

Ah, but it is. It's biased, hard to analyse, fraught with many perils… but it is still relevant information. Even though anecdotes are rarely conclusive, they can often tell you where you should look next.

This is similar to the correlation/causation thing. Sure, correlation doesn't imply causation. But it sure makes it much more probable.


Ah, but it often glosses over the important distinguishing factor of data: statistical significance.

Simply having "plural" is not enough. You need a well-designed experiment, with well-controlled variables, good methods and measurements, good consistent recording, and enough points of that nature to show significance. And all that without natural biases, such as the confirmation bias so common with anecdotes. You can have as many self-reported stories from a self-selecting population as you like and it still won't be good data, due to the inherent bias in the method.

It's difficult to recognize these biases, which is why we say "the plural of anecdote is not data." Not because some data isn't a series of anecdotal points, but because good data is often so much more, and it's important to respect that. Sure, use it to guide your instincts, but don't mistake it for rigorous science.


You're making a lot of assumptions, though. You're assuming self-selection, and you're assuming what kinds of situations the data is being used in. The plural of anecdote isn't data, but a group of anecdotes isn't inherently not data.

Further, a collection of many anecdotes is not necessarily great data, but the quality of data can be taken into account when making assertions about it, and you can attach confidence ratings to assertions made based on data with known faults.

The problem here is that "the plural of anecdote isn't data" is just a short, snappy thing to say, that doesn't capture any of the nuances of what data is and how it's used. A lot of times it's used to express a legitimate point, but I think we could express that point more accurately by actually talking about what data is instead of oversimplifying the concern.


Statistical significance? At what p-value? Controlled studies are great, but claims of "statistical significance" are not as strong as we make them out to be. I also don't like this idea of an arbitrary threshold.

For the nice case of comparing two hypotheses, I'd rather talk about decibels of evidence: which hypothesis does the data support, and how strongly does it support it. That way you don't even have to choose a null hypothesis. (You still have to work with some "priors", but in practice you have to assume things anyway.)
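To make that concrete, here's a minimal sketch (my own example, not from the article): following Jaynes' convention, the evidence an observation gives H1 over H2 is ten times the base-10 log of the likelihood ratio.

```python
import math

def evidence_db(likelihood_h1: float, likelihood_h2: float) -> float:
    """Evidence in decibels for H1 over H2: 10 * log10 of the likelihood ratio."""
    return 10 * math.log10(likelihood_h1 / likelihood_h2)

# Suppose an observation is four times as likely under H1 as under H2:
print(round(evidence_db(0.8, 0.2), 1))  # prints 6.0 (about 6 dB in favor of H1)
```

Evidence from independent observations just adds, which is what makes the decibel framing convenient: three such observations together contribute about 18 dB.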

> Sure, use it to guide your instincts, but don't mistake it for rigorous science.

"Rigorous science" is the easy part. It's the point where you already have some insight or theory in mind, and you just have to test it. But first, you need to get that insight, or formulate your theory. At that point, your instincts is pretty much all you have.


Yeah, even the original quote is "The plural of anecdote is data"

http://evidence-based-science.blogspot.ie/2009/11/plural-of-...

If it wasn't, where would data come from?


Controlled, repeatable studies designed to counter biases and generate meaningful findings. If you really loosen up the meaning of the word "anecdote" then sure, a series of conforming anecdotes does become data. But it doesn't follow that all collections of anecdotes are data.


Further, when dealing with psychological or (many) sociological/biological subjects where you cannot do a double-blind test for moral reasons and all data is anecdotal (i.e. self-reported), don't researchers overlook this bit of conventional wisdom?

In fact, from my experience (haha), it seems that people use anecdotal evidence whenever it helps them or supports a popular opinion, but then demand stronger evidence when it goes against their current belief.


The correlation/causation trope is a thought-terminating cliché that we often need to look past. To me, its use is often a signal worth noting: that one is being manipulated by rhetoric.


You're right, but I think more importantly sometimes data is not data. A white suburbanite will collect anecdotes from other white suburbanites, for example that the police are to be trusted. Then a study which doesn't properly control for these variables says the same thing. The anecdotes are actually more informative, since they are passed by word of mouth among similar people: they get at facts about the world that are true of the people you associate with. A black person would be unlikely to trust these anecdotes, but dress it up in SAS and call it a study and everyone pretends it's objective. More insidious yet, selection biases are usually this damning but much less obvious, and most studies are run on a select group of upper-middle-class educated psychology majors.


"The plural of anecdote is not data" is usually a shorthand for "Not every set of anecdotes is useful data."


The Dunning-Kruger effect effect: The perceived ability in people who have heard about the Dunning-Kruger effect to judge other people's abilities by observing their self-assessments.


I always find it ironically amusing when people without a psychology degree invoke Dunning-Kruger to explain things.


I find psychology degrees ironically amusing.


I thought I was experiencing the Dunning-Kruger effect, but it turns out I am just an idiot.


I often wonder if I'm an idiot too or just experiencing the DK effect.



No one will read this, but ... this reinforces my view that the single most powerful skill for creating a bootstrapped mind is metacognition. Without it, one cannot create the feedback loop for self-improvement. The self-taught learners who haven't gone off the rails have, for the most part, a highly tuned ability to know _why_ they know what they know. Teaching a child how to learn is teaching them metacognition.


I was tested for learning disabilities around the 2nd grade. At the end of the process they offered to let me skip the 3rd grade (I declined, but I only recently came to regret my decision).

I don't recall if I figured this out on my own or if someone trained me, but I do know this was about the time I started a process of metacognition, wherein my internal dialog became a deliberate process of repackaging the things the teacher was saying into something that made sense to me. Instead of trying to cram their mental model into my head, I would build my own hypotheses and test them against all the examples that were provided, then abandon them for a new one if they didn't fit. [Edit] And when I got the answer wrong I would halt the presses and try to figure out why I was wrong. I used to joke that if I wanted to remember something forever, all I had to do was get it wrong on a quiz and it would stick in my brain for eternity.

Prior to that, I assumed that if the teacher said something that didn't make sense, I was stupid. After, I assumed the teacher was just speaking a foreign language and I needed an internal translator. I went from almost held back to honor roll in the space of about a year.

(this trick, incidentally, made me an exceedingly good troubleshooter in my professional life, and "If you can't figure it out, go ask hinkley" is a common strategy almost everywhere I've worked. Not only can I probably solve your problem, but the mental tax is lower on me than it was on you, because it is my idiom to process facts this way. I am literally always thinking about thinking.)


Yes! Metacognition is the core of cognition. We need to teach more people this.


For me, the assessment of Dunning-Kruger is missing a key control group: people who don't claim any aptitude in the relevant skill. For a newbie, that's the class of people they are most likely to compare themselves to. E.g. the typical small office's Excel guru knows a few `@` commands, not pivot tables and linear regression, and its computer whiz has a general picture of directories as a tree structure and can use `cd` and `dir` and `*.bak` to find lost files.

To put it another way, the less skilled practitioner is comparing what they know today versus ignorance. An experienced practitioner has been around long enough to have experienced tough problems that highlight the limits of their knowledge. In other words, one group is likely to compare themselves to an unskilled cohort and the other to a highly skilled one.

Excluding people who rate their ability at zero and who perform consistent with that ability skews the graph.
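A related point about how these plots arise: the classic quartile chart can emerge partly from regression to the mean. Here's a toy simulation (my own sketch, not from the article or the comment above, with made-up distributions): each person has a true skill, and both their test score and their self-assessment are true skill plus independent noise. Binning by test score alone then makes the bottom quartile look overconfident and the top quartile look underconfident.

```python
import random
import statistics

random.seed(0)

# Each person: true skill ~ N(50, 10); test score and self-assessment are
# both the true skill plus independent measurement noise.
people = []
for _ in range(10_000):
    skill = random.gauss(50, 10)
    test_score = skill + random.gauss(0, 10)       # measured performance
    self_assessment = skill + random.gauss(0, 10)  # noisy self-rating
    people.append((test_score, self_assessment))

# Sort by measured performance and compare quartile means.
people.sort()
q = len(people) // 4
for i in range(4):
    chunk = people[i * q:(i + 1) * q]
    score = statistics.mean(t for t, _ in chunk)
    rating = statistics.mean(r for _, r in chunk)
    print(f"quartile {i + 1}: test {score:5.1f}, self-assessed {rating:5.1f}")
```

Running this, the bottom test-score quartile rates itself well above its measured score and the top quartile below, even though nobody in the model systematically misjudges their own skill; the bottom quartile had bad luck on the test as well as low skill, and the self-rating doesn't share that luck.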


tl;dr

"So the bias is definitively not that incompetent people think they’re better than competent people. Rather, it’s that incompetent people think they’re much better than they actually are. But they typically still don’t think they’re quite as good as people who, you know, actually are good."

and graphic:

http://www.talyarkoni.org/blog/wp-content/uploads/2010/07/du...


I remember reading an interview with either Dunning or Kruger at one point. They were really disappointed that everyone had used their research to point fingers at others.

In actual fact, the authors had hoped the real value of their research would be in getting people to question themselves.

I.e. Instead of saying: "look at that dumb person, he doesn't know he's dumb."

The authors hoped people would say: "what if I'm not as smart as I think I am?"

I guess the "other people are dumb" meme is so much more comforting, though.


It's like the old trope, where someone confides in a friend that they are worried that they are going crazy, and the friend tells them not to worry. People who are crazy don't bother to ask if they're crazy.

Self-inquiry is, IMO, a cornerstone of self-regulation. If you're asking questions, you're already well on your way. If you're not, you won't notice that self-help articles even apply to you. (In fact, I think most self-help advice is, at best, a set of milestones that you can't recognize until you've gone past them. It can tell you how far you've gone and when you've made a wrong turn, but it doesn't actively help you get where you're going. It's all up to you.)


> People who are crazy don't bother to ask if they're crazy.

Is that commonly true or ~always true?


But self-doubt / imposter syndrome is a thing.

I find myself on both sides, though more often catch myself in self-doubt than overconfidence.



