Cognitive Biases and the Human Brain (theatlantic.com)
103 points by jonbaer on Sept 1, 2018 | 20 comments



When I teach problem solving in small teams, I focus on what I believe to be the “Big 4 Biases”:

Confirmation Bias

Conformity Bias

Sunk Cost Fallacy

Halo Effect

Obviously there are many others, but these are the 4 biases I would most want to inoculate small teams against.


As an honest question, how has that worked out for you?

I take a different approach to this idea. In the past I’ve emphasized the following four pillars:

1. Skepticism. Doubt everything. Doubt aggressively and often. Every unsubstantiated claim is suspect and unreliable. Assume your own knowledge and perceptions are unreliable at best and grossly deficient at worst. If someone mentions they read a study recently, doubt the study exists and doubt it finds what they believe it does if it happens to exist. Don’t attach value judgements or pettiness to the skepticism, but just insistently doubt everything.

2. Empiricism. Do not hold opinions about things you haven’t researched deeply. Apply Newton’s flaming laser sword to everything you can. If you believe something to be true because you read about it in a study, read the entire study, read related studies, and be capable of speaking about their contents at length. If you believe something to be true because you’ve induced it through reasoning, be able to demonstrate a clear chain from some axioms (even if they’re just personal moral imperatives).

3. Charitability. Assume everyone you speak with is at least as intelligent and capable as you are. Assume they’re putting forward the strongest possible version of their argument, and assume their behavior is rational and has a logical sequence of reasons behind it. Regard anything that would make you think uncharitably of them as an unknown unknown on your part until proven otherwise. Humility goes hand in hand with this.

4. Dispassion. Maintain a very small identity. Do not affiliate your identity with anything extrinsic - such as a religion, political party, country, company, profession, programming language, social class, etc. Distill the topics you can become irrational about into their smallest possible core and try not to engage in debate or comparative analysis about them.

The idea is that these four rules should make someone 1) consume vastly more information than they produce, 2) be generally distrustful of the signal to noise ratio of all information, and 3) produce much more signal than noise when they do produce. I don’t know how well they work in practice because it’s hard to act this way and hard to convince others to do the same.


Both of these approaches are great.

In my experience, #4 rears its head over and over again. People tend to tie their identity and ego to their ideas and work products, and that can be very blinding and disabling when figuring out a path forward that might exclude something they perceive as "theirs".

"It's my code", or "That was my idea" becomes very problematic when its explicit. Doubly so when the person is unable to acknowledge it and it becomes and implicit, but unknown decision driver.

I've worked at more than one company where a senior manager tied us to rapidly aging tools and software simply because they were what that manager worked on as a junior -- even when there were demonstrably better tools, approaches, or software out there -- but this fact never came up in the stated reasoning for why we stuck with something (even though it was very obvious).


Agreed.

Natural human emotions/behaviour/selfishness seem to get in the way of everything.

Libertarianism/Communism would both be equally awesome if it wasn’t for all of us flawed humans.


Nice list. Question:

> 3. Charitability. [...] Assume they’re putting forward the strongest possible version of their argument,

Wouldn't it be charitable to assume that they're putting forward a sub-optimal version of their argument, ie not the strongest version? (Thus, their argument could be made stronger.)


Yeah, this struck me as poorly worded. But to follow the Charitability rule, it can be better interpreted as follows:

Often, when someone puts forward an argument, there are multiple ways to interpret it. Some of these versions of their argument are stronger than others. Assume that they were trying to say the strongest version.

See also steelmanning:

https://en.wikipedia.org/wiki/Straw_man#Steelmanning

If you want to see examples of steelmanning, read

http://slatestarcodex.com/


>Wouldn't it be charitable to assume that they're putting forward a sub-optimal version of their argument, ie not the strongest version? (Thus, their argument could be made stronger.)

What the parent means is that we should treat what they say as if they meant the "strongest possible version".

So, parent doesn't mean "assume what they said was the strongest possible version" but "assume the strongest possible version of what they said".


Thanks both of you ^ for clearing this up. Makes sense now.


So, incidentally, this way of putting the rule provided a nice example of itself.


In small team environments it has worked quite well (based on both anecdotal operational performance as well as perceived value in student validations).

I know it is imperfect, but given the time and space we have to train, we have gone with what we see as the highest retained bang for the buck.

I do really like Skepticism/Empiricism/Charitability/Dispassion.

I’ll look at incorporating it into discussion.

To be honest, one of the hardest nuts to crack is how to incorporate mitigation of human organisational behaviour.

Specifically, how to mitigate against excessive risk aversion due to organisational culture.


Aren't #1 and #3 at odds with each other?


If applied wrong, they can be. But the combination can be helpful.

Charitable skepticism leads you to ask questions about the content. If I say that coffee drinkers live 5 years longer on average, you can ask what in the coffee makes that happen, who did the study, how the study was conducted, etc.

That's better than just saying the whole study is bullshit and dismissing it out of hand, which is more of a contrarian form of skepticism, and less useful for learning something.

For instance, if you ask those questions, some of the answers might lead you to something that is useful or interesting. Maybe something in the coffee binds to some receptor that enables blah blah blah (I'm making all of this up for the sake of example, mostly because I'm drinking coffee, so don't actually look for any of this information. It doesn't exist). Or maybe it is all hokum, which is good, because you don't like coffee anyway.


Excellent breakdown. I especially liked #4.


This article is older now but is still a great read. It has a different list which I liked a lot (the below is directly from the article):

Self-Serving Bias – basically: if something is good, it’s probably because of something I did or thought of. If it’s bad, it’s probably the doing of someone else.

Fundamental Attribution Error – basically: the bad results that someone else got from his work must have something to do with how he is, personally (stupid, clumsy, sloppy, etc.) whereas if I get bad results, it’s because of the context that I was in, the pressure I was under, the situation I was in, etc.

Hindsight Bias – (it is said that this is the most-studied phenomenon in the history of modern psychology) basically: after an untoward or negative event (a severe bug, an outage, etc.) “I knew it all along!”. It is the very strong tendency to view the past more simply than it was in reality. You can tell there is Hindsight Bias going on when descriptions involve counterfactuals, or “…they should have…”, or “…how did they not see that, it’s so obvious!”.

Outcome Bias – like above, this comes up after a surprising or negative event. If the event was very damaging, expensive to clean up, or severe, then the decisions or actions that contributed to that event are judged to be very stupid, reckless, or negligent. The judgement is proportional to how severe the event was.

Planning Fallacy – (related to the point about making estimates under uncertainty, above) basically: being more optimistic about forecasting the time a particular project will take.

From: https://www.kitchensoap.com/2012/10/25/on-being-a-senior-eng...


I'd include Hindsight Bias or perhaps Survivor Bias as well. But excellent list!


I recently saw a video about Scrum where they explained how cognitive biases have been an impediment in the SDLC. They explained the Status Quo Bias and the Planning Fallacy; although the whole video was about Scrum, it also got me intrigued by cognitive biases. Really happy to have found a relevant link about it.

Nisbett writes in his 2015 book, Mindware: Tools for Smart Thinking, “I know from my own research on teaching people how to reason statistically that just a few examples in two or three domains are sufficient to improve people’s reasoning for an indefinitely large number of events.”


And I know from that quote, that he is taking great liberty with statistical power...

Seriously, that quote could easily have included a single number to build confidence. :(


That quote was part of a book, presumably containing many numbers. This criticism is ridiculous.


Fair criticism of my post. I will try to pick this up and take a deeper look. That quote just raised my hackles. Sounded too much like someone selling me something.


Can you please link the video? It sounds interesting. Thanks.



