Ah, now this is gratifying to see. I'm not a lone crazy person. Because I read that book, with high expectations, a couple years ago and came away profoundly disappointed for many of the same reasons.
I did find him to be a hell of an interesting thinker with very interesting ideas, but to me, the whole book had a smell of overconfidence and incredibly weak standards of evidence. Very much in the "well, I observed this phenomenon, went for a walk and thought deeply, and clearly the reason is X" vein, followed by a tower of (interesting) conjecture built on top of X.
I put it down to coming from the economics culture rather than a more rigorous scientific culture.
I'm not sure we read the same book; I don't remember any occasion like that. Every such conjecture was followed by a bunch of experimental evidence, and there are a ton of references associated with the points.
That's no guarantee of rigor, of course, but I don't think yours is a fair characterization of the book. Are you sure you're not confusing it with Gladwell? ;-)
It's been a while since I read it, but his experimental evidence consisted largely of tiny studies with no replication. I tend not to view a single study as strong evidence at all, especially in the social sciences. And history hasn't been particularly kind to many of those findings.
I went in really wanting to love the book, it was universally praised and I was pretty high on Ariely at the time, but even with that bias I still found large chunks quite unconvincing.
Perhaps it was just the writing style or something, or the fact that the beginning of the book focused a lot on those priming studies at a time when that entire area of research was profoundly embarrassing the social sciences.
(re: Gladwell, that's a good way of putting it, it was uncomfortably Gladwellian! Not that level obviously, but not far enough away!)
Reading the book, I had to stop many times to reflect on how poorly constructed the conclusions were. Many of his conclusions suffered from the same biases that he had "discovered", and yet he appeared to be blithely unaware of his own biases.
My favorite part was his discussion of the Linda problem, where he would get different results depending on how the question was phrased. In the book, he never once considered that the problem may be that his formulations of the problem were not well communicated to the study participants.
In the end, I decided that his research into cognitive bias was more a journey of self-exploration, but somehow he discovered these biases in everyone except himself.
> In the end, I decided that his research into cognitive bias was more a journey of self-exploration, but somehow he discovered these biases in everyone except himself.
Maybe fixing humans is impossible and it would be better for meta-meta-cognition to be accomplished by other people. Clearly they are quite good at it already!
Suppose you had a complete record of a person's speech and writing i.e. some of their outputs.
Could software be devised which gave the equivalent of a third-party observation of your choices? Could you even identify choices from the data?
That's just another example of real research with nuanced conclusions being transformed into pseudo-science for pop-psychology and business management books. For almost every book I read in this area, I'm at least familiar with some of the research, or end up reading some of it as I go along. It seems like all books in this class of non-fiction best sellers overstate their conclusions in almost exactly the same way.
Same book, but obviously we have different "triggers". Like 'freshhawk', I found Kahneman's book to be astonishingly full of faulty logic and difficult to finish, but I have no problem reading Gladwell. Here's an HN thread from a few years ago with my summary of Thinking, Fast and Slow: https://news.ycombinator.com/item?id=3431815
Perhaps it's that Kahneman is in the "uncanny valley" between science and popular writing. Since Gladwell is (for me) safely on the popular side, I find him engaging rather than misleading. But I'm pushed away by Kahneman's apparent overconfidence in places that I'm certain he's wrong, including such examples as Jason gives.
Ah, I found a PDF of the book and did some searches to jog my memory. It was probably the section where he describes going on walks with one of his associates to discuss ideas about decision making, bias, and rationality, and "examining our own intuitions" to gauge how closely they fit rational-actor economic models or their proposed alternative models.
I find this whole "well, I examined my intuitions about X, so that is some evidence for this model of human decision making about X" move, which is immensely popular in philosophy (especially meta-ethics) and economics, to be embarrassing garbage. So that definitely hit a very strong bias I have. And it seemed completely out of place in a book concerned with human irrationality.
I guess we are victims of the halo effect he describes in the book! He won the Nobel Prize, so we think whatever he writes must be true.
The weak standards of evidence also made me a bit uneasy throughout. But my System 1 just trusted the whole book because everyone seems to like it. A case of social proof, I guess.
How is it possible that it took until 2015 to discover the mistake in the original 1985 research (Gilovich et al.) "proving" that hot hands in basketball don't exist?
It became mainstream news, discussed in places like ESPN and has been repeated countless times in popular science books. Furthermore, the debunking in the Miller and Sanjurjo article referenced in the parent's blog post is not the result of arcane mathematical analysis, but uses extremely basic math that any undergraduate could understand.
It's almost shocking that no one noticed the problem before, when the result was the subject of such passionate, widespread controversy and so many people had the tools to see the flaw...
And not only that: the edge case makes the flaw painfully obvious. From the Miller and Sanjurjo article's description of the original, flawed research: "...a streak is typically defined as a shot taken after 3, 4, 5, . . . hits". Looking only at shots taken after streaks of exactly length 3, the miss rate is 100% (because a hit would imply the streak actually had length 4).
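The bias is also easy to reproduce numerically. Here's a minimal Python sketch of my own (not from the paper), following the same finite-sequence setup: generate many short sequences of fair coin flips and average the per-sequence proportion of heads immediately following three heads in a row.

    import random

    def prop_after_streak(seq, k=3):
        # Proportion of flips that are heads, among flips preceded by k heads
        follows = [seq[i] for i in range(k, len(seq))
                   if all(seq[i - j] for j in range(1, k + 1))]
        return sum(follows) / len(follows) if follows else None

    random.seed(0)
    props = []
    for _ in range(10_000):
        flips = [random.randint(0, 1) for _ in range(100)]  # 100 "shots", p = 0.5
        p = prop_after_streak(flips)
        if p is not None:
            props.append(p)

    print(sum(props) / len(props))  # noticeably below 0.5 (roughly 0.46)

Even with a memoryless coin, conditioning on streaks within finite sequences and then averaging across sequences drags the estimate below 50%, which is the baseline the original researchers should have compared against.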
Usually, when the popular press talks about a study, it doesn't link to the original. So the number of people who talk about a study is much higher than the number who might actually go and re-examine it.
>> You have no choice but to accept that the major conclusions of these studies are true.
> I am surprised at the blind spot I had when first reading it – Kahneman’s overconfidence didn’t register with me.
Wow, this hits me pretty hard too. My impression of the book was of Kahneman being a subtle and humble thinker, in stark contrast to the bombast of Nassim Taleb of "The Black Swan".
A great post, if only for this and drawing my attention to the fact that the "hot hand" effect may actually exist... I've definitely used it as an example of when people are liable to see patterns in the noise :-(
The arrogance runs deep: the book repeats "this is true... you have no choice... this is you" and such, but that isn't true at all! The studies, assuming they're trustworthy, prove things about people in general, and the conclusions are useful if you're a public speaker or work in marketing. What's dangerous is the suggestion that these things are true in the small, true for the individual. That's why the conclusions are less interesting and less useful than the author suggests. But it's written like typical popular science, with each chapter ending with a pithy application of the knowledge, usually presented in the small (usage examples such as "I did X because of the availability heuristic").
I checked out a copy based on Alan Kay's recommendation and returned it after getting about half-way through, realizing it was typical pop-science sophistry.
I'd heard and read about the "hot hand" effect being a fallacy in a lot of places. It never made sense to me. I'm glad it is finally getting debunked.
Anyone who has played sports for a length of time would likely say that they feel more in control of their body at certain times. It could be that brain chemistry becomes temporarily balanced, reducing anxiety, so the brain is less of a distraction. It could be a lingering pain or headache suddenly disappearing, so it no longer poses a physical distraction. It could be something directly related to muscle control, like minor tremors going away, giving you more direct control over your body. And in a sport with "fine tuning" like basketball, if you feel you are shooting short, you put more strength into the shot to hone in on the basket and start hitting. So there are definitely times when you not only feel more capable over a short period; due to current circumstances, you actually are more capable.
Yes, whatever the reason, we all have good days and bad days.
What you're talking about is what the statisticians call nonstationarity: on one day you make 30% of your shots, while on another day you make 40%, or whatever. Then your teammates, estimating your probability of making the next shot based on how they've observed you playing that day, decide whether to give you the ball more or less often. That would be one explanation -- and to my mind, a perfectly reasonable one -- of the "hot hand" theory.
But that's not what the authors of the original paper measured; and interestingly enough, it's also not the theory that Collins has now un-debunked. That involves a different measure, called autocorrelation, which measures "streakiness": how the odds of your making the next shot change based on whether you made the previous one. Autocorrelation and nonstationarity are orthogonal -- you can have either one without the other.
If you create a shot-vector per day, then any vector will lack autocorrelation.
If you create a shot-vector for the whole season, then there will be autocorrelation.
Why?
P(shot n good | shot n-1 good)
  = P(shot n good | good day) * P(good day | shot n-1 good)
    + P(shot n good | bad day) * P(bad day | shot n-1 good)
  > P(shot n good)
The inequality holds because making shot n-1 is evidence that you're having a good day, which shifts weight onto the higher conditional hit rate.
Also, regarding verhaust's example: the process he describes clearly has autocorrelation.
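Here's a minimal Python sketch of that argument (the hit rates are made-up numbers): each day has its own hit rate and shots within a day are independent, yet the pooled season shows autocorrelation.

    import random

    random.seed(1)
    season = []
    for day in range(200):
        p = random.choice([0.3, 0.5])  # bad day or good day (nonstationarity)
        season.extend(1 if random.random() < p else 0 for _ in range(20))

    # Autocorrelation check on the season-long vector:
    after_hit = [b for a, b in zip(season, season[1:]) if a == 1]
    print(sum(after_hit) / len(after_hit))  # P(hit | previous hit): noticeably higher than...
    print(sum(season) / len(season))        # ...the unconditional hit rate

Within any single day the hit probability is constant, so a per-day vector has no autocorrelation; pooled across days, a made shot is evidence of a good day, and the correlation appears.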
>Anyone who has played sports for a length of time would likely say
I played sports, one in particular at a highly competitive level, and I'm not sure I agree with feeling a conscious sense of more control over my body. Sure, some days feel better than others. But there are just far too many variables, so you can always point to something as the "cause" of your sudden string of what I'd consider good luck, but which might also look like ability.
In fact, I'd say this is the major fallacy of sports analysis: post-hoc reasoning of results. Caveat: I haven't read the "hot hand" research.
Kahneman is half-right, though. You have to accept the conclusion of a study as true. You can't accept the study as true and then keep believing in the thing the study just disproved.
What you can do, though, is question whether the study itself is true. Sometimes, it's not.
One can consistently accept a study as well-implemented/operationalized while acknowledging a reasonable chance that the results are garbage, and that repeating the study might not yield significant results. The proper conclusion of a statistically oriented study is a probability distribution across several potentially contradictory propositions. One is never forced by a study to "exclude the middle" and conclude either A or not-A.
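To make that concrete, here's a toy Bayes calculation (all the numbers are illustrative assumptions, not from any real study):

    # A positive result shifts the probability distribution;
    # it does not collapse it to certainty.
    prior_real = 0.10  # assumed prior probability the effect exists
    power      = 0.80  # assumed P(positive result | effect is real)
    alpha      = 0.05  # assumed P(positive result | no effect)

    p_positive = prior_real * power + (1 - prior_real) * alpha
    posterior  = prior_real * power / p_positive
    print(posterior)  # ~0.64: much more plausible than before, far from certain

Under these (made-up) numbers, a single significant result moves the effect from unlikely to more-likely-than-not, and no further.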
Re-reading what you said I think we're in agreement already. I was nit-picking at the idea that a study has a single conclusion which is either true or not true. The conclusion of most studies is properly a vector and a hefty pinch of salt imo.
Actually, I would say that you are more likely to be correct if you wait until there are multiple replicated results before you "have to accept" something as true. One study is interesting evidence, depending on the protocol, but publication bias (the file-drawer effect) is a real thing, and published studies are massively biased towards positive results.
>You have to accept the conclusion of a study as true. You can't accept the study as true and then keep believing in the thing the study just disproved.
People doing that weren't who Kahneman addressed there though. It was people thinking the study was BS/fishy/wrong.
I was quite disappointed in this book -- having followed Kahneman since his early research with pupillometry as an indication of interest. I agree with many of the comments in this thread. Given the author -- and the importance of the subject -- it just should have been done better.
But I still recommend that it be looked at for a variety of reasons, including what Kahneman calls the "expository fictions" of "System 1" and "System 2". (In talks I usually pair this up with a slide of "Maps of the Mind" by Turner to make the point that there are many characterizations of mental architectures, some at odds and some harmonious -- i.e. be careful when trying to reason with such suppositions.)
Still, the "System 1" and "System 2" simplifications are very useful as aids to thinking about many important areas, including learning, user interface design, etc.
I wish the author had provided some references, because his claims are conflicting with what I have read from many other sources.
I am mainly thinking of a) many forms of priming being firmly established at this point, and b) Kahneman specifically calling on researchers to more thoroughly vet their findings.
This conflicts with this article's claims that priming is not established, and that Kahneman has doubled-down on it.
I suspect the confusion comes from the many forms of priming, some of which are established, others of which are suspect.
Not an expert, but I thought it was by now common knowledge that social priming is a field in crisis, where failure to replicate is the norm, not the exception[1,2].
I think there is a subtle distinction to be made between two ways evidence can be interpreted and responded to.
A) Given XYZ evidence, I am 100% confident that premise ABC is true.
B) Given XYZ evidence, I am 100% confident that accepting premise ABC as true, is the right thing to do.
Sentence A is almost always false. No matter how much supporting evidence you have, the odds of the underlying premise being true are never 100%. There is always some likelihood of all the evidence being flawed/biased/compromised in some way. Because of this, the likelihood of the underlying premise being true is always lower than 100%.
That said, even if the odds of the premise being true are only 90%, sentence B could still be true. That sentence doesn't state that the premise is guaranteed to be true, only that, given what we know, accepting the premise is the most rational thing to do. Just like deciding not to buy a lottery ticket, or doubling down on your bet when you're holding a straight flush: you might still turn out to be wrong, but that doesn't negate the fact that your earlier actions were 100% the right thing to do.
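A toy expected-value check of that point, with made-up payoffs:

    # Acting on a 90%-likely premise can be clearly rational
    # even though the premise itself is not certain.
    p_true = 0.90
    payoff_if_true, payoff_if_false = 10.0, -5.0
    expected_value = p_true * payoff_if_true + (1 - p_true) * payoff_if_false
    print(expected_value)  # 8.5 > 0, so accepting the premise is the right bet

The bet is correct ex ante even in the 10% of worlds where it loses.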
Unfortunately, despite the significant differences between A and B, a casual reader may still mistake B for A. A non-perfectionist writer/editor may also mistakenly write A, when really, he means B. This might be the case with Kahneman's book. Given the evidence available at the time of his writing, the B interpretation of his assertions would still hold up well, and we shouldn't be denying that on the basis of 20/20 hindsight.
I like your distinction, and think there is an important difference to be made between A and B. But I don't think you've framed B quite correctly. For example, what would be the difference between "I am confident" and "I am 100% confident"? What would be the case for B in which having exactly 65% confidence would be correct?
I think both A and B have an unspoken assumption of "and assuming there is no error in our measurement of XYZ, and assuming that XYZ means what we think it means, and assuming there is no other evidence we are failing to consider". Unless one's confidence in these assumptions (and probably some others) is 100%, one shouldn't have complete confidence in any particular course of action.
Kahneman's book may have a lot of dubious science behind specific points. However, you need to remember that he was attacking the Economic Man model that had been standard in economics for over a century. This model holds that our economic decisions are all rational, or at least that, if biased, the biases are random across different people and so cancel out. That would mean we should always trust the unregulated market to produce ideal results, and so the government should always stay out of it, as it could only make things worse.
Various economists, such as Robert Shiller with his writings on "animal spirits", have argued that this is not at all the case. Kahneman and the other behavioral economists have added a lot of empirical support, and, with some luck, economics will shift to better fit reality.
100% agreed. For all the hype around the book, and so many people recommending it as the best book they've read in $duration, I could never quite stomach so many of Kahneman's "Study X leads to Conclusion Y" leaps in the book.
I still liked the distinction between the "Fast" and "Slow" brains (the two systems) that Kahneman uses (which I think he borrowed from another researcher, whose name escapes me). It's a great way to think about how the brain works, and you can neatly categorise some things as part of System 1 and some as System 2. But the behavioural science studies leave much to be desired.
Then again, perhaps that is the uncertain nature of behavioural science?
My most useful takeaway from the book is how cognitive biases negatively influence one's investment returns. I've been applying this to my own investment strategy, trying to avoid those biases.
I found this a very frustrating read as well. The tone of the book was over-confident, and to my mind the author makes little attempt to explain his reasoning.
I only made it as far as the probability section. I gave up after failing to understand his logic. Something about students enrolled in computer science, if Google is any indication.
Interesting, that's the chapter that caused me to finally put the book down. The gist of it was: people not thoroughly trained in statistics aren't good at it.
This is hardly news. In fact, even highly intelligent people struggle with the Monty Hall problem, for instance.
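For what it's worth, the Monty Hall problem is a case where brute-force simulation beats intuition. A minimal Python sketch:

    import random

    def play(switch, trials=100_000):
        wins = 0
        for _ in range(trials):
            car  = random.randrange(3)   # door hiding the car
            pick = random.randrange(3)   # contestant's initial pick
            # Host opens a door that is neither the pick nor the car
            opened = next(d for d in range(3) if d != pick and d != car)
            if switch:
                pick = next(d for d in range(3) if d != pick and d != opened)
            wins += (pick == car)
        return wins / trials

    print(play(switch=False))  # ~1/3
    print(play(switch=True))   # ~2/3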
"later research has questioned whether the belief is indeed a fallacy.[1][2] More recent studies using modern statistical analysis have shown that there is evidence for the "hot hand" and that in fact it may not be a fallacy.[2]"
It is obvious; he has studied intuitive biases throughout his career.
However, he himself said that experts make little difference compared to the market when picking stocks. By analogy: he is an expert, but just like any other human, he is fallible.
Our bias toward the prize-winning author, the halo effect, pushed us into buying and reading the book. And after reading and analyzing the content, everything has now passed into hindsight.
That is the purpose: concrete analysis of concrete conditions, based on rationality.
I see the author's point and I feel it too (I'm only 2/3 through the book), but unless he has evidence to show that Kahneman relied on flawed research, he is just replacing baseless optimism with baseless pessimism. The book definitely gives a lot of overly confident and pat answers to complex questions, but it is aimed at a mass market who are trying to grasp the basics of decision theory and are not going to read the sources.
I'm in the last section, and honestly, looking back at it, I would've been satisfied with just reading the first 2 sections, maybe even the 3rd one. It gets denser and narrower as you go through it, I think.
One thing driving me crazy is that about half the time Kahneman presents a situation and says "you would've chosen this", except that I didn't choose that. Guess I'm a weird outlier or something :P
> One thing driving me crazy is that about half the time Kahneman presents a situation and says "you would've chosen this", except that I didn't choose that. Guess I'm a weird outlier or something :P
Oh wow, I had that experience! Thanks for putting my feeling into words.
A few years ago, I was looking for a semi-popular book I could recommend to people on dual systems theory. I picked up Kahneman's TFS expecting it to be a good choice, but was very disappointed. His logic was sloppy and bombastic, but what I found most irritating was that he would constantly, like maybe every other page, slightly misrepresent other people's work.
--------------------------
Dual systems theories hold that reasoning, categorization, and decision-making are the result of two interacting systems, one rule-based/symbolic and the other statistical/associative. The deliberative (Type 2) process is open to introspection, but the intuitive (Type 1) process is not.
Which system dominates varies by situation. Intuitive reasoning dominates when there is low time or interest. It can also dominate deliberative reasoning when the cost of violating intuition appears large.
The general model of interaction between the systems is that intuition and affect operate through positive feedback and the deliberative system attempts to inhibit the intuitive with varying success depending on relative coherence.
While a weak form of this theory is still largely accepted as useful, there is much criticism based on the difficulty of cleanly isolating the processes. The theory seems to be fading in popularity in favor of multimodularity. The main counter-theory seems to be that deliberative reasoning is mainly cycles of intuitive process.
You just caused me to discover something interesting-- the title is only uppercase due to a CSS transform rule. When I copy the text in Firefox, it copies case-correctly. Chrome copies it as all uppercase.
The big idea is to realize, and internalize beyond any doubt, that the brain is an evolved set of highly specialized structures, some very ancient, some more recent, which communicate with each other.
The emotional and instinctive parts are more fundamental and more important than the language-related parts, and what we call (or experience as) thinking is mostly feeling and sensing, to which verbal processing is merely a, well... augmentation.
The message is that the cortex is by no means everything, or even the most essential part. The ancient, animal, non-verbal parts of the brain still do most of the job. As a consequence, the popular myth of human beings' pure rationality should be discarded. We are still animals. The crucial difference is obviously language, and hence abstract thinking (in that order -- we think in language labels we attach to everything, including feelings and moods). But this is not the whole picture.
So-called Fast Thinking is the working of these ancient, animal parts of the brain.
Yeah, I started reading this and got a very strong "Steven Pinker" vibe from it. Except I pushed back a lot more against Pinker's assertions than Kahneman's.
I've never read a critique of Pinker, and found his article on academic prose to be quite well-written. Would you expand upon your "Steven Pinker vibe"?
There are multiple critiques, none definitive (the cool thing about writing a giant tome is that somebody has to write a giant tome back). I've just picked up on them as I discuss his work online in various venues. I feel his writing panders very obviously to well-off westerners seeking reassurance.
There's another that dismantles his use of an ancient human corpse found carrying "weapons" to suggest that there had been a murder.
Yet another discusses sloppy research, where he mis-portrays a minor joke journal as representative of an academic field and makes broad generalizations based on it.
One day I'll collect my own compendium. In the meantime, if you're curious, just scour reddit or something.