
Hedgehogs (experts) know one big thing, while foxes (generalists) know many little things. However, foxes seem to be better at predicting the future than hedgehogs, because they constantly challenge their worldview and see their own ideas as hypotheses in need of testing. So maybe the reason foxes are more successful forecasters is that learning itself is their "one big thing."



> However, [generalists] seem to be better at predicting the future than [experts], because they constantly challenge their worldview and see their own ideas as hypotheses in need of testing.

What? No. You're implying that experts do not challenge their own beliefs and ideas about the world.


That is literally what the article is arguing, and it's something I've seen often enough to be convinced about.


I've seen this fairly often. I believe it's akin to the Dunning-Kruger effect: the more entrenched experts become in their expert identity, the more they over-assess the effectiveness of the theories most central to that identity.

It's usually easy to spot the experts less susceptible to this problem (and more open to change), since they'll almost never consider themselves "experts" and will avoid the label. They know how susceptible their theories are to change, and how easily they could be misguided or wrong. Their deep understanding shows them just how little they truly know.

Furthermore, in our society you must project confidence to be successful. Many experts are forced to appear confident when they're really not (which does not make them any less of an expert). If you're an expert and don't come across as certain, non-experts (and even some experts) may falsely assume you're not one, because they expect you to answer whatever question they pose with certainty.

I believe this structure is partly to blame for the problem. Some experts play this confidence charade and manage to separate the act from the reality: they appear certain while still understanding, and privately holding onto, the uncertainties they don't share.

The others get caught in a feedback loop where appearing confident and being right translates emotionally into being more confident. Each time their confidence is rewarded, they grow more convinced that the perspectives behind it are sound. When a case comes along that breaks that flow, they're still confident, but they now have positive emotional ties to those beliefs and become victims of their own success: something else must be wrong, because I've been rewarded so much for being right about this.

I believe those doing translational/applied science and engineering are more susceptible to this problem than those on the theoretical side, because of the way society incentivizes each. On the theoretical side there's a greater acceptance of being wrong (though that's becoming less true as research is increasingly privatized). On the applied side, people expect a high degree of confidence and certainty because that's what they pay for, and there's little to no tolerance for error, even though we're all human and make errors daily.

So I think the problem really stems from how society views experts and the expectations it places on them.


That's exactly what his study found.




