> Two days later, a team member with experience in finance saw that the hryvnia was strengthening amid events he’d thought would surely weaken it. He informed his teammates that this was exactly the opposite of what he’d expected, and that they should take it as a sign of something wrong in his understanding.
This is an experience every young trader has had many, many times. Myself included. You see an economic figure, you think "hey, this should do x", and the market doesn't do what you think. The senior trader on the desk will remind you that your wrongness is worthy of further thought.
Keep in mind that prices are themselves predictions. You are predicting what people will predict.
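That's the logic of Keynes's beauty contest. A toy illustration is the "guess 2/3 of the average" game, where each player's best move is to predict what the others will predict; a minimal sketch (all parameters made up):

```python
# Toy "guess 2/3 of the average" game: each player wants to guess
# 2/3 of the average of everyone's guesses, so they must predict
# what the others will predict. Iterating that reasoning one level
# at a time drives the consensus toward zero.
guesses = [50.0] * 100              # naive starting guesses
for round_no in range(10):
    avg = sum(guesses) / len(guesses)
    target = 2 / 3 * avg            # best guess if nobody else adapts
    guesses = [target] * 100        # everyone reasons one level deeper
    print(f"round {round_no + 1}: consensus guess = {target:.2f}")
```

Real markets never iterate all the way to the fixed point, of course; the point is just that your target moves every time everyone else updates theirs.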
If you're wondering why the talking heads get things wrong, it's mainly because they are talking heads. They're paid to say something interesting, not to make money betting on it. That includes the banks' own analysts! They don't have money on the line; they are salespeople at heart. I've personally had conversations where the same guy would tell me to buy AND sell the same thing (USDJPY in that case), as long as I did the trade through him.
But mainly the guys making pronouncements on TV are there to get themselves another slot on TV.
The talking-heads point is deeper than that. Not only are they paid to be interesting.
The very skill of being good at talking might be correlated with other underlying attributes.
For instance, being good at talking might be correlated with tunnel vision, lacking a broader perspective, a tendency to connect things which shouldn't be connected, naivety, empathy, and so on.
So if your process of conferring the title of expertise involves going through the university system, or performing well in debates or on a podium, you will get experts who are all wrong in exactly the same way and there is no one to offer them a different perspective.
> For instance, being good at talking might be correlated with tunnel vision, lacking a broader perspective, a tendency to connect things which shouldn't be connected, naivety, empathy, and so on.
This is a good point. When you really understand something, you know what the limits are. Your mind is full of ifs and buts.
People who are good at talking, especially persuasive talking, tend to be so because they can articulate a clean vision with few wrinkles. Think of any political discussion. In reality there's a tradeoff between higher taxes and higher social services. You end up getting people who essentially argue that it's a no-brainer, one way or the other depending on their flag.
The candidates do write books. One politician here is known as one of the "thinkers". In his books, a problem is often shown from many angles, and his position ends up at some balance point. For example, he might consider taxing wealth more and (salary) income less.
> The senior trader on the desk will remind you that your wrongness is worthy of further thought.
George Soros has written a bit about this effect (about back when he was a trader).
From recollection -
Firstly, he enjoyed discovering that an investment hypothesis of his had turned out wrong, and he eagerly looked for such errors. He stated that some of his biggest wins came almost directly from a previously faulty idea.
Secondly, he hated giving opinions on any market to anyone, since he felt that the social pressure to be correct hindered his ability to recognise mistakes and change his ideas.
> This is an experience every young trader has had many, many times. Myself included. You see an economic figure, you think "hey, this should do x", and the market doesn't do what you think. The senior trader on the desk will remind you that your wrongness is worthy of further thought.
I feel like the author intended for me to leave the article less likely to dig my heels in when acting as an expert, and more likely to maintain a flexible worldview when making predictions in scenarios where there are a variety of well-developed and contradictory predictive theories. And that's probably a good sentiment.
But I think it's important to recognize that foxes outperform hedgehogs only in scenarios where hedgehogs are available. That is, to weigh contradictory theories requires that those theories exist in the first place: and that's the work of experts.
So rather than ask: "should you be a fox or a hedgehog?"
I'm more interested in: How should foxes and hedgehogs be arranged so that the best predictions are made? How many contradictory predictive theories are necessary to let the foxes do their work and are there diminishing returns re: how deeply the hedgehogs have gone into their specialization?
Research in this direction could have implications re: how we fund other research.
Phil Tetlock, the expert on experts whose research is discussed in the article, does raise this point. He also discusses the difference between being a good predictor, and being good at raising interesting questions. There are some videos of him and a number of other smart folks here:
https://www.edge.org/event/edge-master-class-2015-philip-tet...
From the article I got the impression that the experts' job is not necessarily forecasting, so the question was "are experts good at forecasting?". That doesn't mean that experts are not needed, but that there's more to forecasting than specialist knowledge.
Khosla says he doesn't do much more than write checks now because he's been wrong almost every time about which start-ups he should help with. He thinks he's working on the next big thing, but it usually flops and a different company in his portfolio is the fund returner.
Further, "expert" hedge fund managers routinely get smoked by the index.
> Seides had already conceded the bet earlier in 2017, due to the huge disparity in performance between his basket of hedge funds and Buffett's S&P 500 index fund. After all, through the end of 2016, the S&P 500 index fund had gained more than 85%, while the average of Seides' five baskets of funds was just 22%. [0]
Hedgehogs (experts) know one big thing, while foxes (generalists) know many little things. However, foxes seem to be better at predicting the future than hedgehogs, because they constantly challenge their worldview and see their own ideas as hypotheses in need of testing. So maybe the reason foxes are more successful forecasters is that they focus on learning; learning is their "one big thing".
> However, [generalists] seem to be better at predicting the future than [experts], because they constantly challenge their worldview and see their own ideas as hypotheses in need of testing.
What? No. You're implying that experts do not challenge their own beliefs and ideas about the world.
I've seen this fairly often. I believe it's akin to the Dunning-Kruger effect: the more entrenched experts become in their expert identity, the more they over-assess the effectiveness of the theories they hold most central to that identity.
It's usually easy to spot experts less susceptible to this problem (and more open to change), since they'll almost never consider themselves "experts" and avoid the label. They know how susceptible their theories are to revision, and how easily they could be misguided or wrong. Their depth of understanding shows them just how little they truly know.
Furthermore, in our society, you must project confidence to be successful. Many experts are forced to appear confident when they're really not (which does not make them less of an expert). If you're an expert and don't come across as certain, non-experts (and even some experts) may falsely assume you're not an expert, because they expect you to answer any question they pose with certainty.
I believe this structure is partly to blame for the problem. Some play this confidence charade and manage to separate the act of appearing certain from their actual beliefs, still understanding the uncertainties they don't share and holding them close.
The others get caught in a feedback loop where appearing confident and being right translates emotionally into being more confident. Their confidence is rewarded, so the more that happens over time, the more they assume the perspectives they're confident in are sound. When a case comes along that breaks the flow, they're still confident, but they have positive emotional ties to those beliefs and become victims of their own success: something else must be wrong, because I've been rewarded so much for being right about this.
I believe those doing translational/applied sciences and engineering are more susceptible to this problem than those on the theoretical side, due to the way society incentivizes things. On the theoretical side there's a greater acceptance of being wrong (though that's becoming less true as research is privatized). On the applied side, people expect a high degree of confidence and certainty because that's what they pay for, and there's little to no tolerance for error, even though we're all human and make errors daily.
So I think the problem is really due to how society views experts and exerts its expectations upon them.
I like the article's conclusion that broad knowledge is needed to make good predictions.
But I don't see the point of going against experts. The lesson is probably that the more convinced and self-assured you are about a topic, the easier it is to get things wrong.
Take any non-expert who is sure and vocal about a future event, and they're probably wrong too.
I think there is a useful distinction to be made between fields like physics, where expertise in the field makes you more likely to make accurate predictions (e.g. about when that comet is going to appear in the sky), and fields like economics, where expertise in the field actually makes you less likely to make accurate predictions than a non-expert (e.g. whether we will have a recession soon).
Consider an expert in medicine, back when the "four humours" theory of medicine was what experts used. A non-expert would think, "bleeding is bad, I don't want to bleed, I need my blood". Experts believed in bleeding their patients/victims to restore a balance of the humours. George Washington was probably killed by his doctor in this way.
Being an expert in a field only helps when the basic theoretical models of that field have reached a certain minimum level of maturity.
Interestingly, the article gives a tested recipe for creating a forecasting team. I wonder if it has been used for more important matters since, or maybe an expert committee advised against it :)
Predicting anything is a gamble. One simply does not know what will happen but one makes assumptions.
Let's take weather forecasting as an example. It is difficult to get right 5 days out and often goes wrong at 10 days. But when you ask a group of scientists a simple question, like "Do you think that humans contribute to the rise of CO2?", and 97% say "yeah", then you can make a new assumption: CO2 will probably rise fast, we can make a new climate model that goes beyond 10-day forecasts, and we assume we can accurately forecast what happens in 80 years. Most, however, do not realise that they are assuming a lot.
You seem to be implying that climate forecasting can't possibly be accurate because weather forecasting is time-limited.
That's not correct. Climate and weather forecasting are different disciplines based on different models.
Of course you can't forecast whether it will be raining on a specific day twenty years from now. But you can certainly make a good estimate of how likely drought is.
Yes, gambling and predicting are interconnected; they share a common history.
> One simply does not know what will happen but one makes assumptions.
Tomorrow the sun will rise. I do not know this; it is just an assumption. But I am quite certain of it. :)
> we assume that we can accurately forecast what happens in 80 years.
Nobody assumes that. Nobody forecasts the weather in 80 years. What scientists forecast is climate change, not the weather.
In the casino I cannot predict which players will win and which lose (weather). And yet I can predict that they will lose on average and the casino win (climate).
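A minimal sketch of that intuition, with made-up numbers (a hypothetical 1% house edge, 1,000 bets per player):

```python
import random

HOUSE_EDGE = 0.01    # hypothetical 1% edge per bet, purely illustrative
N_PLAYERS = 10_000
N_BETS = 1_000

random.seed(42)

results = []
for _ in range(N_PLAYERS):
    # Each even-money bet wins +1 with probability (0.5 - edge/2), else loses -1.
    p_win = 0.5 - HOUSE_EDGE / 2
    total = sum(1 if random.random() < p_win else -1 for _ in range(N_BETS))
    results.append(total)

# Any individual player is hard to call: plenty walk away winners ("weather").
winners = sum(1 for r in results if r > 0)
print(f"players who came out ahead: {winners} of {N_PLAYERS}")

# But the average outcome is very predictable ("climate").
avg = sum(results) / N_PLAYERS
print(f"average result per player: {avg:.1f} (expected ~{-HOUSE_EDGE * N_BETS:.0f})")
```

Run it and a sizeable minority of players come out ahead, yet the mean lands almost exactly on the house edge.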
You can’t accurately forecast what the weather will be on June 28th 80 years from now, for the same reason you can’t predict if it will be raining at your house at 3:29 pm two days from now, or where some molecule of water will end up in the Mississippi River an hour from now. But you can assuredly make less specific forecasts based on current trends. You know it’s likely to rain at some point in the morning two days from now, that the water molecule will flow roughly downstream, and that average air temperatures will increase if we keep putting CO2 in the atmosphere.
Go down to a sea wall and dunk a measuring stick in. As the waves lap against it, how far in advance can you predict the exact height measured? Not much more than a second - it's too chaotic.
But observe it for a day or two, and deduce the existence of tides. Now you can easily predict the average (minute-to-minute) water height months in advance.
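A toy version of the measuring-stick experiment, using random chop as a stand-in for chaotic waves (all constants invented):

```python
import math
import random

random.seed(0)

def water_height(t):
    """Slow deterministic tide plus fast, unpredictable chop (illustrative only)."""
    tide = 2.0 * math.sin(2 * math.pi * t / (12.42 * 3600))  # ~12.4 h tidal cycle
    chop = random.gauss(0, 0.5)                              # stands in for chaos
    return tide + chop

# Instant readings a few seconds apart look like noise...
print([round(water_height(t), 2) for t in range(5)])

# ...but the minute-averaged height tracks the tide, even far in the future.
def minute_average(t0):
    return sum(water_height(t0 + s) for s in range(60)) / 60

t_future = 30 * 24 * 3600   # one month from now, in seconds
tide_prediction = 2.0 * math.sin(2 * math.pi * t_future / (12.42 * 3600))
print(f"minute average in a month: {minute_average(t_future):.2f}")
print(f"tide model prediction:     {tide_prediction:.2f}")
```

The chop swamps any second-by-second prediction, but averaging over a minute shrinks its effect enough that the tide model alone lands close to the measured value.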