But he also has the incentive to exaggerate the AI's ability.
The whole idea of a double-blind test (and really, the whole scientific method) is based on one simple thing: even the most experienced and informed professionals can be comfortably wrong.
We'll only know when we see it. Or at least when several independent research groups see it.
> even the most experienced and informed professionals can be comfortably wrong
That's the human hallucination problem. In science it's a very difficult issue to deal with; only in hindsight can you tell which papers from a given period were the good ones. It takes a whole scientific community to arrive at the truth, and sometimes we fail.
I don't think so. The truth is advanced by individuals, not by the collective. The collective is usually wrong about things for as long as it possibly can be. Usually the collective first has to die before it accepts the truth.
> I thought (and could be wrong) that all of these concerns are based on a very low probability of a very bad outcome.
Among knowledgeable people who have concerns in the first place, I'd say giving the probability of a very bad outcome of cumulative advances as "very low" is a fringe position. It seems to vary more between "significant" and "close to unity".
There are some knowledgeable people like Yann LeCun who have no concerns whatsoever, but they seem singularly bad at communicating why this would be a rational position to take.
Given how dismissive LeCun is of the capabilities of SotA models, I think he believes the state of the art is very far from human-level, and will never be human-like.
Myself, I think I count as a massive optimist, as my P(doom) is only about 15% — basically the same as Russian Roulette — half of which is humans using AI to do bad things directly.