What's your approach? Do you only believe arguments that sound unconvincing? Or do you just ignore all arguments and adopt whatever beliefs the popular kids in the schoolyard are talking about?
The first step would be to recognize how wide the error bounds on one's epistemic estimates really are and redo those estimates accordingly, but the tl;dr is that one's certainty on edge-case predictions should drop by a lot.
Having low certainty on edge-case predictions is fine, but it should also mean correspondingly larger updates when a prediction one doubted comes true. For me, that was the case with AlphaGo, AlphaFold, and now ChatGPT. In all three cases I was highly skeptical that, in my lifetime, AI would beat humans at Go, adapt similar deep-learning methods to problems like protein folding, and blow the Turing Test right out of the water.
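To make the size of that update concrete, here's a minimal Bayes-rule sketch with made-up numbers (the prior and likelihoods are purely illustrative, not anything I actually measured). The point is just that if you assign a low probability to an edge-case event and it happens anyway, the posterior on the hypothesis that predicted it has to jump a lot.

```python
# Minimal Bayes-rule sketch of the point above, with illustrative numbers:
# a skeptic who watches an "unlikely" edge-case prediction come true
# owes the hypothesis behind it a large update.

def posterior(prior: float, p_event_if_true: float, p_event_if_false: float) -> float:
    """P(H | event) via Bayes' rule."""
    evidence = p_event_if_true * prior + p_event_if_false * (1 - prior)
    return p_event_if_true * prior / evidence

# Hypothesis H: "AI will keep clearing barriers people call impossible."
# Event: an edge-case prediction (e.g. superhuman Go) comes true.
prior = 0.10             # skeptic's prior on H (made up)
p_event_if_true = 0.80   # under H, the event is quite likely (made up)
p_event_if_false = 0.05  # the skeptic called it an edge case (made up)

print(posterior(prior, p_event_if_true, p_event_if_false))  # ~0.64
```

Going from 10% to roughly 64% off a single observation is exactly the "correspondingly larger update" I mean.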
I've had to update accordingly, and now I'm less confident that the barriers ahead will prove harder to break than the ones behind us.