This can happen in academia. I went through Stanford CS, finishing in 1985. That was about when it was clear that "expert systems" were not going to lead to Strong AI Real Soon Now, or, indeed, much of anywhere. But many faculty were in deep denial about that. The "AI Winter" followed. There was little progress until machine learning finally took off about two decades later.
I still remember reading an article about Doug Lenat and Cyc a long time ago in some pop sci publication and thinking, surely you can't be serious...
I don't understand how people ever thought such systems could become equivalent to human minds. I don't think that's any special insight on my part; rather, others' thinking was blinkered for some reason.
You're being downvoted, but we're basically drunk on these amazing successes in computer vision and applying at least some magical thinking when expecting them to extend to general (e.g. reinforcement learning) domains.
AI is doing a lot better now than in the 1980s. AI back then was groups at MIT, Stanford, and CMU, and a few small groups elsewhere. Almost all the 1980s AI startups went bust.
This time, there's enough success that the field is self-funding. Cynically, because mediocre machine learning is good enough for ad targeting.
I’m bearish on reinforcement learning right now. Is there good work happening on a comparable front? (Will a blocks-world solver ever play Super Mario?)