AI has been "just around the corner" since the 1950s; that is the historical evidence for the pessimistic stance against overoptimistic predictions.
LLMs are a huge stride forward, but AI does not progress like Moore's law. LLMs have revealed a new wall: combining multiple agents is not working out as hoped.
Perhaps without intending to, you've cited a pretty appropriate example of overconfident pessimism. Philosopher Hubert Dreyfus is most responsible for this portrayal of AI research in the '50s and '60s. He made a career of insisting that advances in AI would never come to pass, famously predicting that computers couldn't become good at chess because it required "insight", and routinely listing off what he believed were uniquely human qualities that couldn't be embodied in computers, always couched in underdefined terms such as intuition and insight.
Many of the things AI does now are exactly the type of things that doomsayers explicitly predicted would never happen, because they extrapolated from limited progress in the short term to absolute declarations over the infinite timeline of the future.
There's a difference between the outer limits of theoretical possibility on the one hand, and instant results over the span of a couple of news cycles on the other, and it's unfortunate that these get conflated.
This is exactly the point. The pessimists get the pedantic thrill of pointing out that, 60 years ago, some proponents were overconfident. But they neglect to notice the larger picture, which is one of extraordinary progress. They'll be sitting in the passenger seat of a self-driving car, arranging their travel itinerary with a chatbot fluent in English and 60 other languages, and smugly commenting on HackerNews about how "AI pessimists got it right in the 60s."
Lots of progress in some areas, very little in others. Neither the pessimists nor the optimists got it right.
(Btw., a really hardened pessimist might even say your example is mostly things that were doable when there were not many computers around at all... taxi driver to drive you, a secretary to book a flight, meet in a club to discuss things, ...)
To me, if artificial intelligence means anything, it means automating mundane tasks -- most of which humans can accomplish already. If someone tells me they're an AI pessimist because they think automating tasks like "converse intelligently about nearly any subject at length" and "drive my car anywhere while I sleep in the back" isn't an impressive AI achievement, then I think our disagreement pertains to our ambitions for the field, and lies outside the realm of the technical.
Right, I believe this is the more normal framing. I think Hubert Dreyfus spent his life constantly retelling the story of the 50s and 60s with a pessimistic narrative, and that is what made it stick, to the extent that it has. But it is a bizarre framing: even if it once looked like a guiding light for setting AI expectations, retelling the 50s and 60s over and over is not a helpful way of engaging with what has unfolded over the past 2-3 years.
1. AI has not been "around the corner since the 1950s," and 2. if you think there hasn't been forward progress in AI since the 1950s, you have no valid opinions on this subject.