Mind you, the "successes of NNs" didn't start to show up until around 2010, despite active research in multi-layer ANNs going back to the early 1980s. Just a decade ago, ANN classifier performance wasn't particularly remarkable compared to other methods. And with 20th-century hardware, the deep learning architectures of today were computationally infeasible.
It does seem rather harsh to hold Minsky to account for a conclusion which is only true with access to the massive computing resources of the 21st century. Not only was the future power of computation unknown in 1955; even given that prediction, how well neural networks would perform remains non-obvious - you have to actually run the things and see if they work.
None of that makes Minsky right, but it's hard to see how much could even have been achieved on neural nets back in '55. Our architecture design today descends from experimental results that were not going to be available for many decades.
True, things often seem simple with the advantage of hindsight. However, Minsky's original criticism concerned the linear separability limitation of the original perceptron, the only known ANN at the time, and as such it is as technically sound now as it was then. Even when people got some spare cycles on their computers and started throwing in extra layers to increase dimensionality, the results weren't too encouraging for a long time.
EDIT: well I basically made the same point as you.
This is news to me. I had been steeped in a different lore. I have read the original article (or perhaps it was an excerpt?). I don't recall this reference.
I see from the wikipedia article you linked to that they did know about multiple layers. I thought it was suspicious that they had somehow missed it, since it is so simple (at least to us now), and these guys are so very smart.
I wonder if they also knew (or realized, rather) that a single neuron with a non-monotonic activation function could also have "solved" XOR.
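For what it's worth, here's a minimal sketch of that idea (the "bump" activation and its thresholds are my own choices for illustration, not anything from the book): a single unit whose activation fires only when the weighted sum lands in a middle range computes XOR with no hidden layer at all.

    # Single unit: weighted sum of the two inputs, passed through a
    # non-monotonic "bump" activation that fires only for mid-range sums.
    def bump(s):
        return 1 if 0.5 < s < 1.5 else 0   # non-monotonic: 0, then 1, then 0 again

    def xor_unit(x1, x2):
        s = 1.0 * x1 + 1.0 * x2            # both weights 1, no hidden layer
        return bump(s)

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, xor_unit(a, b))    # 0 0 0 / 0 1 1 / 1 0 1 / 1 1 0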
Again, hindsight. Backpropagation was first applied to ANNs around the start of the 1980s. So we are slowly moving from Minsky wrongly criticizing perceptrons to Minsky not inventing a sound backpropagation/multilayer ANN algorithm, which is perhaps taking it a bit too far.
Just to be clear, Minsky didn't need backprop, just the multiple layers. (In order for a perceptron to act like an XOR gate, or to "solve the XOR problem" in some parlance.)
Minsky assumed a trained perceptron with the weights already set to act like an AND or an OR gate. He wasn't dealing with the learning problem.
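In that spirit, here's a minimal sketch of the hand-set-weights version (the particular weights and thresholds are my own illustration): two threshold units computing OR and AND, plus an output unit combining them, give XOR with no learning involved.

    # Two-layer network of threshold units with hand-picked weights.
    # No training: the weights are simply set so the net computes XOR.
    def step(s):
        return 1 if s > 0 else 0

    def xor_net(x1, x2):
        h_or  = step(x1 + x2 - 0.5)        # hidden unit 1: OR gate
        h_and = step(x1 + x2 - 1.5)        # hidden unit 2: AND gate
        return step(h_or - h_and - 0.5)    # output: OR and not AND = XOR

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, xor_net(a, b))     # 0 0 0 / 0 1 1 / 1 0 1 / 1 1 0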
Fair point, although without knowing that this approach was practically viable to begin with, it was getting into speculative territory. Not that Minsky didn't like to speculate...
Anyway, IMO people overestimate Minsky's influence in single-handedly shutting down an avenue of research. The reason the conclusions of _Perceptrons_ caught on is that they seemed sound and reasonable to his peers at the time.
I also seem to remember a video interview of his from a few years back where he elaborates on perceptrons and how many of his original conclusions still apply to state-of-the-art ANNs. Can't quite find it though.
And symbolic methods have beaten human players in chess and checkers for decades now, and have handled U.S. military logistics since Desert Storm. Markov chains have been writing spam-filter-busting prose for years, and graph clustering powered Google search.
It's great to be enthusiastic about breakthroughs, but the history of AI is littered with partial success stories.