> where AI essentially meant rule-driven behavior like logic programming
I think the full history is a little more complex.
In the early days, there were two camps: the "connectionists" who worked on neural-net type stuff, and the symbolic reasoners who worked on hand-authored rule-based systems. Both fell under the "AI" umbrella, believed their approach was the one true one, and squabbled over funding and public perception. (Because public perception affects funding.) Remember that at the time, much AI research was government or defense funded, so politics was heavily involved.
The connectionists invented neural networks. The symbolic folks gave us Lisp, Prolog, and a lot of compiler and parser theory stuff.
The connectionists hit a wall in the late sixties, shortly after "Perceptrons" was published in 1969. That book pointedly laid out the limitations of the neural networks of the day and effectively shut down research into them for over a decade. It was one of the causes of the first "AI winter" in the 70s.
After that, "AI" became roughly synonymous with symbolic reasoning and rule-based expert systems because that camp had won.
Then, in the 80s, backpropagation and other learning techniques for neural nets were finally worked out, and those researchers started making progress again. But by the time that work gained momentum, AI had been through one winter and a second was setting in as expert systems fell short of their hype, so "AI" no longer had the positive connotations it once did (at least where funding was concerned), and the term referred almost solely to symbolic reasoning. So the neural net camp started using "machine learning" to describe what they did instead.
In the early 2000s, big tech companies found themselves with lots of cheap computational power and tons of data on their hands, the two key ingredients to make machine learning useful. Meanwhile, symbolic reasoning and expert systems had petered out.
So "machine learning" got bigger and bigger until eventually it became the main computer intelligence approach in town. On top of that, it's gotten capable enough that the public has started associating it with the old image of what "AI" means. So now you see "AI" coming back to refer to what is, essentially, the same connectionist approach the term covered back in the 60s.
Connectionism didn't falter because people couldn't figure it out; it faltered because people figured out it needed computers roughly 1000x more powerful, which took a few decades to build.