The OP is including a lot of different concepts under the neural network umbrella. Things like restricted Boltzmann machines and hierarchical temporal memory are technically neural networks, but many computer scientists would consider them different enough in approach to think of them separately. That is, you wouldn't say "let's use a type of neural network to solve this problem"; you'd say "let's use a restricted Boltzmann machine".
It is true that these things are becoming more popular. I've found in practice that a modern computer scientist is still more likely to solve a simple learning problem with some form of regression, if only because it's faster than training a NN.
Restricted Boltzmann machines are as bona fide a neural network (NN) as you can get, and they have been around since the golden age of neural networks. You have the same layered structure, with weighted connections between layers and the same "squashing function" (the logistic sigmoid). The only "restriction" is that the hidden ("unknown") and visible ("known") nodes must live on different layers, so that connections run only between a visible node and a hidden node. They have gone by different names over the years, and so has the theory explaining them, for example Smolensky's "Harmony Theory".
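For concreteness, here's a minimal NumPy sketch of that bipartite structure, with one step of contrastive divergence (CD-1) thrown in as the training rule; the layer sizes, learning rate, and the CD-1 choice are all illustrative assumptions on my part, not anything the above commits to:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    """The 'squashing function' shared with classic layered nets."""
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal RBM: visible and hidden nodes live on separate layers,
    and weights exist only between a visible and a hidden node
    (the 'restriction' in the name)."""
    def __init__(self, n_visible, n_hidden):
        self.W = rng.normal(0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_update(self, v0, lr=0.1):
        """One step of contrastive divergence (CD-1)."""
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_probs(h0_sample)
        h1 = self.hidden_probs(v1)
        # positive phase minus negative phase, averaged over the batch
        self.W += lr * (v0.T @ h0 - v1.T @ h1) / v0.shape[0]
        self.b_v += lr * (v0 - v1).mean(axis=0)
        self.b_h += lr * (h0 - h1).mean(axis=0)

# toy usage: fit random 6-bit binary patterns
data = rng.integers(0, 2, size=(100, 6)).astype(float)
rbm = RBM(n_visible=6, n_hidden=3)
for _ in range(50):
    rbm.cd1_update(data)
```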
I think NN is a broad enough category that no matter what you want to use or describe, you will have to qualify your "let's use blah" statements with a particular kind of neural network. Similar in spirit to "let's use a parser" vs. "let's use an LALR parser".
But back to the topic of the newfound interest in NNs: part of the reason is that there have been new developments in training algorithms that work significantly better than those used traditionally (greedy layer-wise pretraining is the example usually cited). With these methods NNs require far less babysitting, something they traditionally needed a great deal of.
The other reason is the sheer scale of the data sets available now, which has forced machine learners to move from powerful but batch optimization algorithms (quadratic programming, for instance) to the simple, online, gradient-based algorithms that have been the forte of the NN community all along.
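A toy sketch of that batch-vs-online distinction, using plain least squares as a stand-in problem (the data and learning rate are made up; the point is the access pattern, not the model):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=10_000)

# batch approach: solve the whole least-squares problem at once
w_batch, *_ = np.linalg.lstsq(X, y, rcond=None)

# online approach: one cheap gradient step per example,
# never needing more than one row at a time
w_sgd = np.zeros(5)
lr = 0.01
for x_i, y_i in zip(X, y):
    grad = (x_i @ w_sgd - y_i) * x_i   # gradient of squared error on one example
    w_sgd -= lr * grad

print(np.round(w_batch, 2), np.round(w_sgd, 2))  # both land near the true weights
```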
Training an NN is no different from regression. It is another name/technique for (somewhat systematically) building a tower of increasingly complex regression functions. If the simplest (linear) one works, it's imperative that one use the simplest one, in the interest of good predictive accuracy on unseen data. Bundled together with the low training time the parent mentioned, it's a win-win.
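To make the "tower of regressions" reading concrete, here's a forward-pass sketch (the shapes and names are mine, purely illustrative): a one-hidden-layer network is just logistic-regression-style units whose outputs feed a final linear regression.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# level 0 of the tower: plain linear regression
def linear(X, w, b):
    return X @ w + b

# level 1: each column of H is a logistic regression of the inputs;
# a linear regression on those outputs gives the network's prediction
def one_hidden_layer(X, W1, b1, w2, b2):
    H = sigmoid(X @ W1 + b1)
    return linear(H, w2, b2)

X = rng.normal(size=(4, 3))
W1, b1 = rng.normal(size=(3, 5)), np.zeros(5)
w2, b2 = rng.normal(size=5), 0.0
print(one_hidden_layer(X, W1, b1, w2, b2))
```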
In my experience, the reason is less due to speed and more because there's a perception that training algorithms for things like RBMs still involve a certain amount of "black magic": tuning parameters, deciding when training has converged, and so on. This is in contrast to linear/logistic regression or support vector machines, where you can basically turn a crank and get an answer out.
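That "turn a crank" experience, sketched with scikit-learn's logistic regression on a made-up toy problem (assuming scikit-learn is installed): a convex loss plus a default solver means there is essentially nothing to babysit.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] - X[:, 1] > 0).astype(int)  # toy labels

clf = LogisticRegression().fit(X, y)  # no learning rate, no schedule,
print(clf.score(X, y))                # no "has it converged yet?" to tune
```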