You wrote the answer I was just a little bit too lazy to write...
As to the GP: Geoff Hinton (probably the best-known neural networks researcher) said in his Coursera course that neural networks thrive on problems with a lot of structure that can be discovered and encoded, while simpler models like SVMs or Gaussian processes may do better on problems without much deep structure to find.
Also, a lot of current neural network research involves using networks to learn better representations of the data. These cleaner representations (which can be thought of as a sort of semantic PCA) often make classification far easier, which explains the strong results. Learned representations also make transfer learning (carrying knowledge from one domain over to another) much more feasible.
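To make the "semantic PCA" analogy concrete, here's a minimal sketch that uses literal PCA as a stand-in for a learned encoder: fit a representation on training inputs, classify in that space, and note that the same fitted encoder could be reused for a related task. A real network learns a nonlinear, task-driven encoding; this is just the simplest linear analogue, using scikit-learn's bundled digits dataset.

```python
# Sketch: PCA standing in for a learned representation. A deep network
# would learn a nonlinear encoding jointly with the task; PCA is the
# simplest linear analogue of "compress the data into cleaner features".
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # 8x8 digit images, flattened to 64 features
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Representation learning" step: fit an encoder on the raw inputs.
encoder = PCA(n_components=16).fit(X_train)

# Classify in the learned 16-dim representation rather than raw pixel space.
clf = LogisticRegression(max_iter=1000).fit(encoder.transform(X_train), y_train)
acc = clf.score(encoder.transform(X_test), y_test)
print(f"accuracy on 16-dim representation: {acc:.3f}")

# "Transfer" in miniature: the same fitted encoder can be reused for a
# related task (new labels, new classifier) without re-learning features.
```

The point of the sketch is the workflow, not the numbers: once a good encoder exists, downstream tasks train on its output, which is exactly what makes pretrained network features transferable.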