
You wrote the answer I was just a little bit too lazy to write...

As to the GP: Geoff Hinton (probably the best-known neural-networks researcher) said in his Coursera course that neural networks excel at problems with a lot of underlying structure that can be encoded, while simpler models like SVMs or Gaussian processes may be better suited to problems without much deep structure to discover.

Also, a lot of current neural-network research focuses on using neural networks to learn better representations of the data. These cleaner representations (which can be thought of as a kind of semantic PCA) often make classification far easier, which explains many of the strong results. Learning representations also makes transfer learning (carrying knowledge from one domain over to another) much more practical.
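To make the "learned representations as features" idea concrete, here is a minimal sketch, assuming PyTorch and torchvision are available: a network pretrained on one domain (ImageNet) has its final layer removed so it outputs its penultimate-layer activations, and a small classifier for a new task is trained on those features instead of on raw pixels. The dataset and the 10-class output are placeholders, not anything from the comment above.

    # Sketch: reuse a pretrained network's learned representation for a new task.
    import torch
    import torch.nn as nn
    from torchvision import models, transforms

    # Load a network pretrained on ImageNet and replace its classification head
    # with an identity, so the forward pass returns the 512-d learned feature vector.
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = nn.Identity()
    backbone.eval()

    # Standard ImageNet preprocessing for the pretrained weights.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    @torch.no_grad()
    def embed(pil_images):
        """Map a list of PIL images to 512-d feature vectors (the learned representation)."""
        batch = torch.stack([preprocess(img) for img in pil_images])
        return backbone(batch)

    # A simple classifier trained on those features; on a new domain this is
    # typically far easier than training a deep model from raw pixels.
    classifier = nn.Linear(512, 10)  # 10 target classes is a placeholder

In this sketch the heavy lifting (representation learning) was done once on the source domain, and only the small linear layer needs to be fit on the target domain, which is the essence of transfer via learned representations.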



