
>I'm willing to bet the deep learning thing is just one more Neat fad that will eventually cause disillusionment at its lack of results, reverting us back to the Scruffy view that intelligence is far too complex to be described holistically by small sets of simple algorithms. The great thing about the Scruffy philosophy is that it isn't derogatory...deep learning will always have a place as a tool in its tool set. It merely doesn't hold unreasonable expectations.

Deep learning is already a Scruffy fad. It basically says, "Hey, let's use a really huge hypothesis space of circuits that often includes a heavy prior towards convolutions." Gradient descent is a Neat principle, but the whole point of things like improved training methods, new objective functions, and convolutions was to deal with the vanishing- and exploding-gradient problems.

Deep learning didn't come up with its own Neat principle; it invented Scruffy methods to apply a Neat principle to a really fucking huge hypothesis space, so long as you've got a pretty big dataset.
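
To make that split concrete, here's a minimal sketch (assuming PyTorch and random placeholder data, neither of which the comment specifies): the Neat part is plain gradient descent on a differentiable objective; the Scruffy part is the pile of architectural and training choices (the convolution prior, ReLU, etc.) that make it behave on a huge hypothesis space.

    import torch
    import torch.nn as nn

    # Hypothesis space: an over-parameterized circuit with a heavy prior
    # towards convolutions baked into the architecture (the Scruffy part).
    model = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1),
        nn.ReLU(),  # one of the Scruffy tricks that keeps gradients from dying out
        nn.Conv2d(16, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Flatten(),
        nn.Linear(16 * 28 * 28, 10),
    )

    # Stand-in for the "pretty big dataset" (random tensors, just so the loop runs).
    x = torch.randn(64, 1, 28, 28)
    y = torch.randint(0, 10, (64,))

    # The Neat principle: follow the gradient of a differentiable objective.
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for step in range(100):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()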



