If someone were to ask me what's wrong with DL (not that anyone would, since I'm an unknown), I'd say the lack of theory. Most DL results look very hacky to me. Someone says max pooling works; then someone else comes along and says it's not necessary. Someone says sigmoid or tanh are the best activation functions; someone else says ReLUs are better. And so on. Why? Why is one better than the other?
I'm no biologist, but I don't think our brains are going around doing a grid search for the best hyperparameters. Most DL results today come from throwing 1000s of Titans at the problem and then sitting back for a week while the beast coughs up a solution.
Tangential nitpick: one (very minor) nit I have with Prof LeCun's presentations is that he doesn't give more credit to Hinton and Schmidhuber. Hinton is mentioned a couple of times (3), but Schmidhuber is totally ignored; for example, when he mentions LSTM, it's cited as [Hochreiter 1997], even though it was a joint publication with Schmidhuber. It should be cited as [Hochreiter et al. 1997], as he does in the very next line.
I agree re: lack of theory. No easy answer on that one other than "keep looking and get more people to help". We are making major practical gains along the way (although many are quick to discount those--"that's it???"). It's science in practice, theory follows.
I disagree re: the 1000s of Titans thing. Google, Baidu, etc. are building large GPU clusters and have basically shown "similar resources = similar results", but everyone else is mostly using a single machine--maybe multi-GPU--and doing fine. A single Titan X is a BEAST for deep learning--nobody is using 1000s, and you only need one for great results on most datasets I've seen.
On the subject of Schmidhuber, I saw him speak once and he spent half the talk explaining how he invented everything he's talking about (EVERYTHING!) and the other half talking about how no one gives him credit. I'm only half joking, but I suspect there's more to the story than his side of it. Or it really is a miscarriage of justice.
Max pooling tests if a feature occurs anywhere in a certain area, rather than being sensitive to the exact location.
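To make that concrete, here's a minimal NumPy sketch (my own illustration, not from the talk): with 2x2 max pooling, a strong activation produces the same pooled output no matter where inside the window it lands.

```python
import numpy as np

def max_pool_2x2(x):
    """Non-overlapping 2x2 max pooling (stride 2) over a 2D feature map."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# The same "feature" (a strong activation) at two different positions
# inside the top-left 2x2 window...
a = np.zeros((4, 4)); a[0, 0] = 1.0
b = np.zeros((4, 4)); b[1, 1] = 1.0

# ...pools to the same output: the pooled map only says "the feature
# occurred somewhere in this window", not exactly where.
print(max_pool_2x2(a))
print(max_pool_2x2(b))
print(np.array_equal(max_pool_2x2(a), max_pool_2x2(b)))  # True
```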
ReLUs fit combinations of piecewise-linear functions, whereas sigmoids are more strongly nonlinear and can be harder to optimize; sigmoids were originally continuous approximations of binary threshold functions.
All these things can approximate each other. Neurons can approximate the max function, and ReLUs can approximate sigmoids. So there really isn't much to fret over.
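As a rough illustration of those approximation claims (my own sketch, not from any cited source): max can be written exactly in terms of ReLUs, and two shifted ReLUs give a piecewise-linear "hard sigmoid" that tracks the logistic sigmoid reasonably well.

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

# max(a, b) expressed exactly with a ReLU unit: max(a, b) = b + relu(a - b)
def max_via_relu(a, b):
    return b + relu(a - b)

# A piecewise-linear "hard sigmoid" built from two ReLUs,
# a crude approximation of the logistic sigmoid.
def hard_sigmoid(x):
    return relu(0.25 * x + 0.5) - relu(0.25 * x - 0.5)

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-6, 6, 13)
print(max_via_relu(3.0, 5.0))                        # 5.0
print(np.max(np.abs(hard_sigmoid(x) - sigmoid(x))))  # worst-case gap, roughly 0.12
```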
It's like asking for a theory of which programming language is better. In practice they will have different advantages in different domains, but they are all Turing complete.
> It's like asking for a theory of which programming language is better.
There's nothing wrong with ignoring programming-language theory and just picking one, seat-of-the-pants style. But that's only because programming as it exists now is a fairly static "art form" with only marginal progress expected.
However, if deep learning currently works inexplicably well and one aims to explain that scientifically, one would want an explanation that also guides how to extend the process.
I've done a bit of applied math, where knowing which kind of function to pull out of the toolbox for which situation was the really-smart-people's purview--fairly well-guarded folk knowledge, actually. I'm used to the "little bit of this, little bit of that" kind of explanation for which functions to use when and why. If one weighs them long enough, I assume one can intuitively figure out what to do.
But if we're aiming to advance fundamentally beyond the state of the art, we would want to quantify these advantages and disadvantages, to automate one more layer. So we really should have a "real" theory here.
Do you think no one is trying? Should researchers just ignore all results until the underlying theory is found? What if we don't find it for another 50 years? I find it incredibly hard to be critical in this situation.
> Max pooling tests if a feature occurs anywhere in a certain area, rather than being sensitive to the exact location.
From Geoff Hinton's AMA on Reddit: "The pooling operation used in convolutional neural networks is a big mistake and the fact that it works so well is a disaster."
Hinton doesn't like that pooling loses track of the exact locations where features are located, and just tests if a feature occurs in some area.
The basic effect of this is to decrease the resolution, so it's more tractable to operate on. Without pooling you are stuck with a huge resolution at each layer.
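For a sense of scale (my own illustrative numbers, assuming 2x2 pooling with stride 2): each pooling layer quarters the number of spatial positions, which is what keeps later layers tractable.

```python
# Spatial size after repeated 2x2/stride-2 pooling, starting from a
# hypothetical 224x224 feature map (the numbers are just illustrative).
h = w = 224
for layer in range(1, 6):
    h, w = h // 2, w // 2
    print(f"after pool {layer}: {h}x{w} ({h * w} positions per channel)")
# 224x224 -> 112x112 -> 56x56 -> 28x28 -> 14x14 -> 7x7
```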
But not in the same brain. Sure, evolution happened and picked the right hyperparameters; but today's brain comes with those hyperparameters baked in (with some small amount of randomness). It is able to do all the things it can do without the luxury of parallel training and tuning.
For example: you don't need to show a baby 1000 photos of mugs before it can tell what's a mug and what isn't. Just show it one example of a mug a couple of times, and from then on it's able to identify mugs and mug-like objects pretty reliably.
> [the brain] is able to do all the things it can do without the luxury of parallel training and tuning.
I beg to differ. Newborn babies can hardly do anything. Their brains are undergoing "parallel training and tuning" 24/7 starting even before they are born. Babies train themselves on thousands of hours of visual stimuli to gain the 3D object recognition capabilities to reliably identify objects such as mugs.
Newborn babies can hardly do anything because their parameters have not been set; but their hyperparameters (things like, to use NN analogies: activation function, learning rate, etc.) are baked in.
Right. Evolution picked the hyperparameters, and experiential learning picks the parameters. (To first order; certainly there are some instinctual behaviors programmed by evolution and I wouldn't be surprised if some hyperparameters are influenced by experience as well.)
No, I don't think so. I think that evolution is a complex system in the sense that it creates feedback loops altering the fitness landscapes it optimizes over. Organisms interact with each other through competition and predation, and with their environment, for example by liberating oxygen from water. If evolution were about optimization, then the current set of organisms would be "fitter" than the dinosaurs (for example). I don't fancy my chances against a T. rex; more to the point, I don't fancy the chances of anything around at the moment against a T. rex!
But the T. rex was a local maximum. I don't fancy a T. rex's chances against humankind with the technology, intelligence, and group organization we have. The T. rex was optimized for physical force, but that proved useless against asteroids. Humans would have a chance against that type of threat.
Well, that's what makes it research then. It's a new field, and obviously it needs more science and creative thinking put into drawing up theories about deep learning. The field already seems pretty hard for beginners, so of course there will be fewer scientists working on theory.