I read those papers when they came out. Correct me if I'm wrong, but they were not peer-reviewed.
The first one looks very impressive from the examples they've provided, but extraordinary claims require extraordinary proof. I will believe it only when I see an interactive demo. It's been nearly a year and I haven't seen it surface in a real product or usable prototype of any sort. Why?
Somehow all the papers that have "deep neural" stuff get 1/100th of the scrutiny applied to other AI research. I don't see anyone hyping up MIT's GENESIS system, for example.
The second paper has a really weird experimental setup. The point of one-shot learning is to be able to extract information from a limited number of examples. The authors, however, pretrain the network on a very large number of examples highly similar to the test set. Whether or not their algorithm is impressive depends on how well it generalizes, and they're not really testing generalization -- at all. Again, why?