
> If this is the state of the art in machine translation, how on Earth can you expect a machine that can converse on human level in three years?

Here are two good papers -- one from last year, one very recent -- that might change your mind.

https://arxiv.org/pdf/1506.05869.pdf

https://arxiv.org/pdf/1605.06065v1.pdf

Over the past 10 years (deep learning got going in ~2006, really), the state of the art has improved at an exponential rate, and that's not slowing down. There are plenty of reasons to be bullish. Or at least, now seems like a bad time to leave the field.




I read those papers when they came out. Correct me if I'm wrong, but neither of them was peer-reviewed.

The first one looks very impressive from the examples provided, but extraordinary claims require extraordinary proof. I will believe it only when I see an interactive demo. It's been nearly a year, and I haven't seen it surface in a real product or usable prototype of any sort. Why?

Somehow, any paper with "deep neural" in it gets 1/100th of the scrutiny applied to other AI research. I don't see anyone hyping up MIT's GENESIS system, for example.

The second paper has a really weird experimental setup. The point of one-shot learning is to extract information from a limited number of examples. The authors, however, pretrain the network on a very large number of examples highly similar to the test set. Whether or not their algorithm is impressive depends on how well it generalizes, and they're not really testing generality -- at all. Again, why?
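To make the objection concrete: a one-shot benchmark only tests generality if the meta-test episodes draw from classes the model never saw during pretraining. Here's a minimal sketch of such an episode sampler (the function names and toy dataset are my own, not from the paper):

```python
import random

def split_classes(all_classes, n_test, seed=0):
    """Partition class labels into DISJOINT meta-train / meta-test sets,
    so pretraining never touches the classes used for evaluation."""
    rng = random.Random(seed)
    classes = list(all_classes)
    rng.shuffle(classes)
    return classes[n_test:], classes[:n_test]

def sample_episode(dataset, classes, n_way=5, k_shot=1, rng=None):
    """Build one N-way, K-shot episode: a support set plus a single query.

    dataset: dict mapping class label -> list of examples.
    Only classes from the given split (train or test) are used.
    """
    rng = rng or random.Random()
    episode_classes = rng.sample(classes, n_way)
    target = rng.choice(episode_classes)
    support, query = [], None
    for c in episode_classes:
        # Draw one extra example for the target class to serve as the query.
        examples = rng.sample(dataset[c], k_shot + (1 if c == target else 0))
        support.extend((x, c) for x in examples[:k_shot])
        if c == target:
            query = (examples[k_shot], c)
    return support, query

# Toy dataset: 20 classes with 10 examples each.
data = {f"class_{i}": [f"c{i}_ex{j}" for j in range(10)] for i in range(20)}
train_classes, test_classes = split_classes(data, n_test=5)

# The check the critique is asking for: no class overlap between splits.
assert set(train_classes).isdisjoint(test_classes)

support, query = sample_episode(data, test_classes, n_way=5, k_shot=1)
```

If instead the pretraining data is merely drawn from the same (or a near-identical) pool of classes as the test episodes, the disjointness assertion above fails, and the evaluation measures memorization of familiar classes rather than one-shot generalization.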


Ok, here's some code for anyone who wants to try it themselves:

https://github.com/tristandeleu/ntm-one-shot

https://github.com/macournoyer/neuralconvo



