> I want us, as a community, to stop treating deep learning any differently than any other ML algorithm, and to reach a consensus, based on scientific facts, about its possibilities and limitations thus far. If we, "the experts", don't understand these things about our own algorithms, how can we expect the rest of the world to understand them?
I agree. It's interesting watching the "debate" around deep learning. All the empirical results are available for free online, yet there's so much misinformation and confusion. If you're familiar with the work, you can fill in the blanks on where things are headed. For instance, in 2011, I think it became clear that RNNs were going to become a big thing, based on work from Ilya Sutskever and James Martens. Ilya was then doing his PhD and is now running OpenAI, doing research backed by a billion dollars.
The pace of change in deep learning is accelerating. It used to be fairly easy for me to stay current with new papers as they came out; now I have a backlog. To a certain extent, it doesn't matter what other people think; much of the debate is just noise. I don't know what AGI is. If it's passing the Turing test, we're pretty close, 3 years max, maybe by the end of the year. Anything more than that is too metaphysical and prone to interpretation. But a bunch of benchmark datasets/tasks have been established now. ImageNet was the first one everyone heard about, I think, but datasets like COCO and 1B Words have come out since then and become standard benchmarks. Those benchmarks will keep improving, pursuing those improvements will lead to new discoveries re: "intelligence as computation", and something approximately based on "deep learning" will drive it for a while.
> If it's passing the Turing test, we're pretty close, 3 years max
Well yes? If a Turing test you realize the simulation of some idiot in the online chat, it has long been there - and nobody wants. But the system, which can lead a meaningful conversation, today there is no trace. And there is even no harbingers of its occurrence.
<- This was translated by Google Translate from a piece of perfectly intelligible, grammatically correct text in another language. If this is the state of the art in machine translation, how on Earth can you expect a machine that can converse at a human level to arrive within three years?
Sadly, I read the Google-translated text and it read like it was written by someone for whom English was a second language. I didn't realize it was an "example" until I read your next paragraph. So it had me fooled.
You could probably replace 90% of YouTube comments with a simple trigram-based chat bot and no one would notice. But that's hardly a good measure of AI quality.
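To make the trigram claim concrete: such a bot is just a Markov chain over word triples, trained by counting which word follows each adjacent word pair. A minimal sketch in Python (the toy corpus and seed pair here are made up; a real bot would be trained on a large comment dump):

```python
import random
from collections import defaultdict

def build_trigram_model(tokens):
    # Map each adjacent word pair to the words observed to follow it.
    model = defaultdict(list)
    for a, b, c in zip(tokens, tokens[1:], tokens[2:]):
        model[(a, b)].append(c)
    return model

def generate(model, seed, max_words=20):
    # Random walk: at each step, sample one observed continuation.
    a, b = seed
    out = [a, b]
    for _ in range(max_words):
        followers = model.get((a, b))
        if not followers:
            break
        a, b = b, random.choice(followers)
        out.append(b)
    return " ".join(out)

# Toy usage; the "corpus" is a stand-in for real comment data.
tokens = "nice video lol nice video first comment lol first".split()
model = build_trigram_model(tokens)
print(generate(model, ("nice", "video")))
```

The output is locally plausible but globally incoherent, which is exactly why it might pass in a low-effort comment section while telling us nothing about AI quality.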
That said, your comment illustrates the main problem with the Turing test: it depends on too many factors and assumptions that have nothing to do with the AI itself.
A good AGI test should be constructed in such a way that any normal person passes it with 100% certainty and no trivial system can pass it at all.
Over the past 10 years (deep learning got going in ~2006, really), the state of the art has improved at an exponential rate, and that's not slowing down. There are plenty of reasons to be bullish. Or at least, now seems like a bad time to leave the field.
I read those papers when they came out. Correct me if I'm wrong, but they were not peer-reviewed.
The first one looks very impressive from the examples they've provided, but extraordinary claims require extraordinary proof. I will believe it only when I see an interactive demo. It's been nearly a year and I haven't seen it surface in any real product or usable prototype. Why?
Somehow all the papers that have "deep neural" stuff get 1/100th of the scrutiny applied to other AI research. I don't see anyone hyping up MIT's GENESIS system, for example.
The second paper has a really weird experimental setup. The point of one-shot learning is to extract information from a limited number of examples. The authors, however, pretrain the network on a very large number of examples highly similar to the test set. Whether or not their algorithm is impressive depends on how well it generalizes, and they're not really testing generality -- at all. Again, why?
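For what it's worth, the fix is simple: a fair one-shot evaluation holds out entire classes, not just extra examples of classes seen during pretraining. A minimal sketch of that split in Python (the class labels and data are hypothetical; this illustrates the evaluation protocol, not the paper's actual code):

```python
import random

def one_shot_split(class_to_examples, n_test_classes, seed=0):
    """Hold out whole classes, not just examples. The model never sees
    any instance of a held-out class during pretraining, so one-shot
    accuracy on those classes measures generalization to novel concepts."""
    rng = random.Random(seed)
    classes = sorted(class_to_examples)
    rng.shuffle(classes)
    held_out = set(classes[:n_test_classes])
    pretrain = {c: xs for c, xs in class_to_examples.items() if c not in held_out}
    test = {c: xs for c, xs in class_to_examples.items() if c in held_out}
    return pretrain, test

# Toy usage with hypothetical class labels:
data = {"alpha": [1, 2, 3], "beta": [4, 5], "gamma": [6, 7], "delta": [8]}
pretrain, test = one_shot_split(data, n_test_classes=2)
print(sorted(pretrain), sorted(test))  # disjoint class sets
```

If pretraining classes overlap with (or closely mirror) the test classes, the reported one-shot numbers mostly measure memorization of the domain, not generalization.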