So then the discussion is all about defining intelligence, beginning with a fuzzy conception of the required qualities. The word literally means the ability to select, to read, to choose between. The discussion can be a means to judge the AIs, or, for the sake of argument, to judge and improve intelligence itself, with AI as a heavily simplified model, which is old hat by now.
Do you think that's irrational? Do you expect neuroscience, or are you rather interested in mathematics? I thought that's how one learns in the absence of external inputs: by recombining the old inputs, i.e. the fuzzy notions, to generate new ones. I'd think that's how recurrent networks work. What do you know about it (honest question)?
Fuzzy doesn't mean wrong. Underspecification, as little as I know about it, is a feature.
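The "recombination of old inputs" idea above can be sketched in a few lines. This is only a toy illustration in numpy, not anyone's actual model: a recurrent cell whose hidden state is fed back through recurrent weights, so that after a single external input it keeps generating new states purely by recombining its own past activations.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 8
W_hh = rng.normal(scale=0.5, size=(n, n))  # recurrent weights (state -> state)
W_xh = rng.normal(scale=0.5, size=(n, n))  # input weights (input -> state)

def step(h, x=None):
    """One recurrent update; with x=None the state evolves from itself alone."""
    pre = W_hh @ h
    if x is not None:
        pre = pre + W_xh @ x
    return np.tanh(pre)

# Drive the network with one external input, then let it free-run.
h = np.zeros(n)
h = step(h, x=rng.normal(size=n))   # the only external input
trajectory = [h]
for _ in range(5):
    h = step(h)                     # no input: new states from old state only
    trajectory.append(h)

print(len(trajectory))  # several distinct states from a single input
```

The point of the sketch is just that the feedback loop lets the network produce novel internal states without any further external signal, which is the loose analogy to "learning by recombining old inputs."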
I remember similar arguments in the early 90s when I was doing my PhD. They got about as far as these do. The same arguments will be happening in another 25 years, and beyond: even when a computer is super-human in every conceivable way, there'll be arguments over whether it is 'real AI'. And nobody will define their terms then either. Ultimately the discussion will be as irrelevant then as it is now, and, then as now, it will mostly take place between well-meaning undergrads and non-specialist pundits.
(Philosophers of science have a discussion that sounds similar, at least to someone just perusing the literature to bolster their position, but it is rather different, and also not particularly relevant, in my experience.)
> What do you know about it (honest question)?
About recurrent NNs? Not much beyond the overview kind of level. My research was in evolutionary computation, though I did some work on evolving NN topologies, and using ecological models to guide unsupervised learning.
Yes, obviously, as that's the topic, but also the rest of what I mentioned: neuroscience, maths, leaning on logic and philosophy.
> I remember similar arguments in the early 90s
That's why I mention neuroscience, the ideas are much older.
> even when a computer is super-human in every conceivable way, there'll be arguments over whether it is 'real AI'
Of course. Just because it's superhuman doesn't mean we humans would know what it is, whether it is what we think it is, or whether that's all there could be.
Real (from res, 'matter', possibly related to 'the goods', plus the adjective suffix -alis) means worthy, and obviously an AI is only as good as its contestants are bad. It won't be real for long before it's thrown in the trash once a better AI is found.
That'll stop when the AI can settle the argument convincingly. That's what's going on: evangelizing. And we do need that, because if not for the sake of the art itself, then as proof that the answers and insights apply to other fields.
> And nobody will define their terms then either. Ultimately the discussion will be as irrelevant then as it is now
LeCun has certainly come a lot further since then, and he defines the terms in software. As I said, the discussion is just about what to make of it. Of course many come up basically empty; that's why the discussion is important, and that's why I asked what you know about it. I think it's a very basic question and not one that's easy to grow tired of. If you work in that field, maybe it feels different, and specialized to computation.
There might not be much to say about it, in which case it should be all the easier to summarize in a short post. Or there's indeed more to it, in which case I'd appreciate a hint, to test my own understanding and learn.
I don't really know what LeCun talks about, or the techniques you studied, so I'm saying this just for perspective. I'm generally interested in learning, and computation is just one relevant and informative angle. Maybe that's what bothers you, learning to learn, and why it's freshmen bothering with it, but learning to learn is maybe really just learning, or impossible. That's the kind of logical puzzle that's to be taken half-jokingly. Don't beat yourself up over it.