That's right. 'Intelligence' isn't a useful technical concept. Think of it like 'beauty' or 'interestingness': it's a quality of the impression something makes on you, not an external, objective thing.
By this definition, a successful AI makes the impression of intelligence on the people who observe its behaviour.
This definition is pretty well known, though not universally agreed on, and it serves me well in my professional AI-research life by removing this otherwise tediously unresolvable argument.
So then the discussion is all about defining intelligence, beginning with a fuzzy conception of the required qualities. The word literally means the ability to select, to read, to choose between (Latin inter-legere). The discussion can be a means to judge the AIs, or, for the sake of the argument, to judge and improve intelligence itself, with AI as a heavily simplified model, which is old hat by now.
Do you think that's irrational? Do you expect neuroscience, or are you rather interested in mathematics? I thought that's how one learns in the absence of external inputs: by recombination of the old inputs, i.e. the fuzzy notions, to generate new ones. I'd think that's how recurrent networks work. What do you know about it (honest question)?
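Roughly what I have in mind, as a toy sketch (my own simplification, nothing from the literature): the hidden state keeps recombining what it has already absorbed, even after the inputs stop.

```python
# Toy sketch: the hidden state is a running recombination of past inputs,
# and keeps evolving even when external input stops. Hypothetical toy code,
# not any particular published model.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
W_in = rng.normal(scale=0.5, size=(n_hid, n_in))    # input -> hidden
W_rec = rng.normal(scale=0.5, size=(n_hid, n_hid))  # hidden -> hidden (the recurrence)

h = np.zeros(n_hid)
for x in rng.normal(size=(10, n_in)):  # driven phase: external inputs arrive
    h = np.tanh(W_in @ x + W_rec @ h)

for _ in range(10):                    # free-running phase: no new input;
    h = np.tanh(W_rec @ h)             # the state recombines only what it stored
print(h.round(3))
```

A trained RNN does this with learned weights rather than random ones, of course; the point is only the recurrence.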
Fuzzy doesn't mean wrong. Underspecification, for what little I know about it, is a feature.
I remember similar arguments in the early 90s when I was doing my PhD. They got about as far then. The same arguments will be happening in another 25 years, and beyond: even when a computer is super-human in every conceivable way, there'll be arguments over whether it is 'real AI'. And nobody will define their terms then either. Ultimately the discussion will be as irrelevant then as it is now, and, then as now, it will mostly take place between well-meaning undergrads and non-specialist pundits.
(Philosophers of science have a discussion that sounds similar to someone just skimming the literature to bolster a position, but it is in fact rather different, and also not particularly relevant, in my experience.)
> What do you know about it (honest question)?
About recurrent NNs? Not much beyond the overview kind of level. My research was in evolutionary computation, though I did some work on evolving NN topologies, and using ecological models to guide unsupervised learning.
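For anyone unfamiliar with that line of work, the core loop looks roughly like this (a deliberately toy sketch, not the code I actually used): mutate a connection genome, keep the variant that scores better.

```python
# Toy sketch of evolving a network topology: treat the connection list as a
# genome, mutate it, and keep the variant with the better fitness score.
# Simplified hill-climber for illustration; fitness() is a stand-in objective.
import random

random.seed(1)
N_NODES = 6

def random_net():
    # genome: directed weighted connections, each present with probability 0.3
    return {(i, j): random.uniform(-1, 1)
            for i in range(N_NODES) for j in range(N_NODES)
            if i != j and random.random() < 0.3}

def mutate(net):
    child = dict(net)
    if child and random.random() < 0.5:      # perturb an existing weight
        k = random.choice(list(child))
        child[k] += random.gauss(0, 0.2)
    else:                                    # add (or rewire) a connection
        i, j = random.sample(range(N_NODES), 2)
        child[(i, j)] = random.uniform(-1, 1)
    return child

def fitness(net):
    # stand-in objective: favour few but strong connections
    return sum(abs(w) for w in net.values()) - 0.5 * len(net)

best = random_net()
for _ in range(200):
    child = mutate(best)
    if fitness(child) > fitness(best):
        best = child
print(len(best), round(fitness(best), 3))
```

Real systems like NEAT add speciation and crossover on top of this, but the mutate-and-select core is the same.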
Yes, obviously, as that's the topic, but also the rest of what I mentioned: neuroscience, maths, leaning on logic and philosophy.
> I remember similar arguments in the early 90s
That's why I mention neuroscience: the ideas are much older.
> even when a computer is super-human in every conceivable way, there'll be arguments over whether it is 'real AI'
Of course. Just because it's superhuman doesn't mean we humans would know what it is, whether it is what we think it is, or whether that's all there could be.
Real (from Latin res 'matter, thing', from PIE *reh₁is 'goods', plus the adjective suffix -alis) connotes worth, and obviously an AI is only as good as its competitors are bad. It won't be 'real' for long; it will be thrown in the trash once a better AI is found.
That'll stop when the AI can settle the argument convincingly. That's what's going on: evangelizing. And we do need that, if not for the sake of the art itself, then as proof for the application of its answers and insights in other fields.
> And nobody will define their terms then either. Ultimately the discussion will be as irrelevant then as it is now
LeCun has sure come a lot further since then, and he defines his terms in software. As I said, the discussion is just about what to make of it. Of course many come up basically empty; that's why the discussion is important, and that's why I asked what you know about it. I think it's a very basic question and not easy to grow tired of. If you work in the field, maybe that's different, and the question gets specialized to computation.
There might not be much to say about it; all the easier then to summarize in a short post. Or if there's indeed more to it, I'd appreciate a hint, to test my own understanding and learn.
I don't really know what LeCun talks about, or the techniques you studied, so I'm saying this just for perspective. I'm just generally interested in learning, and computation is just one relevant and informative angle. Maybe that's what bothers you, learning to learn, and why it's freshmen who bother with it; but learning to learn is maybe really just learning, or impossible. That's the kind of logical puzzle that's to be taken half joking. Don't beat yourself up over it.
As I have said elsewhere, I will agree we have achieved true AI when a program, uncoached, creates a persuasive argument that it is intelligent. One nice feature of this test is that you don't have to define intelligence precisely beforehand. Of course, that does not give a specification for developers to work to, but that is the case we have now, anyway.
Nice to know your criterion. There have been many. The trick is not to decide when you'll 'agree' (whatever your agreement is worth), but to form a consensus in the discussion.
I've known people who've been unable (or unwilling) to create a persuasive argument that they are intelligent, and I've known intelligent dogs unable to argue for anything, persuasively or not. I don't fancy your chances of having your definition become the standard.
The possible downside to appealing to consensus is that it doesn't always agree with you. That doesn't bother me in the slightest, but perhaps that's because I think people will recognize AI when it shows up.
It is the perfect argument: everyone can forcefully make their points forever, and we'll be none the wiser about whether this AI is 'true AI' or not.