The author gets it close to right when he talks about Google Maps as a form of AI, and the notion of raising the bar.
What's missing in this article, as in most reporting on AI, is the distinction between artificial intelligence and artificial consciousness, otherwise known as individuality or self-awareness.
To me there's a whole smoke-and-mirrors phenomenon going on in the AI topic, especially around the idea of "emerging AI" and the danger it supposedly poses. It's tied to the tendency we have as humans to anthropomorphize inanimate objects and to believe in supernatural effects.
That tendency allows the idea of artificial self-awareness to float behind the scenes in these conversations, and lets ordinary reporting on AI be magically conflated with a different topic.
It's important to realize that AI is nowhere near self-aware, conscious, or "awake", and won't be no matter how far the field and its implementations go. No matter how many Turing Tests they pass, intelligent machines will be no more conscious or self-aware than the Mechanical Turk!
That's because solving the problem of self-awareness, or consciousness, is a different engineering challenge than solving the problems of AI. Consciousness is a more complicated and more specialized thing.
Were we to build an artificially self-aware machine, we would not expect it to pass a Turing Test. Instead we might expect different things of it and ask different questions to determine whether it is self-aware: can it adapt and survive without human help, i.e. can it trap and store energy and reproduce itself, and what purpose does it find for itself, what objective does it pursue ...
These are things machines are capable of as well, but as I said: it's a different engineering challenge than producing information that is organized to be sensible to the human mind, which is the AI challenge, and the Turing Test.
That isn't to say machine learning isn't potentially dangerous, on the scale of atomic weapons or greater, especially in conjunction with automation. However, the idea of an artificially emergent consciousness with intelligence greater than our own is hogwash: we would do better to pay attention to our own emergent lack-of-intelligence systems and worry about them taking over first.
You've shifted the goalposts and erected strawmen so many times in this brief passage, I hardly know where to start...
> No matter how many Turing Tests they pass, intelligent machines will be no more conscious or self-aware than the Mechanical Turk!
I see. Well, this is just a rephrasing of the "Chinese Room" discussed in the article. Taken to its logical conclusion, I am certainly self-aware, but the rest of you are all just acting out complex behaviors encoded in chemical and electrical gradients, successfully mimicking consciousness.
I think that if any entity exhibits the behaviors associated with conscious thought, it would well behoove us to treat such entities as conscious, or we may very well find ourselves holding the short end of that particular stick sooner than we'd like.
> That's because solving the problem of self-awareness, or consciousness, is a different engineering challenge than solving the problems of AI. Consciousness is a more complicated and more specialized thing.
Since there is no doubt that ML/AI has a long way to go toward AGI, and along the way we can expect the discipline to evolve considerably in many unexpected directions, this assertion of yours is close to tautological.
> Were we to build an artificially self-aware machine, we would not expect it to pass a Turing Test.
Why not?
> Instead we might expect different things of it and ask different questions to determine whether it is self-aware: can it adapt and survive without human help,
So, anyone severely ill to the point that they cannot do without assistance is not conscious and self-aware?
> i.e. can it trap and store energy and reproduce itself,
So, a single-celled organism is conscious?
> and what purpose does it find for itself, what objective does it pursue ...
Ah, this seems a relevant criterion, but keep in mind that humans can be subjected to operant conditioning ("brainwashing") that imposes external goals. Not to mention that humans actually require a couple of decades of such conditioning (albeit rather more gradual and haphazard) before being considered competent members of society, yet we don't consider humans to be less conscious or less self-aware on either side of that particular divide.
> it's a different engineering challenge than producing information that is organized to be sensible to the human mind, which is the AI challenge, and the Turing Test.
Given that people have to be specially educated to produce information that is organized to be sensible to a computer, I don't see why an AGI, whatever its capabilities "out of the box", so to speak, shouldn't be expected to be capable of learning to be sensible to humans.
I am not sure we are going to be able to understand each other. I find your thoughts to be missing a foundation that I think is necessary to understand what I'm saying. I don't mean to be rude ...
Yes, of course a single-celled organism is conscious.
Exactly the way an amoeba is self-aware is how a self-conscious intelligent system would need to be to pose any kind of threat: organized to find energy sources and metabolize, replicate, etc.
I'll tell you: a single-celled organism is way more self-aware, and way more functionally complex, than any computer or software - in fact it's orders of magnitude more complex a machine.
That's my point: solving the problems that make a machine capable of producing intelligence that is sensible to you and me is not solving the problems that make a machine like a single-celled organism, which is to say vertically integrated from the atom upwards to be a self-sustaining, self-propagating energy trap.
A self-aware human who is disabled and can't live without the intervention of other humans can't self-sustain, and therefore won't pass the test of being able to self-sustain. But it's a test, and a single failure doesn't invalidate the hypothesis: it can still be a great test even if it fails a percentage of the time.
In general we know that all self-conscious organisms self-sustain, even social, super-organism ones that need each other to survive, so a criterion for a self-aware organism is that it be capable of self-sustaining. We don't even have a good test for that yet. But a test that would fail a perfectly self-aware disabled human wouldn't be a good one.
We could very well administer a Turing Test to an artificial consciousness, but my point is that it wouldn't be a very accurate test. A Turing Test only proves the accuracy of a facsimile of human intelligence. It proves nothing about self-conscious systems. An amoeba would fail it in an instant, as would a parrot or a dolphin - and if you tell me these organisms aren't self-aware and conscious then we are definitely not on the same page.
I could be wrong. I'm absolutely interested in anyone who can make a convincing argument otherwise, but until then I'm pretty certain that no emergent conscious machine will happen by accident. Rather, it would take a Manhattan Project or greater to produce an artificial consciousness on par in sophistication with an amoeba. And we don't have much motive to attempt it either, so I doubt we will do it anytime soon.