Current AI doesn't have any ideas about existing. That's something we'd have to add to future AI, which goes to the point of the article and its exaggerated claims about current AI.
Caring about one's existence (or even having the notion of a self) isn't something that comes about from crunching large data sets. ML isn't going to result in existential feelings or consciousness.
> Current AI doesn't have any ideas about existing. That's something we'd have to add to future AI ... isn't something that comes about from crunching large data sets
(Note: I'm a roboticist who spends a lot of time thinking about long-term autonomy (how a robot can go for months or years without human intervention), so that gives you a sense of where I'm coming from. Since my original comment was more of a question, I'll expound a little more here.)
I wasn't really asking in the context of current AI, but of "true" AI as the parent was talking about. They drew a distinction between how humans learn and how a "true" AI might learn based on the goals of each, and I question the idea that a "true" AI wouldn't have a notion of survival.
I think you agree with me to some extent, based on your other comment:
> A robot might someday be a self. It would certainly be advantageous for robots to avoid damaging themselves, and being safe around other robots and humans (treating them as selves). But how we go about making robots self-aware, and more problematically, how they have experiences is something nobody knows how to do.
I agree. I believe a robot would not only benefit from the idea of "self", but that it's fundamental to its design. We've built algorithms for a long time that operate in a very limited way. Robots break that mold in a number of ways, foremost being that they can move through their world, and even physically change it. Yet we're still insisting on building them the same way we build webapps or trading software.
There needs to be a notion of "inside" and "outside" just to implement things like collision avoidance. I think there will also need to be notions of "group" and "individual", as well as "self", "other", and "community", if we're going to call robots intelligent, since we see these notions in other species we regard as intelligent (whales, dolphins, chimps, etc.).
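To make the "inside"/"outside" point concrete, here's a minimal sketch (Python, with names and numbers I've made up rather than anything from a real robot stack): the robot carries a crude self-model of its own footprint, labels each sensed point as part of itself or part of the world, and runs collision checks only against the "other" points. Real systems use much richer body models, but the need for the distinction shows up even at this toy level.

```python
import math
from dataclasses import dataclass

# Hypothetical self-model: the robot's body approximated as a circle
# centred on its own pose. Anything sensed inside this footprint is
# treated as "self" (e.g. the robot's own arm showing up in a lidar scan)
# and ignored; anything outside is "other" and feeds collision avoidance.

@dataclass
class Pose:
    x: float
    y: float

@dataclass
class SelfModel:
    footprint_radius: float   # radius of the robot's body, metres
    safety_margin: float      # extra clearance to keep from "other" points

def classify_point(robot: Pose, model: SelfModel, px: float, py: float) -> str:
    """Label a sensed point as part of the robot's own body or of the world."""
    dist = math.hypot(px - robot.x, py - robot.y)
    return "self" if dist <= model.footprint_radius else "other"

def too_close(robot: Pose, model: SelfModel, points: list[tuple[float, float]]) -> bool:
    """Collision check against everything that is *not* the robot itself."""
    limit = model.footprint_radius + model.safety_margin
    return any(
        classify_point(robot, model, px, py) == "other"
        and math.hypot(px - robot.x, py - robot.y) < limit
        for px, py in points
    )

# Example: one sensed point is the robot's own chassis, one is a nearby wall.
robot = Pose(0.0, 0.0)
model = SelfModel(footprint_radius=0.3, safety_margin=0.2)
scan = [(0.1, 0.0), (0.45, 0.0)]
print(too_close(robot, model, scan))  # True: the wall point is inside the margin
```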
The question is, how do they get these concepts and what do they look like? I don't know. But I think they're essential, and I think they will be a barrier to AI if we keep treating them as optional instead of as innate parts of an intelligent creature.
I think we're huge technological leaps and bounds--I mean light-years--away from AGI so advanced it requires some form of computer psychology. But I do think it might eventually happen. And I don't think people are going to do a good job of preparing for it proactively. It's going to require a huge fiasco to motivate people to take that idea seriously.
So I agree that one day we may have to be careful how we treat AGI and maintain a system of ethics for the use of artificial, sensate, sapient beings. But that day is hella not today. Yet when it does come, it will probably still sneak up and surprise us. Like most black swans are wont to do.
Right now IBM is using people's confusion of AI with AGI to make sensationalist ads. But hey, most advertising around trending tech is sensationalist.
> I think we're huge technological leaps and bounds--I mean light-years--away from AGI so advanced it requires some form of computer psychology.
I think you're right about this. I think the current wave of anxiety about the capabilities of AI is fueled by non-practitioners who have no real idea how hard it's going to be.
I think it has to do with my conflation of intelligence and life. Life is another one of those things we have trouble defining, but I think you've hinted that one of the key characteristics of life is a desire to extend it.
Once we have a thing that seems to value its own self, then we can start talking about motivations for doing so, and I think that's one of the things that can elevate life to intelligence.
I think that consciousness is a collection of experiences and observations ('data'), nothing more. I am curious what led you here:
> Caring about one's existence (or even having the notion of a self) isn't something that comes about from crunching large data sets. ML isn't going to result in existential feelings or consciousness.
My opinion is that 'consciousness' by the human definition will be a spontaneous, emergent property once computers become complex enough.
> I think that consciousness is a collection of experiences and observations ('data'), nothing more. I am curious what led you here:
Consciousness is part of having an experience. Observations don't require consciousness, since they can be performed by measuring equipment, although one can get deep in the woods on whether an observation needs a mind to make it meaningful, and thus an observation rather than simply a physical interaction.
> I am curious what led you here:
Being a self isn't about crunching data, it's about having a body and needing to be able to distinguish your body from the environment for survival and reproductive purposes.
An algorithm has no body and thus no reason to be conscious or self-aware. That it's even an algorithm and not just electricity is an interpretive act on our part (related to the deep semantic debate over what counts as an observation).
A robot might someday be a self. It would certainly be advantageous for robots to avoid damaging themselves, and being safe around other robots and humans (treating them as selves). But how we go about making robots self-aware, and more problematically, how they have experiences is something nobody knows how to do. Saying it will emerge once the robot is sophisticated enough is the same as saying nobody knows how to make a machine conscious.
What makes you think AI wouldn't have similar ideas about wanting to continue its existence?