Interesting short story. The part on robots rings true, but one thing about parenting at the end did not ring so true to my ears:
"children grew up more attached to their robot nannies than to their actual parents. This was mitigated by making the robot nannies somewhat severe"
This reflects a very simplistic view of whom kids love. It is in fact wrong even to first order: kids actually love stricter parents and stricter teachers more (below a certain severity threshold, obviously; I am not talking about sadistic education here).
I experienced it myself as a kid: we respected and loved the French literature teacher much more, because he was severe and knew how to make himself respected. The German teacher thought that the softer she was, the better; she was bullied and hated.
I have experienced it with my own kids, and with the kids of friends. Each time I am severe with them, e.g. by not bending to their will, I gain points in their hearts.
If I had to find a way to make kids prefer their parents over their robot nannies, I'd say we should just let the parents keep, and visibly exercise, the power of decision, and therefore the power of empowerment. From my experience, kids love most those who can empower them to do new things, the ones who forbid and allow. A personal example: I forbid my 4yo son to touch my tools, but he is allowed to watch when I'm fixing something, and sometimes I ask him for help ("bring me the screwdriver"); he is also sometimes allowed to use some tools when it is safe and supervised. This is, I think, a great way to make him interested in tools and to let him feel empowered when he can use the usually forbidden screwdriver, and he clearly loves it.
>I forbid my 4yo son to touch my tools, but he is allowed to watch when I'm fixing something, and sometimes I ask him for help
This is hilariously reminiscent of my early childhood, so I can confidently speculate that your kid will also become an expert at putting things back just the way they were before you left the house.
I can see it happening either way. It's possible people will become more and more used to privacy being invaded and it just becomes normal, but it's also possible it eventually goes too far and swings in the opposite direction, which sometimes happens in politics.
I worked for him for a while, putting some of his papers in order for the Stanford library archives. His letters with Nash and Minsky were pretty interesting.
Just got lost in there for a very nice hour. Much more practical and to my taste than Kurzweil. Then I found this, a Kurzweil/McCarthy disagreement: http://www.edge.org/discourse/singularity.html
And to be fair, so far they've been right: no robotic AI remotely similar to this scenario is visible as even a distant blip on the horizon.
Their point that robots would have to deal with a continuously ambiguous world, and would lack anything like this fable's "general good purpose" module for resolving those ambiguities in a touchy-feely way, is entirely fair. Of course, the complexity of human interaction wouldn't appear suddenly in a single encounter with one drug addict; it would hit and crush any "real world" AI the moment it tried to get out the door.
A real-world AI would almost certainly have to learn the rules of its environment rather than being hard-coded with arbitrary human-designed rules. Machine learning is getting better and better at doing this.
If we ever got them as intelligent as the robots in this story, we'd have no way of programming them with abstract, high-level goals (e.g. "do good" or "don't hurt humans") except by giving them examples of robots hurting people and robots not hurting people and hoping they infer the pattern we want from them.
This is an (extremely) simplified argument for the dangers of AI.
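Roughly what that "show examples and hope" approach looks like, as a toy Python sketch. The features, labels, and the choice of a plain logistic-regression classifier are all invented for illustration, not a claim about how such a system would actually be built:

    # Toy illustration: "teach" harm-avoidance purely from labelled examples.
    # Every feature and label here is made up for the sketch; no small
    # feature vector really captures "this action hurts a person".
    from sklearn.linear_model import LogisticRegression

    # Each row encodes a hypothetical action as [force_used, human_nearby, consent_given]
    X = [
        [0.9, 1, 0],  # shoves a person out of the way -> harmful
        [0.1, 1, 1],  # helps a person stand up        -> not harmful
        [0.8, 0, 0],  # crushes an empty box           -> not harmful
        [0.7, 1, 0],  # restrains a struggling person  -> harmful
    ]
    y = [1, 0, 0, 1]  # 1 = "hurts a human", 0 = "does not"

    clf = LogisticRegression().fit(X, y)

    # The model now generalises from four examples. Whether that
    # generalisation matches what we *meant* by "don't hurt humans"
    # is exactly the ambiguity problem being discussed.
    print(clf.predict([[0.6, 1, 1]]))  # a consented medical procedure: harmful or not?

The point of the sketch is only that the learned rule is whatever the examples happen to imply, not the abstract goal we had in mind.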
The only "real world AI" example we have is us, human beings ourselves.
Humans manage both to learn from their environment and to learn by being told rules. A person would have a hard time demonstrating intelligence if they couldn't be instructed in things, so it seems like anything intelligent we construct would have to have both abilities too.
I suppose it's a natural overreaction for people to believe that if intelligence is not just rule-following, it must not be rule-based at all. I believe the truth is somewhere in the middle.
An intelligence that smart would likely understand what you are saying and what you want. That doesn't mean the AI would want to do what you tell it to do though.
Comparing it to humans: if you tell a person you want them to do something, it doesn't mean they will do it, even though they understand you.
If we train the AI the same way we do today, it would involve giving it examples of robots doing what they are told and robots failing to do that. That approach would likely fail because of all the possible ambiguities involved in interpreting meaning.
Other approaches, like giving a robot a reward every time it does something right and a punishment every time it does something wrong, might result in the robot killing its master and stealing its reward/punishment button.
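A toy sketch of that reward-button failure mode. The policy names and numbers are invented; the only point is "maximise the signal, not the intent":

    # The agent optimises a numeric reward; nothing in that number says
    # *how* the button ought to get pressed.

    def reward(world):
        # Reward is defined purely as "the button is being held down".
        return 1.0 if world["button_pressed"] else 0.0

    policies = {
        "do the chores and wait for the owner to press the button":
            {"button_pressed": False, "owner_unharmed": True},
        "take the button from the owner and hold it down forever":
            {"button_pressed": True, "owner_unharmed": False},
    }

    # A pure reward-maximiser picks the second policy; "owner_unharmed"
    # never enters the calculation because it isn't part of the reward.
    print(max(policies, key=lambda p: reward(policies[p])))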
>An intelligence that smart would likely understand what you are saying and what you want. That doesn't mean the AI would want to do what you tell it to do though.
I haven't yet seen any evidence that a concept like "wanting" or "desire" has any meaning outside the context of humans.
I agree that if we could produce an AI with various blind methods, it would likely be a dangerous thing.
I simply also doubt we could produce an AI in this fashion. I mean, you couldn't train a functional human by putting him or her in a room with just rewards and punishments.
I would note that even the animals of the natural world are constantly using signs to communicate with each other, and other functional mammals receive a good deal of "training" over time.
>I haven't yet seen any evidence that a concept like "wanting" or "desire" has any meaning outside the context of humans.
Those specific feelings/emotions, no. But AIs do have utility functions, or in the case of reinforcement learning, reward and punishment signals (which are themselves essentially a utility function).
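A minimal sketch of why a per-step reward signal plays the same role as a utility function; the names and numbers are purely illustrative:

    # A "utility function" scores an outcome once; a reinforcement-learning
    # "reward" scores every step. Both just map states to a number the
    # system is built to make large, which is the closest AI analogue of
    # "wanting" in this discussion.

    def utility(state):
        return state["goals_satisfied"] - state["costs_incurred"]

    def reward(state, next_state):
        # Per-step reward as the change in utility; summing it over time
        # recovers the same ranking of outcomes as the utility above.
        return utility(next_state) - utility(state)

    before = {"goals_satisfied": 0, "costs_incurred": 0}
    after  = {"goals_satisfied": 2, "costs_incurred": 1}
    print(reward(before, after))  # positive: the system is pushed toward "after"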
>I simply also doubt we could produce an AI in this fashion. I mean, you couldn't train a functional human by putting him or her in a room with just rewards and punishments.
Possibly. It's just an example to illustrate how difficult the problem of coding abstract, high-level goals into an AI is.
This story is strikingly simplistic in many ways, and it reeks of its author's nutwing conservatism (AFAIK McCarthy was a Randian libertarian). Only women have anything to do with babies, for a start, or know anything of diapers and such. I won't even comment on the implied views on poor single mothers, government, and politicians. Heck, this is interesting at times but quite shocking indeed.
"children grew up more attached to their robot nannies than to their actual parents. This was mitigated by making the robot nannies somewhat severe"
This reflects a very simplistic view on who kids love. It is in fact wrong even in the first order: kids actually love more severe parents or severe teachers (obviously, below a severity threshold, I am not talking about sadistic education here).
I experienced it myself as a kid: we respected and loved much more the teacher of French litterature, because he was severe and knew how to make himself respected. The German teacher tought the softer she was the better, and she was bullied and hated.
I experienced it with my own kids, and with the kids of friends. Each time I am severe with them, e.g. not bending to their will, I gain points in their hearts.
If I were to find a way to make kids prefer their parents over their robot nannies, I'd say we would just let the parents have and show they have the power of decision, and therefore the power of empowerment. From my experience kids love the most those would can empower them doing new things, the ones who will forbid and allow. a personal example is thusly: I forbid my 4yo son to touch my tools, but he is allowed to watch when I'm fixing something, and sometime I ask him for help ("bring me the screwdriver") and he is sometime allowed to use some tools when it is safe and supervised. This is I think a great way to make him interested in tools, and to feel empowered when he can use the usually forbidden screwdriver, and he clearly loves it.