Hacker News

I can't speak for Hawking, but Musk seems like a pretty disingenuous voice in this debate. He has a very strong financial interest in human spaceflight, so making robots and AI seem scary is in his interest. Space is best explored by robots, so Musk has to propagandize the opposite.

A nanny AI is the more rational outcome, considering that if we built an AI mind, we could control its parameters (make it non-violent, etc.). Fears of being enslaved are also positively archaic. Slavery, as an economic system, makes little sense: it carries more liabilities than assets. If you were to suddenly conquer the world, keeping the capitalist system in place would make the most sense. It confers a level of control and civility when done right. There's no need for a violent peasant or slave uprising when most everyone can find a job, rent or buy a home, start a business, and advance their career.

Sadly, I imagine these high-profile attacks on AI are having a chilling effect from a funding and research point of view. It's sad that instead of celebrating AI as something that could help humanity, we've demonized and dismissed it. It's a shame that billionaires like Musk can so trivially control public opinion, and that so few of us are skeptical of his hysterical claims.




> if we built an AI mind, we could control its parameters

A "mind" and being able to "control its parameters" could very well be mutually exclusive. So far, the minds we know are quite intractable and can only be affected in very crude ways (e.g. things that make you less violent also reduce cognitive skills).

> If you were to suddenly conquer the world, keeping the capitalistic system in place would make the most sense.

There is no single "capitalistic system" but many systems, some rather different from others. Besides, why does it matter that keeping the existing order "makes more sense"? Who says that an AI will be especially rational? It is quite possible that being intelligent and being completely rational are mutually exclusive. And even if it is rational, I'm not sure we can fully predict the interests of intelligent beings so different from ourselves. Much of what makes sense to us is a result of our being accustomed to living in a society, with all its consequences. The society the AI lives among might be very different from ours, and so might its assessment of what makes sense.

I spent some time writing about people living in poor, high-crime neighborhoods. Their society was very different from the one I grew up in, and as a result their behavior was different even when it was rational. For example, turning to violence was often a very rational, sensible choice on their part.

> Sadly, I imagine these high-profile attacks on AI are causing a chilling effect from a funding and research point of view.

I think this is all a PR stunt. We're many, many decades, if not a century or more, away from what people imagine when they say AI. Unfortunately, software can be very dangerous to society even without being considered AI[1], and few people talk about that.

[1]: http://www.slate.com/articles/technology/bitwise/2015/01/bla...


>So far, the minds we know are quite intractable, and can be affected in very crude ways

Depends on your definition of crude. With a virtual mind, you would probably have a good understanding of how things work and be able to edit them trivially. You can have a working mind without complete freedom, or with limited capacities. Look at accident victims with brain damage who, say, can no longer do math or suffer some other strange, specific deficit. It's plausible to selectively edit minds while still allowing an acceptable range of self-determination.

>A "mind" and being able to "control its parameters" could very well be mutually exclusive.

Maybe with biological beings, but there's no reason to assume that with artificial beings.

>We're many, many decades, if not a century or more, away from what people imagine when they say AI.

Except guys like Musk are helping push that number farther and farther into the future for their own selfish, short-term needs (selling rockets to the government) via politically charged anti-AI propaganda. If AI research is chronically underfunded and feared, then it will never happen, and a delay of decades isn't far-fetched. Imagine if there had been a big anti-computer backlash in the 1960s; we wouldn't have the toys we have today. It's important to keep a steady head, consider plausible scenarios, and stop feeding the anti-AI bullshit of "OMG SLAVERY!!!"

As for the economic argument, I'm still very skeptical that an AI would instantly turn to a discredited and inefficient economic system (slavery). It's laughable to even think it plausible. It just shows how easily guys like Musk can make people scared.


> With a virtual mind, you probably have a good understanding of how things work and be able to edit them trivially

We can't explain the inner workings of today's deep-learning neural networks (hell, it seems we can't even explain emergent behavior in a simulation game), and you think we'll be able to understand and edit a mind? All signs point to the contrary.

> but there's no reason to assume that with artificial beings.

But there is. We can't even fine-tune complex software that's a far cry from AI.

> Except guys like Musk are helping move that number farther and farther into the future

I seriously doubt that even the venerable Elon Musk could have such a profound effect on the progress of humanity.

> It just shows how easy it is for guys like Musk to make people scared.

Oh, Elon Musk scares me, but that has absolutely nothing to do with AI.



