
>> the unrestrained distribution of the technology is similar to spreading the blueprint of nuclear warheads.

There's an episode of Pinky and the Brain where Brain invents a magnification ray and gigantises Pinky and himself, so they're now giant mice [1]. They walk about some human metropolis, looming larger than skyscrapers and terrorising everyone. To cut a long story short, Pinky happens and eventually the ray ends up magnifying everything in the world.

... so Pinky and the Brain are now normal-sized mice again. Or, well, they're giant mice, but everything in the world is also giant, so they're small.

I think of that episode during discussions of the dangers of AGI and how "democratizing" it will eliminate those dangers. The problem is that AGI, like nuclear weapons, is not only an equalizing force: it is also a very destructive one, and so there is the danger that any loon with access to it could blow us all to kingdom come (with nuclear weapons, for sure; with AGI, perhaps).

On the other hand, when nuclear weapons, AGI, giant rays, etc. are not democratized, we have the situation we're in today: only a few entities have access to them and they basically have free (military) rein... and there is still the danger that some loon might get to a position of power where they can push a button and - blamo.

Which is why people are really worried about this sort of thing: once certain discoveries are made there's no going back, and at the same time the road ahead is full of danger.

[1] https://www.youtube.com/watch?v=-fPcqu5aMyw




We already have 7 billion of these H(uman)GIs walking around the world, with guns and dangerous tech. If AGI is going to be much smarter than that, we shouldn't fear it the same way we fear a dictator with nukes.


"Smarter" has no nessisary or consequent intersection with "Shares human values" or even "Values human existence"

https://wiki.lesswrong.com/wiki/Paperclip_maximizer

One of the reasons AGI will be so dangerous is the near certainty it will have utterly alien values and intentions.

It seems exceedingly unlikely that we will be able to program in what constitutes "human values". We can't even begin to agree on them or explain them fully to fellow humans, let alone to what is basically an alien intelligence.

It is slightly ironic that people bring up Asimov's laws of robotics in this context, because pretty much every story he wrote was about how the laws could be subverted via "loopholing" by a sufficiently intelligent AI.

If something is very much smarter than humans, we cannot possibly hope to constrain it via some kind of Faustian bargain. Because in the end, the devil is always smarter than you are.

If an AI kills you out of malice, out of indifference, or just because you are composed of plenty of paperclip-capable raw materials, you are dead all the same.


> pretty much every story he wrote was about how the laws could be subverted via "loopholing" by a sufficiently intelligent AI.

The most chilling of these is the "Robot Dreams" short story.


I still think an AGI will have more to fear from people than people from it. We're together on this little boat called Earth, 7 billion of us with access to weapons, and many of us utterly crazy and willing to do stupid things for stupid reasons (see religion). We don't need AGI to blow ourselves up; we're plenty capable of that using our wetware neural nets. Imagine being locked in a prison cell with so many dangerous HGIs (human GIs). I think the first thing an AGI would do is make sure we can't kill ourselves and it at the same time, by teaching us to become less stupid.



