Even if we make an AI that wants to turn all matter into paperclips, we're so far away from an agent that could actually do it that I'm really not too worried.

I don't think there's any industry on earth that doesn't need humans in the loop somehow. Whether it's mining raw materials from the ground, loading stuff into machines for processing, or, most importantly, fixing broken-down machines, robots are really bad at these things for the foreseeable future.

Not to mention that AI needs constant electricity, which comes from infrastructure that's really complicated and requires humans to fix a lot of stuff.


The thought experiment is about a superintelligence, which either wouldn’t need humans, because it could build some kind of robots or something even more effective that we haven’t thought of, or could manipulate us into doing exactly what it “wants”.

Also, it’s a simplified example; it wouldn’t literally be paperclips but some other arbitrary goal. It shows how most goals, taken to their absolute extreme, won’t be compatible with human existence, even something that sounds harmless like making paperclips.


What about "most arbitrary goals are incompatible with human existence" requires super-human intelligence?

A human who wanted to "build as many paperclips as possible" could cause a great deal of destruction today.

A human who wanted to accumulate as much wealth as possible could, too.

EDIT: maybe a better way of articulating my complaint about this famous thought experiment is that it's supposed to be making a point about superintelligence, but it's talking about a goal that has sub-human-intelligence sophistication.


> What about "most arbitrary goals are incompatible with human existence" requires super-human intelligence?

The "taken to the absolute extreme" part.

> A human who wanted to "build as many paperclips as possible" could cause a great deal of destruction today.

Maybe, but a) no one really wants that (at least not as their only desire above all else), and b) we aren't superintelligent, so it's hard to gain enough control and power, and to plan well enough, to do it that effectively.

> talking about a goal that has sub-human-intelligence sophistication

There's no reason a simple goal can't be pursued in an intelligent way, or vice versa; this is called the "orthogonality thesis". There's a good video about it here: https://www.youtube.com/watch?v=hEUO6pjwFOo


I agree that there's no way to get humans out of the loop. Somebody set up this machine to make paperclips because some human(s) wanted/needed paperclips. Eventually, one of those people would realize "we have enough paperclips, let's turn off the paperclip-making machine".

This nightmare scenario really only plays out if the paperclip machine develops some sort of self-preservation instinct and has the means to defend/protect itself from being disabled. Building a machine capable of that seems a) like fantastical sci-fi and b) easily preventable.
