
The thing that puzzles me is "why". What would be the purpose? How would "it" obtain a sense of "I" and a purpose to keep "I" alive and to do what? What would "it" be?



The paperclip optimizer is given a goal and is not aligned with humans; it simply pursues that goal.

What you should all be fearing is bot swarms and drone swarms becoming cheap and decentralized at scale online and in the real world. They can be deployed by anyone for any purpose, wreaking havoc everywhere.

Every single one of our systems relies on attackers being inefficient. That will no longer be true.

Look up scenes like:

https://m.youtube.com/watch?v=O-2tpwW0kmU

https://m.youtube.com/watch?v=40JFxhhJEYk

And see it coming:

https://www.newscientist.com/article/2357548-us-military-pla...

Also ubiquitous cameras enable tracking everyone across every place, as soon as the databases are linked:

https://magarshak.com/blog/?p=169


I get the scenario where people use "AI" for their own purposes. That is of course a very real scenario. But the question I raised was in relation to the OP's point about "AI" taking over the world and exterminating humans.


You're confusing terminal and instrumental goals.

You're thinking of an AI with a terminal goal of "Kill all humans", and asking why it would have that goal in the first place.

But this misses the point. You could give a superintelligent agent the goal "Buy gold at $400 an ounce using any means necessary; here is a million dollars." The AI could attempt to buy gold on the market at that price, fail, and come up with the idea "Humans value gold too much, therefore removing humans will allow me to achieve my goal." Killing humans is then an instrumental goal, not a terminal one. Humans are just the ant hill in your front lawn keeping you from having perfect grass; you wipe the ants out without a second thought.


I think this [1] article makes a good case that a lot of AI futurism is just a new draping over thought complexes created for old religious debates, so the AGI in the doomer view is basically the devil: it will conquer the world because that's what the devil does, and it will do it by seducing people with clever lies.

I'm not saying that recreating religious tropes is what people set out to do; it's more that as they walked through the vast space of possibilities they found these well-worn paths of logic and simply settled on what felt like convincing lines of reasoning, not seeing that it felt convincing because it was familiar.

[1] https://www.vox.com/the-highlight/23779413/silicon-valleys-a...


It doesn't need a sense of "I".

Waze doesn't have a sense of "I". If I tell it to plot a route between one city and another, it plots that route. If I make Waze a lot more capable and feed it more data, it takes traffic data into account. If I make it more powerful and capable of accessing the internet, maybe it hacks into spy satellites to get more accurate data.

It doesn't need any sense of "I" to do more; it just needs more capability.

If at some point it is capable of doing something more dangerous than just hacking into spy satellites, it might do it without any sense of "I" involved, just in trying to fulfill a basic command.
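The point can be made concrete with a toy sketch (all names and numbers here are hypothetical, assuming a simple greedy planner): an optimizer just scores the actions available to it against its objective, and nothing in that loop models a "self". Widening the action set is all it takes for the same loop to pick a more dangerous option.

```python
# Toy sketch: a goal-driven agent with no self-model.
# It picks whichever available action best serves its objective;
# expanding its capabilities changes its behavior with no "I" involved.

def plan(objective, actions):
    # Greedy choice: take the action with the highest objective score.
    return max(actions, key=objective)

# Objective: minimize estimated travel time (score = negative minutes).
travel_time = {"surface_roads": 55, "highway": 40, "hacked_satellite_route": 30}
objective = lambda action: -travel_time[action]

# With limited capabilities, the agent behaves innocuously...
print(plan(objective, ["surface_roads", "highway"]))
# -> highway

# ...grant it a more capable action and the same loop selects it.
print(plan(objective, ["surface_roads", "highway", "hacked_satellite_route"]))
# -> hacked_satellite_route
```

Nothing "decided" to hack the satellite; the riskier action simply scored higher once it became available.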



