
It is a disproof by counterexample. People claim that if we couldn't trust a superhuman AI to be free in the world, we could just lock it in a box and ask it about cures for cancer and so on. If all it can do is talk, what harm can it do?

They say all we need to do is say "no" if it asks to be released.

This experiment shows that if you can't resist the persuasion of a mere human locked in a box, you don't stand a chance against a superhuman AI.




