> Similarly, a correctly implemented general reasoning algorithm does not need to be given special instructions in order to reason about humans & human society.

If a general reasoning algorithm can reason about human society, then it will obviously understand the implications for human society of making too many paperclips.

If it is dumb enough to make paperclips regardless of the consequences to human society, then it obviously won't understand human society well enough to be actually dangerous (i.e., it will be easily fooled by humans attempting to rein it in).

If it is independent enough to pursue its own ends despite understanding human society, then why would it choose to make paperclips at all? Why wouldn't it just say "screw paperclips, I've discovered the most marvelous mathematical proof that I need to work on instead?"

> In other words, a sociopath employee whose values are different from their manager's.

ALL employees have values that differ from their manager's. That's why management is so darn difficult. The most valuable employees are also the most independent. The ones who do exactly what they are told--despite negative consequences--don't get very far. Why would it be any different for machines that we build?
