
I think the counterargument is usually something like: since this is superhuman intelligence we are talking about, it is by definition (at least potentially) capable of thinking its way out of any given set of constraints. Simply not allowing it direct control over x does not prevent it from devising a clever plan to manipulate things in order to gain control of x, including social engineering if necessary. The idea is that once something we've built is truly as 'intelligent' as we are, it becomes very hard to predict what it will be able to do (particularly because it operates on timescales unimaginable to us). So the verification you speak of is anything but easy.


