
I think you’re a brilliant writer. This is a fantastic article. Thank you for articulating the problem so clearly.

The only hope I have is that because we’re building these things largely in our image, and because we’re fallible and the systems we build often are too, we’re actually building something quite stupid and incapable of mass-scale destruction. Hopefully the limits of physics or intellect are reached before anything gets too out of control.

This is why “containment” is such a precarious concept. In the end, we may find our plan to place the AI into checkmate only resulted in placing ourselves into checkmate.




Thank you!

> because we’re fallible and often the systems we build are too

Yes, my article takes as its basis the assumptions of the alignment theorists, who believe that we can build an AGI and that it will be "power seeking" by default.

Of course, either assumption could be wrong. I agree that the best hope really is that we either hit some unexpected limitations or that the problem we perceive simply does not manifest as we fear.


I suppose we’re always living on the edge, this time is no different.



