
It seems impossible that an AI could trick a human into "letting it out of the box". But you might find this interesting: https://rationalwiki.org/wiki/AI-box_experiment

(With sufficiently nasty psychological tactics, it is apparently feasible, even for a human, to make someone fail at a game whose only objective is "don't release the AI".)
