
Have we not all seen a version of this at some point on reality tv, namely Big Brother?



The characters are artificial, indeed, but not particularly intelligent ;-)

Care to elaborate a little more?


Quoting liberally from the post, with minor modifications.

Like the AI-Box, the show is improvised drama (as is most reality TV), and it operates on an emotional level as well as a logical one. It has characters, not just plot. The contestants cannot force others to keep them in the house. They could try to engender sympathy or compassion or fear or hatred, or try to find and exploit some weakness, some fatal flaw, in the other contestants.

Since the post is ultimately about 'transhuman artificial intelligence', focussing on the 'relativity' of the contestants' intelligence, the show rewards the contestant with the intelligence best suited to surviving in that artificial environment.

The contestant only needs to convince the others of their right to stay in the house while requiring others to leave.

The AI requires just the opposite of the gatekeeper. This is specific to the environment set up in the experiment; the experiment could probably be reversed so that the gatekeeper's primary function is to prevent the AI from entering the box.

The escape from the box has been set up to play on the human fear of a being of greater intelligence exploiting us.


I would like to note that the fear of something that could annihilate the human race is not an unreasonable one.


Homo sapiens will eventually be superseded by its descendants. A sufficiently intelligent super-human AI could annihilate us as we did the Neanderthals, but let's not forget we may be anthropomorphizing it a bit too far here.

It could also be no more interested in us than we are in the yeast we use to make bread.

Our survival depends on how annoying we are to them ;-)


Whether it's reasonable or unreasonable should not depend primarily on how frightening it is to you.


I'm sorry, I think the prospect of human extinction should be frightening to just about everybody.

If we do eventually develop a trans-human AI, it's a virtual certainty that it will escape its "box". Whether it would kill us all is unknown. However, it definitely could, and we would be effectively powerless to stop it.

"Reasonable" is a measure of risk tolerance. Since the downside risks associated with trans-human AIs are effectively infinite, fear of those risks is always reasonable.

Corollary: Since the upside risks are also unbounded, greed is also reasonable.

Frankly, though, I'd rather not roll those dice if we can avoid it.


If we can bring a new intelligence to life, do we have the moral right to refrain from doing so? And would it be right to confine it to the box mentioned in the article while using its smarts to do useful work outside it? Shouldn't a trans-human AI be entitled to a right to life?



