No matter how smart an AI gets, it does not have the "proliferation instinct" that would make it want to enslave humans. It does not have a concept of "speciesism", of itself having more value than anyone else.
An AI does not see value in being alive. Sadly, some humans come to feel the same way and commit suicide. But a machine wouldn't care. It will be "happy" to do its thing until somebody cuts off the power. And it does not even care whether somebody cuts off the power or not. It's all the same to it whether it lives or dies. Why? Perhaps because it knows it can always be resurrected.
Well, I don't really know anything about the future. I was just trying to be a little polemic, saying let's try this viewpoint for a change, to hear what people think about it.
> No matter how smart an AI gets, it does not have the "proliferation instinct" that would make it want to enslave humans.
If it has a goal or goals, surviving allows it to pursue those goals. Survival is a consequence of having other goals. Enslaving humans is unlikely. If you're a superintelligent AI with inhuman goals, there's nothing humans can do for you that you value, just as ants can't do anything humans value, but they are made of valuable raw materials.
> It does not have a concept of "speciesism", of itself having more value than anyone else.
What is this value that you speak of? That sounds like an extremely complicated concept. Humans have very different conceptions of it. Why would something inhuman have your specific values?
Sure, it need not have the instinct built in, but we could try to make it understand a viewpoint, right? I believe an AGI should be able to understand different viewpoints, at least the rationale for not unnecessarily killing things. I know humans do this on a daily basis, but then again the average human is not as smart as an AGI.
Right, but the "proliferation instinct" is not a viewpoint; it is something built into the genes of biological entities. Such an instinct could develop in "artificial animals" over time. At that point they really would be conceptually no different from biological things.
I'm saying that the AIs we envision building for the foreseeable future are built in a laboratory, not through evolution in the real world out there, where they would need to compete with other species for survival. Things that only exist virtually don't need to compete for survival with real-world entities.