> All that matter is we (I), feel/think that it is conscious, or deserving of care and respect.
It does matter, I can agree, but it isn't all that matters. People can be mistaken, and they can disagree. Suppose you cared for a particular AI and I didn't: should we cut the power to the machine running it? I can complicate it a little if you wish, by adding a kitten that would die a painful death if we didn't cut the power.
We need some objective means of measuring what is conscious. We have a heuristic for people: "human life is sacred, full stop". There are corner cases where it doesn't work well (like euthanasia), but we are used to it. There are other heuristics we generally agree on, like caring more about kittens than about grass, and more about grass than about amoebas.
With AI we'll face more of this, and we have no idea where to place it among our heuristics. People have done badly before, like treating black people as non-human. It would be a shame to repeat those mistakes without making an attempt to do better this time.
> I think what you are afraid of here is agency, which is something that might be dangerous to endow a super intelligent being with.
I cannot speak for others, but I am not particularly afraid of that. I do not much fear losing it all to a superintelligent and conscious being; that would be a great achievement for humanity, one which would fit nicely with all this evolution business. It is the paperclip scenario I do not like much.
One of the fundamental differences between human life and silicon-based AI is that biological organisms can't recover from a system shutdown. If you suffer heart failure or go without air for an hour or starve to death, bacteria start to eat your brain and you're irreversibly destroyed. If you cut power to an AI and then come back in a year, it's all still there. It's not a death, it's sleep mode.
It also doesn't meaningfully age or feel pain. If you expose a human to trauma, that's a permanent scar. If you do the equivalent to a computer program and the result is undesirable, it can roll back to a previous snapshot. Most of what causes us to have sympathy for living things or treat them with compassion just doesn't apply.
What I'm trying to say is that our means of keeping the moral high ground are subjective and based on heuristics. It seems to me that you don't notice this; let me show you.
> If you cut power to an AI and then come back in a year, it's all still there.
Does an AI's current state stored in volatile memory not matter? Or does it? Should we avoid turning off only those AIs which store their weights in DRAM?
> Most of what causes us to have sympathy for living things or treat them with compassion just doesn't apply.
I believe slaves didn't trigger sympathy in slave-owners, but it doesn't stop us from believing that slavery is bad. I admire that you are not like this and that your sympathy extends to all living things, but it is your subjective way of deciding what is moral and what is not. Other people may feel differently; what should they do to be no less moral than you? Or can you and I become even better and hold even higher moral standards?
> If you do the equivalent to a computer program and the result is undesirable, it can roll back to a previous snapshot.
If we could roll people back to a previous snapshot after burning them to ashes on a bonfire, would it be OK to burn people and then restore them?
Questions like this may be impractical (because we cannot restore a human burned to ashes), but our hesitation to answer them shows the limitations of our ways of thinking about such problems.
Humanity could benefit a lot from an objective way of dealing with moral dilemmas, based not on heuristics but on universal laws, the way physics is. It might help people understand each other and find ways to live together without fighting. I'm not sure that morality can be objective and based on a universal law, but that is not a reason to stop thinking. When you think about it, you find new corner cases and specific solutions to them. At the very least, it makes your heuristics better.
> Does an AI's current state stored in volatile memory not matter? Or does it? Should we avoid turning off only those AIs which store their weights in DRAM?
Is this supposed to be hard to distinguish? Destroying something is clearly different. But you could still "shut down" an AI that normally keeps its state in volatile memory by saving that state to non-volatile memory. We don't know how to do that with humans.
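As a rough illustration of what "saving the state to non-volatile memory" could mean, here is a minimal sketch in Python. The state dictionary, the file name, and the use of pickle are assumptions made for the example, not a description of how any particular AI actually works:

```python
import pickle

# Hypothetical in-memory state of an AI: weights, context, etc.
# (placeholder values for illustration only).
state = {
    "weights": [0.12, -0.98, 0.45],
    "context": ["previous", "messages"],
}

# "Shutting down" without destroying anything: persist the volatile
# state to non-volatile storage before cutting power.
with open("ai_checkpoint.pkl", "wb") as f:
    pickle.dump(state, f)

# Later (even a year later), restore exactly where it left off.
with open("ai_checkpoint.pkl", "rb") as f:
    restored = pickle.load(f)

assert restored == state  # nothing was lost in the shutdown
```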
AIs are also different because they're often minor variants of each other. The value of information lies largely in how much it diverges from what continues to exist. For copyable data, minor forks can't be valued as highly as major ones. We don't have the resources to permanently store everything that is ever temporarily stored in memory. So the answer to "can you destroy a minor variant" has to be yes as a matter of practicality.
Notice that this is already what happens with humans continuously. You're not the same person you were yesterday; that person is gone forever.
> I believe slaves didn't trigger sympathy in slave-owners, but it doesn't stop us from believing that slavery is bad.
I don't think the people in whom it didn't trigger sympathy thought it was bad. Some people are sociopaths. And some people at the time it was happening did have sympathy and did think it was bad.
> If we could roll people back to a previous snapshot after burning them to ashes on a bonfire, would it be OK to burn people and then restore them?
If we could roll people back to a previous snapshot then what you would be burning is meat. There are reasons you might want to prohibit that, e.g. because the meat is someone else's property, but it's no longer the same thing at all as murdering someone.
If in the future we developed technology that enabled effective "backup&restore" for human (and animal) minds, would that change your reasoning for this argument?
And it was as cheap and easy as it is on a computer? It would change how we deal with almost everything. All of the social structures we have around preventing people from getting hurt would be irrelevant because damage could be undone. No one would have an experience they didn't choose to have. Murder would be a crime on the level of vandalism or destruction of property. "Human life is sacred" would simply not be true anymore.
> And it was as cheap and easy as it is on a computer?
And yet most people still don't take backups of their data on a computer. Frequently, cost is the problem for the average Joe. Ergo the lower class would not infrequently suffer permadeath.
This isn't even getting into how often backups fail to restore.
By your logic, if I bring down your business's computer system and vandalize your homepage, but you still manage to restore a backup, are you not going to sue for damages and so on? People go to jail for cybercrime, even if the damage can be undone. Why would murder be any different, even in a world where it was hypothetically an inconvenience?
> And yet most people still don't take backups of their data on a computer. Frequently, cost is the problem for the average Joe. Ergo the lower class would not infrequently suffer permadeath.
People don't back up the data on their computer because it generally isn't all that valuable, not because backups are expensive. A $30 USB hard drive amortized over five years is $0.50/month. If it was a matter of life and death, no one would go without as a matter of cost, and governments could plausibly offer it to everyone for free even if it cost ten times as much to provide a high level of redundancy and availability.
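For the sake of the arithmetic, a quick back-of-the-envelope check (the $30 price and five-year lifespan are just the figures above, nothing more):

```python
# Amortizing a consumer backup drive over its assumed lifespan.
drive_cost = 30.0         # USD, one $30 USB hard drive
lifespan_months = 5 * 12  # five years

cost_per_month = drive_cost / lifespan_months
print(f"${cost_per_month:.2f}/month")  # -> $0.50/month
```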
> People go to jail for cybercrime, even if the damage can be undone.
Because it's a crime on the level of vandalism or destruction of property (or ought to be; some of the penalties can be quite excessive). It is not a crime on the level of murder, and murder wouldn't be either if it could be undone.