It does seem to presuppose agency on the part of the machines being studied. We aren't worried about making sure all of our other machines are ethical. Why not?
That is, we should maintain that the people involved are held to ethical standards. But I don't see that being up for debate. Is it?
> We aren't worried about making sure all of our other machines are ethical. Why not?
As more machines autonomously make decisions and exhibit emergent behavior, while also directly interacting with spaces shared with humans, maybe we should be worried about it.
The ethics are in building the machines, though. Consider: we don't worry about the ethics of bio-weapons from the perspective of the weapon. Rather, we say it is unethical to build such weapons. (Right?)
This doesn't change just because we could build a nanotech (or other) weapon that selectively kills people.
I disagree, particularly if the machines are making their own decisions and exhibiting emergent behavior.
I don't think only weapons should be considered. I think there will be many unintended outcomes from AI that was never meant to hurt anyone but does so nonetheless.
But this is somewhat nonsensical. Is a slaughterhouse ethical because no kids are allowed to walk into it? Is it unethical because it would kill a person in the wrong place?
It would be unethical to build a slaughterhouse that traveled around and had the potential to enter a populated area. Even then, it seems odd to call the slaughterhouse itself unethical, when it is the building of the machine that is the problem.
If you are giving agency to the machine, then you want to teach the machine ethics. At some point, I can see that happening for "intelligence." Not for "intelligent machines," though.