We do make moral choices, and there are rules and heuristics we use. They might be quite complicated, and they might not be what we think they are, but I nonetheless think it should be possible to predict human moral decision-making quite well with an accurate enough model.
And since autonomous vehicles will have to make decisions with moral implications, they had better make them in a way that humans will be happy with. I think this is an important area of research. This doesn't mean a machine will have morals of its own, whatever that means, but that it should do what (most?) humans would consider morally right. And what do humans consider morally right? Well, that is exactly what we should try to find out.
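To make the claim a bit more concrete, here is a minimal sketch of what "predicting human moral decisions with a model" could look like: fit a simple classifier to survey judgments about traffic dilemmas and check how well it generalises. Everything in it, the features, the (simulated) data, and the model choice, is an illustrative assumption on my part, not a description of any actual study.

```python
# Minimal sketch (hypothetical): fit a simple model to survey-style data where
# people labelled traffic dilemmas. The feature names, the simulated dataset,
# and the choice of model are all illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy stand-in for survey responses: each row is one dilemma, encoded as
# [passengers_at_risk, pedestrians_at_risk, pedestrian_crossing_legally].
X = rng.integers(0, 5, size=(500, 3)).astype(float)
# Toy stand-in for the majority human judgment (1 = "swerve", 0 = "stay course").
y = (X[:, 1] > X[:, 0]).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# How well a simple model recovers the (simulated) human judgments on held-out dilemmas.
model = LogisticRegression().fit(X_train, y_train)
print("held-out agreement with simulated human judgments:", model.score(X_test, y_test))
```

The point of the sketch is only the shape of the research question: collect human judgments, fit a model, and measure how closely it tracks what people actually say is right.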