> Utilitarianism actually leads to repugnant conclusions everywhere, and you can find repugnancy in even the smallest drop.
The argument here is quite agreeable: morality cannot be reduced to a scalar, universal morals don't exist, and utilitarianism is not the endgame.
But what if we trained an AI to be moral? We wouldn't even specify the objective function, but would somehow let it figure things out for itself. Perhaps it could understand us better than we can hope to understand ourselves.