
I am not sure such utilitarian math is the right approach; attempts to put a number on the value of a human life all seem somewhat silly. I would rather say that the failure of a robotic mission costs some amount X, while the failure of a human mission intuitively has an infinite price.



Conversely, setting the cost of a human life to infinity quickly breaks down with the trolley problem. In any scenario where either group A or group B is going to die, whichever group is bigger lives, even when that doesn't make sense, e.g. group B is all very old and group A is very young but only slightly smaller.
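
To make that comparison computable at all, you have to back off from literal infinity to some equal finite value per life. A toy Python sketch, with hypothetical group sizes, shows how headcount becomes the only thing the math can see:

    # Toy utilitarian comparison: every life gets the same finite
    # value V (an assumption; the group sizes are hypothetical too).
    V = 1.0
    group_a = 5   # very young, slightly smaller
    group_b = 6   # all very old
    # Steer the trolley toward whichever group costs less to lose:
    spared = "A" if group_a * V > group_b * V else "B"
    print(spared)  # -> B: headcount alone decides; age never enters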


Trolley problems are an exercise in utilitarianism, and maybe consequentialism more generally. If those are your views, then you can probably put a number on it. Other views are available, however. (See the sibling comment mentioning deontology.)


To be fair, that's not really a breakdown; it's the point of the trolley problem. Setting the value of a human life to infinity basically gives you deontology, in that both outcomes are approximately equally bad, and thus choosing to get involved at all is unethical.
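
For what it's worth, ordinary float arithmetic agrees: with a literally infinite per-life value the two totals compare equal, so the math gives you no reason to pull the lever. A minimal Python sketch, again with hypothetical group sizes:

    # With an infinite value per life, both outcomes cost the same,
    # so utilitarian math offers no basis to intervene.
    V = float("inf")
    cost_if_a_dies = 5 * V   # hypothetical sizes
    cost_if_b_dies = 6 * V
    print(cost_if_a_dies == cost_if_b_dies)  # True: equally (infinitely) bad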



