This seems like simply another version of utilitarianism, but with a nonlinear utility function.
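A minimal sketch of that reading, with made-up utilities: the priority view amounts to summing each person's utility after running it through a concave weighting function (a square root here, purely for illustration).

    import math

    def utilitarian_total(utilities):
        # Classic utilitarianism: add up raw utilities.
        return sum(utilities)

    def prioritarian_total(utilities, weight=math.sqrt):
        # Priority view as utilitarianism with a concave transform:
        # the same utility gain counts for more when it goes to someone worse off.
        return sum(weight(u) for u in utilities)

    # One extra unit of utility for the worse-off person (1 -> 2)
    # vs. the better-off person (100 -> 101):
    print(utilitarian_total([2, 100]) - utilitarian_total([1, 100]))    # 1.0
    print(utilitarian_total([1, 101]) - utilitarian_total([1, 100]))    # 1.0
    print(prioritarian_total([2, 100]) - prioritarian_total([1, 100]))  # ~0.41
    print(prioritarian_total([1, 101]) - prioritarian_total([1, 100]))  # ~0.05

Plain utilitarianism is indifferent between the two transfers; the concave weight makes the gain to the worse-off person count for more.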



It seems that the priority view may feel intuitive because people fail to internalize that diminishing returns are already baked into the utility gained from a benefit, and so they end up inserting them a second time.
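For instance (numbers made up, using log-utility of wealth as a stand-in): the diminishing returns from a resource are already captured when the benefit is converted into utility, so putting a concave weight on top of utility discounts the well-off twice.

    import math

    # Diminishing returns are already baked in at this step:
    # the same $100 yields less utility the richer you are.
    def utility_of_wealth(wealth):
        return math.log(wealth)

    poor_gain = utility_of_wealth(1_100) - utility_of_wealth(1_000)      # ~0.095
    rich_gain = utility_of_wealth(100_100) - utility_of_wealth(100_000)  # ~0.001

    # The priority view then applies a second concave function to these
    # *utility* gains, i.e. it inserts diminishing returns a second time.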


I agree, although you could turn it on its head and say it's also a good example of how utilitarianism loses its explanatory power: it can explain away anything just by changing the utilities.

The problem with utility has always been in defining the utilities.


And a lack of knowledge about prospect theory.


How does prospect theory apply to the scenario as proposed?


The article states: "you might consider a small amount of happiness for someone who is depressed to have more moral value than a larger amount of happiness for someone who is already fairly happy". To me this looks like the prospect theory value curve: the depressed friend is below the reference point, in the loss region, so you assign greater value to their happiness gains regardless of the real utility.
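A sketch of that curve using the standard Kahneman and Tversky value function with their published parameter estimates (alpha = beta = 0.88, lambda = 2.25); reading "depressed" as sitting below the reference point is my gloss, not the article's.

    def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
        # Kahneman/Tversky value function: concave for gains, convex and
        # steeper (loss aversion) for losses, measured from a reference point.
        if x >= 0:
            return x ** alpha
        return -lam * ((-x) ** beta)

    # The same +1 of "happiness", starting below the reference point (-5 -> -4)
    # vs. above it (+5 -> +6):
    print(prospect_value(-4) - prospect_value(-5))  # ~1.65
    print(prospect_value(6) - prospect_value(5))    # ~0.72

The steeper loss branch means the identical gain is valued more than twice as highly when it comes from the loss side of the curve.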



