An underrated idea: the priority view (atis.substack.com)
159 points by akbarnama on Dec 10, 2021 | 166 comments



This might be a bit of an odd comment in this kind of thread, but when reading this article, the concept of the Priority View discussed here reminded me of an episode from the TV show Malcolm in the Middle.

I definitely remember some of the plot details wrong, but I believe that one of the teachers hated Reese, a very dumb loser, and was purposely trying to fail him. So his brother Malcolm, a very gifted student, cheated on behalf of Reese to save him. They got caught, and the teacher presented Malcolm's mom with the choice of which child's future to save: either report Malcolm for cheating, possibly ruining the future of the only one in the family with bright prospects, or just fail Reese and let a loser with no future get a head start on failing at life. She unhesitatingly said that she would sacrifice Malcolm to save Reese, because Malcolm would land on his feet and be ok no matter the circumstances.


Ah I remember that one; the point was that the teacher was giving Reese bad grades no matter what he submitted. This became clear when he cheated by handing in something that Malcolm had written and still got an F (I think). Lois demanded he give Reese a good grade or she'd out him, even if it made Malcolm look bad, because Malcolm would do fine in life either way.

That show had some great plotlines.


Your memory is very close. Here is the relevant scene: https://m.youtube.com/watch?v=pU-uZztJEcQ


Reese would have failed regardless. Not sure saving him in this instance would have made a difference. In fact, letting him fail early might have been better.


I realise that this is a thought experiment, so refuting the premise is beside the point, but... I feel the need to call out the casual assertion that

> The suburbs has [net] benefits for your gifted son

The zeitgeist has a knee-jerk belief that children are best raised in the suburbs, and I think it's important to assess that belief critically. Children can and do thrive in cities. To pick just one example: children, especially teenagers, have a lot more agency and independence when they live in walkable neighborhoods with a robust public transit network. In the typical suburb, anything beyond playing with friends who live on the same street requires a parent to drive them around. This harms the utility of both the children and the parents.


I actually think the housing situation in America is very directly linked to the teenage mental health crisis. From my personal experience: when I was under 18, my entire life revolved around school (which was 90% lectures and 10% socializing) and taking the bus home (the nearest other home was ten minutes away door-to-door), then being alone at home. The only other transit options were a bizarre after-school bus that didn't get me home until 7, or a ride from my parents.

In such an environment, it makes sense kids would feel isolated and alone. In my present life my work/home/play is much more balanced. I have multiple social areas within walking distance of my apartment, and the activities during 'working hours' are a mix of learning, meetings, and productive work. I know I'm a data point of one, but I can't imagine living in the suburbs and NOT being depressed.


America has also changed its opinion about whether children should travel around town alone. It used to be that children could ride their bikes around town, knock on doors, and find kids to play with, and it would be okay as long as you were back home by dinner.

Parents today are worried about their kids being hit by cars, getting lost, getting abducted by strangers, or simply having something happen to them that gets CPS involved. If something happens to my kid, then I'm the bad parent. Anecdotally, I saw this happen in my town when a 7-year-old girl was taken in her own front yard. Parents chimed in online to say, "The mother should go to jail" and "I always know where my kids are at all times."


Unfortunately "suburb" is an overloaded and ill-defined term that can mean anything from "pre-war streetcar suburbs laid out on a grid, with a main street, walkable amenities, and (at least here in the Northeast, if not everywhere) access to commuter rail" to "a modern cul-de-sac development attached to a high-speed arterial that's walkable to exactly nothing."


I think this speaks to the failure of utilitarianism, at least in my mind - notions of "utility" are rooted in subjectivity and subject to chaotic forces in the future. Utilitarianism still remains useful as an ethical and executive framework, but it must be easily preempted by other ethical concerns, as in the article, and applied with awareness of the inherent uncertainty in future outcomes.


Utility is subjective? We have instincts around what to consider useful or not. These instincts have been calibrated by evolution for survival. Survival is pretty objective, don't you agree?


In spite of that, not everyone has the same instincts on what is useful or not. Therefore utility is subjective. Beyond survival there is much else to disagree about.


Like the "prisoner" example in the prisoner's dilemma, I feel this example adds a lot of baggage that makes things harder to understand or easier to misunderstand.

If we boil it right down: would you give 11 points of utility to a random billionaire or 10 points to a random poor person?

Most people would feel the 10 units would do more good, but by the terms of the setup that's actually impossible, since 11 is more than 10. Yet most people would probably give to the random poor person if we were talking about 100 dollars, not 100 util points, which presumably translates to many more utils for the poor person.

I think many people would also be "prioritarians" because they think it maximises utility, but this example confuses matters, as there are second-order effects on other people's happiness to consider.

I get the feeling people will answer the second question when asked the first, though they're quite distinct: one is easily graspable and the other very abstract.

If you try to convert back from slightly more util points for the rich person to cash then it might translate into "would you give a poor person .0001 of a penny vs give a rich person 100 dollars" where the utility of each is closer and the choice seems kind of irrelevant because neither really benefits.

Once you scale it to something that the poor person would notice, you'd need to give many millions to the billionaire, at which point the diminishing returns kick in. So you end up with something absurd like giving a poor person 5 dollars versus giving the billionaire 5 trillion, at which point you'd be creating so much extra utility out of thin air that some of it is maybe going to spill over to others (or lead to the billionaire becoming King of the World and/or God, with negative effects).

Which still doesn't really shed much light on things as you're now imagining how all that extra util wealth will impact others in a complex sequence of utility impacts and transfers.
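To make the dollars-to-utils conversion concrete, here's a minimal sketch, assuming logarithmic utility of wealth (a standard illustrative assumption, not something from the article):

  import math

  def utility_gain(wealth, gift):
      # marginal utility of receiving `gift` dollars, under log utility of wealth
      return math.log(wealth + gift) - math.log(wealth)

  print(utility_gain(1_000, 100))          # poor person: ~0.095 utils
  print(utility_gain(1_000_000_000, 100))  # billionaire: ~0.0000001 utils

Under that assumption, only gifts proportional to existing wealth produce equal utility gains, which is exactly the "give the billionaire trillions" absurdity above.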


I think your analysis is correct for many people, myself included. Even though I know the premise of the thought experiment is that the gifted boy receives more utility, that is just too hard to believe to take seriously.

Your billionaire example makes the situation cleaner and less influenced by my inability to accept the facts of the situation. If choosing between giving Bezos $100 or some dude $0.00001, I would probably just give the $100.

The even cleaner version of this problem is the utility monster mentioned in the article.


My layman thoughts:

The concept of utils as a common denominator for people's value judgements or well-being is a dead end. Economists use utils, but only for working with ordinal preferences in math.

I do think that ultimately, people are guided by consequences even when following strict principles. E.g. not negotiating with terrorists leads to better outcomes in the long run.

Time is an essential component in these deliberations.

Human action is decided by people's preferences. Preferences are a list of desired outcomes. They are ordinal. They are personal to each individual. Valuations can be thought of as moving up the list of desired outcomes. The most desired alternative has special names: goals and ends.

Since preferences are ordinal they can't be summed. This gives rise to the concept of marginal utility and opportunity cost.

Buying a pack of cigarettes is not irrational, because preferences are subjective. You might regret it later, but that's simply you modifying your preferences.

I would argue though that if you modify your preferences frequently, you are indeed an irrational person. E.g. buying a pack of cigarettes and always regretting it later.


> I would argue though that if you modify your preferences frequently, you are indeed an irrational person. E.g. buying a pack of cigarettes and always regretting it later.

Would you say that someone who is constantly struggling with their body fitness versus what they would like it to be is irrational?


Yes that's my position.

If you have a long-term goal of body-fitness but eat fast food each day then your long term goals are fluctuating wildly. Another possible description of this behaviour is high time-preference, which means your goals are more short-term rather than long-term.

Maybe there is a better word for this behaviour than irrationality?

We also are treading into difficult areas such as free will etc.


Akrasia


This is an interesting way to think about why utilitarianism doesn't always comport with our moral intuitions. But I was really hoping this article would be about a user interface for business applications.


...or a To Do list app with a priority view.


I'm having issues understanding this:

> let’s assume that the utility gain for the gifted son from living in the suburbs would be larger than the utility gain for the disabled son from living in the city. A pure utilitarian, then, must choose the suburbs.

Everything's okay so far. But then he says this:

> Nagel’s view is this: if you say that you would live in the city for the sake of your disabled son, despite it being the case that moving to the city creates more utility in total, you are not a utilitarian

Didn't the author just say moving to the suburbs creates more utility in total?? And now he's saying moving to the city is what creates more utility?


Yeah I had to reread it a few times, it's just a mistake.

Of course maybe the kid being more accessible to doctors might increase the doctors' utility, but that's not what he meant.


Author here, thanks for catching that error.


Is this not equivalent to medical triage? Those who will die anyway, and those who will live anyway, get no immediate treatment (apart from morphine). Those who will not survive without treatment, get the resources.

Thus: move to the city, as the gifted child will most likely do just fine in life. Have dinner with the depressed friend, because the happy friend will enjoy the concert anyway.

Seems perfectly consistent with even simplistic utilitarianism.


Maybe the gifted teenager, by getting priority early on, would be able to better support his brother later in life. The parents aren't going to be around in the boys' old age.


> Have dinner with the depressed friend [...] Seems perfectly consistent with even simplistic utilitarianism.

Well, the problem is specifically stated as the concert having higher total utility.


> move to the city, as the gifted child will most likely do just fine in life

Why would you make this assumption though?


> Let’s introduce another scenario: Imagine that the gifted boy has a total utility of 80, and the disabled boy has a total utility of 40.

The thing is that day-to-day life doesn't provide those "scores" when it comes to humans and their interactions; life is not a video game (even though we certainly do try our best at transforming it into one).

Writing down that one could assign scores in such a scenario is normative, i.e. we take for granted that such scores are possible and, even more (that's what makes it normative, imo), we somehow impose on the reader the notion that he/she should regard this score-setting as a fact of life, as normal - that, at the limit, the reader herself should join the game of assigning scores to human actions.


I think this is inevitable when you try to abstract and systematise anything. You can't think about big and complex things with all details in mind. You abstract. The same way you don't see the person in front of you as a collection of millions of cells. You see them as an entity.


>You abstract.

The danger is that you end up with a "perfectly spherical cow" that you then base your arguments on, and your abstraction-based results have no bearing whatsoever on the real world.


I'm not saying that you must follow your abstractions blindly. At the end you must always test your theory against reality and adjust if reality contradicts your assumptions.


Yeah, fully agreed.

I think trying to quantify such things can sometimes be useful as a tool of thinking, but you always have to be extremely careful not to confuse the map with the territory.

If you're making weird calculations with hypothetical utility values, you're making your argument on the map - and you always have to make sure it still makes sense when converted back to the territory.

Numbers can also be used to make wildly unrealistic assumptions seem reasonable or hide additional circumstances that would be required for the argument to apply.

E.g., if you mapped back the utility values from the OP to an actual situation, you'd have a gifted son who is somehow absolutely thrilled to move to the suburbs and a disabled son who is mostly indifferent about whether he has to travel for several hours frequently or not.

The only situation I can think of in which that behaviour seems remotely plausible is if they were already living in the suburbs and the decision is really about moving to the city (and losing their complete social circle) or staying where they are.

If that were the case, the "low additional utility" of the disabled son would more likely be a conflict: The son might appreciate a lot not having to travel so far, on the other hand, he doesn't want to lose all his friends. So, in numbers, a high positive and a high negative utility, which the theory assumes you can simply add up to get a total utility. But that assumption doesn't seem to have merit to me.


Such scores are clearly possible. If you define a one-dimensional scale you can project everything onto it. Clearly you lose a lot of information in doing so, but if your scale is "overall utility" of some kind and that utility is the only thing you care about, then of course you can do it.

In principle anyway - you can't actually calculate someone's utility. It's a thought experiment.


But the one-dimensional scale also brings in a lot of implicit assumptions that you have to be aware of - e.g. that you can add or subtract individual utility values and the result will still be meaningful.


Yes that's true. 10 personal chefs do not have 10 times the utility of 1 personal chef.


One way to reason about this: instead of the score being one person's numerical value, it is the percentage of people who make a binary choice.

Put another way, you can calculate one rating between 0 and N by sampling N ratings of 0 or 1.

So donuts-for-breakfast has a score of 5 not because it is quantifiably 5/100ths awful, but because only 1 in 20 people choose it.
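A toy simulation of that reading, assuming the score really is nothing but the fraction of yes votes:

  import random

  def score(n_people, p_choose):
      # a 0-100 score built purely from n binary choices
      votes = sum(random.random() < p_choose for _ in range(n_people))
      return 100 * votes / n_people

  print(score(10_000, 1 / 20))  # hovers around 5: 1 in 20 people choose it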


“Utility scores” and their close cousins “prior probability distributions” seem to me like a way for mathematically inclined brains to frame their decisions in “math” because anything else feels “irrational” and icky.

To me, it seems like the assignment of priors and utility scores is mostly arbitrary in these types of personal decision-making applications. How does one arrive at a score of 40 and 80? Does the magnitude of the difference mean anything? What's the range on utility?

If these are just random numbers plucked from thin air then how is utility different from a feeling which you can plug into some equations? How does saying one thing has 80 vs 40 utility mean anything other than “I feel a little better about this than that”?

And if utility is just a numerical representation of a feeling, how do the results of these equations produce anything that we can interpret?


You can assign scores, say, in a health economics setting, where you're a public health system choosing the drugs to fund that will do the most good.

Do you buy the drugs that keep 70-year-olds with X disease alive for 10 more years, or the ones that keep 20-year-olds with Y disease alive for 2 more years?

While it might be qualitative for individuals, it can become (more) quantitative for populations.


This article gets at the basic idea of a subfield of economics called "welfare economics". The general problem is, how do you combine individual well-beings into an aggregate to make decisions that affect multiple people? We can also answer questions like "Given certain assumptions about bargaining outcomes (nash equilibria etc) what aggregation function will rational actors come to on their own?"

This article presents a "priority view" as a contrasting moral view to "pure utilitarianism" and wonders why the view hasn't caught on outside of moral philosophy. The answer is that it has, and outside of philosophy we have models of aggregate utility that subsume both of the moral views in the article. This was a very active field from the 1930s-1970s, and now most of the interesting work here IMO is done in the cryptocurrency space (trying to find ways to prevent forks or incentivize participants to be pro-social).

The two points of view described in this article are just two specific "social welfare functions" we could optimize for. There are many others.

The "priority view" in this article is known as the "Kalai egalitarian bargaining solution" in economics and game theory, or the "Rawlsian" social welfare function (maximize the minimum individual utility):

https://en.wikipedia.org/wiki/Cooperative_bargaining

The "pure utilitarianism" view is not a bargaining solution, but it's known as the "Benthamite" social welfare function (maximize the sum of individual utilities).


I'm curious about whether anyone has studied using the harmonic mean (or at least, the reciprocal of the sum of reciprocals) as a way of aggregating utilities. I haven't had time to research this, but the thought repeatedly occurs to me, as I have gifted kids in a school district that is especially concerned with the gap in achievement between the highest and lowest performers. I can't shake the feeling that what they really should want is a measure that prioritizes helping the lowest performers but does not consider the performance of the highest performers to have literally negative value, and the harmonic mean fits that bill.
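The property you describe is easy to check numerically; a quick sketch, assuming all utilities are strictly positive (the harmonic mean breaks down at zero):

  def harmonic_mean(utilities):
      # dominated by the lowest values, yet strictly increasing in everyone's utility
      return len(utilities) / sum(1 / u for u in utilities)

  print(harmonic_mean([80, 40]))  # 53.33
  print(harmonic_mean([80, 41]))  # 54.22: helping the worst-off moves it a lot
  print(harmonic_mean([81, 40]))  # 53.55: helping the best-off still counts, slightly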


> The "priority view" in this article is known as the "Kalai egalitarian bargaining solution" in economics and game theory, or the "Rawlsian" social welfare function

No, that is a common misunderstanding, especially understandable here since some of OP's formulations also conflate the two. If you're an econ person a good text that puts the prioritarian view in context is Matthew Adler's 2020 Measuring Social Welfare: An Introduction. I'd also recommend that book to anyone here who claimed e.g. "we can't compare across persons!" but who is open to reading a case against that claim.


I don't think I'm misunderstanding. Here's a quote from the article.

"The point being made here is that we do not assign any moral value to decreasing the well-being of those who are better off ..., but we do assign more moral value to increasing the utility of those who start from a lower base."

That's explicitly 0 weight on those who are "better off", leaving all the weight for those with the lowest utility. That's precisely the Rawlsian function. Traditionally Rawls only described the most extreme version, but softer versions, with weights that are not all zero or one but follow the same pattern, are in the same family and also generally called "Rawlsian".


No, the OP claim "we do not assign any moral value to decreasing the well-being of those who are better off" is different from your claim "0 weight on those who are "better off", leaving all the weight for those with the lowest utility".

The former is a way, albeit too compressed by OP, to distinguish prioritarianism from the kind of egalitarian view where interpersonal relative differences affect the value of increasing/decreasing one individual's well-being, which makes egalitarianism the target of the so-called levelling down critique. See Parfit's article that OP discusses for more on that distinction and critique.

Prioritarianism does not give 0 weight or value to increasing the well-being of those who are already better off - on the contrary, a distinctive feature of prioritarianism is that increasing someone's well-being always has some value. It is only how much value that varies with the absolute level the increase starts from. In jargon, the most prominent version of the prioritarian social welfare function is continuous, strictly increasing and strictly concave. In contrast, a leximin SWF (sometimes called "rawlsian" because of some points of similarity with one component, the difference principle, of Rawls's systematic theory of justice) gives absolute lexical priority to increasing the worst-off position's well-being.
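A toy sketch of that distinction, with sqrt standing in for an arbitrary strictly concave transform (my illustration, not Parfit's or Adler's formalism):

  import math

  def prioritarian(utilities):
      # concave transform: gains always count, but count more at low levels
      return sum(math.sqrt(u) for u in utilities)

  def leximin_key(utilities):
      # lexical priority to the worst-off: compare sorted utility vectors
      return sorted(utilities)

  a = [40, 10_000]  # from a baseline of [40, 100]: huge gain to the best-off
  b = [41, 100]     # from the same baseline: tiny gain to the worst-off

  print(prioritarian(a) > prioritarian(b))  # True: the huge gain still has value
  print(leximin_key(b) > leximin_key(a))    # True: leximin only sees 41 > 40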

All this said, perhaps there is a way to relax or modify leximin SWF so as to make it functionally equivalent to a prioritarian SWF, but I'm not familiar with anyone doing so (happy to learn if you have a reference though!) and, regardless, to call that rawlsian would be unfair to Rawls since he definitely did not propose or argue for such a view in any of his texts on political philosophy and the view he did argue for is definitely incompatible with it.


Priority can be mathematically stated as maximizing the minimum expected utility across a set of people's utility functions instead of maximizing the additive or average expected utility.

It still suffers from the utility monster who can make trivial inconveniences as numerically terrible as the worst life imaginable for other people, dragging the world down to a merely comfortable level for everyone else, which may frustrate the desire to thrive and grow in those other people. It caps the effect at not letting anyone else be worse off, which seems desirable over alternatives. It potentially leaves a lot of good on the table to avoid the risk of a lot of harm.

It sounds like a good initial optimization strategy until we figure out how to unify disparate agents' utility functions into a global optimization problem if that turns out to be possible.


I wish very much that explainers of this kind would NOT center on exemplars at the individual level. In practice there is no applicability of political philosophy at the individual level, only absurdity.

Where this work does have some value is in policy at scale. One's only hope for fairness, equity, any qualities of importance, depends on modeling and quantifying. All models are wrong, but in these cases it is incumbent on us to try to make them as useful as possible.


An ethics for society is pointless as society has neither subjectivity nor moral agency. You may as well draw up a system of ethics for the weather.

All ethics is individual ethics, as individual subjectivity is the only subjectivity, and individual moral agency is the only moral agency.


So call it an ethics for society's leaders. They do have moral agency, but we expect them to channel their individual subjectivity in a certain way.


I don't think it makes sense to partition society into leaders with moral responsibility, and followers without. It seems peculiar to think that what is good is shaped by external circumstance. Certainly if an action is good, it is good regardless of your lot in life.


No one said followers have no moral responsibility.

It's fine if it's one morality for everyone. It's just that as an individual with little power, the pursuit of that morality will involve very different actions than for individuals with a lot of power in relevant areas.


If it is the same morality for everyone, then why bother categorizing people into how much power they have?

If a person is just and poor, then Elon Musk dies in a freak self-driving car accident and it turns out this just person is the single estranged heir, surely they haven't suddenly become more or less just by this unexpected windfall...? Whatever was good before is still good after, and whatever was evil before is still evil after.


You're being quite sloppy with categories. You've shifted from saying "if an action is good, it is good regardless of your lot in life" to talking about people being more or less good, based on wealth. Nothing else in the conversation so far assumed a moral status in people; only in actions.

Categorizing people by power follows from the fact that power enables actions unavailable to the powerless. I cannot meaningfully shift public opinion on climate change. Someone investing a billion dollars into cultural messaging probably could. The same ethics could apply to me and the billionaire: say, a rule of maximizing one's impact on the phenomenon most likely to negatively affect the most people. Now if you assume (as you seem to) that utility isn't part of the morality equation, then both I and the billionaire could each try our best and be equally good. But that's not an obviously true thing, and I think most people these days would assume that ends matter. In that light the billionaire can do more good than I can, and although the same ethical rule might apply to each of us, it's proportionally more relevant to the billionaire. So: an ethics for society's leaders.


> If it is the same morality for everyone, then why bother categorizing people into how much power they have?

While (as you say) analytics tools such as the one under discussion aren't useful for assessing individuals, they are appropriate for assessing populations.

When you asserted that only individuals have agency here, that was refuted with the example of leaders, who do have to make difficult choices about populations, and rely on tools such as this to do so.

In no way was that about different moralities for different people. Just about who can use this tool in a useful way. If you read the thread you'll see it.


> While (as you say) analytics tools such as the one under discussion aren't useful for assessing individuals, they are appropriate for assessing populations.

Right, and I say this is a category error. Populations don't exist the same way you and I do. A population can't suffer, because it can't experience. The individuals making up the population can suffer and experience, but that's a different thing entirely. The collective entity only has objectivity.

If I step on a lego brick, and dance around on one foot in pain, so will my shadow and mirror image, but I'm the only one that felt the pain.

> When you asserted that only individuals have agency here, that was refuted with the example of leaders, who do have to make difficult choices about populations, and rely on tools such as this to do so.

I just don't understand how this is a refutation at all.

> In no way was that about different moralities for different people. Just about who can use this tool in a useful way. If you read the thread you'll see it.

Surely the outcome of actions can't determine whether they are good or not.

If a large man with an Austrian accent knocks on my door and asks if I know where Sarah Connor lives, and I helpfully tell him that she lives next door, and he goes and murders her, am I a villain because I unknowingly helped the killer?


> Populations don't exist the same way you and I do.

This is the premise upon which something could work for populations and not individuals, yes.


Would it have been better if you hadn't done so?


OTOH, maybe politicians shouldn’t be trying to meddle in the wellbeing of individuals beyond providing access to infrastructure and policing?


There is room for more systemic improvement than that, surely. Or do you count education, social safety net, basic healthcare as infrastructure?


How do we measure "should" or "shouldn't" there? Answering whether you're right or wrong requires looking at the alternatives and seeing whether political programs that go beyond infrastructure and policing create a better world.

There are arguably answers from all over the world that they do: universal healthcare and education are obvious ones.


It is simply a mistake to use constructions such as "utility gain for the gifted son from living in the suburbs would be larger than the utility gain for the disabled son from living in the city". Utility is first-person relative to the decision-maker. Your assessment of the utility of each outcome is dependent on your subjective preferences over the options. You, the decision-maker, do not, and cannot, know the subjective utility values that your two sons would assign to their own outcomes.

Subjective utilities are not fungible: you cannot ask Son A how much utility he expects and then compare that number with the answer Son B gives you.

Thus, if you prefer A to B, yet you somehow wrote down that the utility of B is higher than A, you just made a mistake somewhere. Utility is just a scalar-valued encoding of subjective personal preference. If you are using it in some other way, e.g. pretending that you are accurately measuring the subjective utility of your sons (rather than your own subjective preferences), then you are going to get weird and usually useless answers.

Some people do use utility in such a way that they assign numbers to other people's well-being, but doing this always leads to unresolvable paradoxes such as the Repugnant Conclusion, because it's just not how people think and decide, nor should it be.


Supporters and opponents of the utilitarian framing of the benefits of (sub)urbanity are both being over-simplistic.

Of course we make decisions on the balance of their expected outcomes. The problem is that we can't in general predict outcomes with certainty. So intelligent decision-making is not merely picking the best expected outcome, but factoring in the range of all possible outcomes on a probabilistic basis.

In this thought experiment, it seems that city-dwelling is highly probable to benefit the disabled kid, but we have less a priori certainty that suburb life is better for the accelerated learner (it may be better for him today, but it's plausible to think that it's long-term good for a smart kid to experience some amount of adversity in a tougher environment compared to a more comfortable sheltered suburban setting, or to learn by example that it's sometimes worth risking personal optimality to serve the needs of others).

So yes, the notion that we should prioritize the needs of the bottom of social hierarchies is worth considering, but it's even more important to factor in uncertainty, to have no pretense of one's ability to predict the future.


This is surely a fairly highly rated idea. At least where I live now and where I come from (at the time I lived there) it was regarded as more important to raise the standard of the poorest students than to raise the standard of the top performers for instance. At least when I was in primary school my teachers spent more time with those who found studying difficult than those of us for whom it came easy. It was made clear to me as a high achiever that help was always available but, as it was clear that I could work on my own, that I was expected to do so.

It also surely accords with Marx's slogan:

"From each according to his ability, to each according to his needs[1]"

Perhaps this idea isn't so popular as it used to be.

[1] https://en.wikipedia.org/wiki/From_each_according_to_his_abi...


It's interesting because neither helping the strongest students nor helping the weakest has any reason to be maximally beneficial to society. From a pure utilitarian point of view, we should first help those students who would get the most benefit per hour of help. That probably means the middling ones who just need a hand to get them over a hurdle, not the checked-out ones and not the superstars.


That's a great, practical point. One anecdote about smart kids though: I knew a number of very smart boys growing up who got bored at school and checked out for that reason, and life didn't work out great for them - nor did society benefit from the very positive productivity they might have been capable of. I think they'd have done better with appropriate challenges.


I always talked about ideas since I was a kid and got lots of applause, but no one ever bothered to actually take my hand and try to explore with me. I didn't simply get bored; I ended up in complete social isolation. In my personal case the neglect actually runs deeper and goes beyond the education system, but if you fail to "get people on board", any other metric will be irrelevant.


This happened with both myself and my sister. Thankfully, I dropped out and started attending community college where I could proceed at my own pace. I don’t think a lot of parents are aware of this option.


> That probably means the middling ones who just need a hand to get them over a hurdle, not the checked-out ones and not the superstars.

From my experience the middling ones are exactly the students who have the highest cost/benefit ratio. The checked-out ones are the students for whom a small amount of attention could get them over the hurdle that's completely blocking them, which is what caused them to check out. Maybe you find out they are dyslexic. Or you find out, after a short conversation, that they are not eating breakfast or lunch. Or you find out they're checked out because they're bullied. A small change here can yield a drastic improvement.

The middling ones are usually already firing on all cylinders and still not doing great, so it takes a lot of work to convince them to apply themselves even more.

TL;DR: it's easier to get a student from F to C than from C to A.


> There is also a catch in the hypothetical - let’s assume that the utility gain for the gifted son from living in the suburbs would be larger than the utility gain for the disabled son from living in the city.

I think that stretches credulity too far, so a utilitarian shouldn’t be expected to accept the far-fetched assumption.

Imagine if it said “let’s assume that depriving your disabled son of food for a week and giving the extra food to your gifted son would increase total utility, would you do it?” Or “let’s assume that giving $1 from a poor person to a billionaire would increase total utility, would you do it?”

You don't get free rein to make ludicrous assumptions and expect me to agree to them to prove that I'm a utilitarian.


Maybe prioritarianism is similar to increasing average log-utility... A sort of social Kelly criterion [1].

[1] https://en.m.wikipedia.org/wiki/Kelly_criterion
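If that reading is right, the implied priority weighting falls out directly. With W = \sum_j \log u_j, a marginal unit of well-being given to person i is worth

  \frac{\partial W}{\partial u_i} = \frac{1}{u_i}

so gains to the worse-off count for strictly more, yet gains to the better-off never count for zero - the prioritarian shape discussed elsewhere in the thread. (Just my gloss on the analogy, not something from the article.)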


Reads like the parable of the lost sheep from the Bible:

“What man of you, having a hundred sheep, if he loses one of them, does not leave the ninety-nine in the wilderness, and go after the one which is lost until he finds it?”


“hey! no, you idiots: the least of these, god dammit!!” is the plaintive cry of all the prophets, about half the psalms, and a fair bit of the new testament.


This seems like simply another version of utilitarianism, but with a nonlinear utility function.


It seems that the priority view may feel intuitive because humans fail to internalize that diminishing returns are already baked into the utility gained from some benefit, and so they try to insert them a second time.


I agree, although you could turn it on its head and say it's also a good example of how utilitarianism loses its explanatory power, because it can explain away anything just by changing the utilities.

The problem with utility has always been in defining the utilities.


And lack of knowledge about Prospect Theory.


How does prospect theory apply to the scenario as proposed?


The article states: "you might consider a small amount of happiness for someone who is depressed to have more moral value than a larger amount of happiness for someone who is already fairly happy". For me this looks like the prospect theory curve: the depressed friend is at a loss, so you assign greater value to their happiness gains, regardless of the real utility.
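For reference, the Kahneman-Tversky value function has exactly that shape (parameters are their 1992 estimates, quoted from memory):

  v(x) = \begin{cases} x^{\alpha} & x \ge 0 \\ -\lambda (-x)^{\beta} & x < 0 \end{cases},
  \qquad \alpha \approx \beta \approx 0.88, \quad \lambda \approx 2.25

Concave for gains, convex and steeper for losses: a small gain to someone sitting in the loss region is valued disproportionately, which is the mapping to the depressed friend.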


> Prioritarianism, or the priority view, is a view within ethics and political philosophy that holds that the goodness of an outcome is a function of overall well-being across all individuals with extra weight given to worse-off individuals. Prioritarianism resembles utilitarianism.

From Wikipedia, just for context...


"the utility monster: one person receives much more utility from each unit of resource they consume than anyone else does. For a utilitarian, it follows that every resource ought to be directed towards the utility monster in order to maximise total utility."

That is exactly the reality people see day by day in their work life/ business environment. If you have a business unit that creates more return per unit resources input than another one, the former is always favored.

Our whole society so fetishizes efficiency, constant improvement and economic growth/success that these have entered into all parts of human life and interaction. So it is no wonder that the more moral priority view has never gained, and never will gain, mainstream application, as it goes against the grain of the current ideology.


For those looking for more support (or criticism) for this way of thinking, the reasoning behind the priority view is quite similar to John Rawls’ arguments that people would adopt a maximin (making the least good outcome as good as possible) strategy when behind the “veil of ignorance” (imagining setting up a society in which you don’t know how advantaged or disadvantaged you’ll be). Here’s more: https://plato.stanford.edu/entries/original-position/


> "Imagine that the gifted boy has a total utility of 80,..."

Argument discarded.


Lots of people are getting hung up on the measurement.

The key word is "imagine" y'know.


Imagine that the Imagination Quotient is a number between 0 and 1, with 0 being the inability to imagine even what you're looking at, and 1 representing the ability to imagine a real thing unseen in full detail. The Imagination Quotient required to imagine that someone's utility is 80 would be a proper imaginary number.


I can't even imagine the Imagination Quotient required for some social scores.


I have not read Parfit's paper, but in the article only the utilities of the gifted and disabled boys are considered and the utility of the parent is neglected. This gives a partial view, because the decision of a parent sensitive to utility will also need to account for the utility cost / benefit to themselves of the move (e.g. if they will decrease their own utility by feeling guilty about favouring one child's utility more than the other).


> “The suburbs has benefits for your gifted son - the levels of the crime in the city are fairly high, the cost of living is higher and so your home would be smaller, and so on.”

What a strange argument. I realize it’s more of a thought experiment, but the benefits of a city’s cultural life are obviously greater for a gifted child than having more space at home.

If you were a talented 15-year-old, would you prefer to live in Manhattan or a New Jersey suburb?


This is a tangent, but the "utility monster" scenario only makes sense if the utility gained from an activity remains the same no matter how many resources are put into it. This doesn't match how people actually work; almost all goals or resources or pleasures have diminishing returns, or homeostasis. Do negative feedback loops exist in this philosophy? Perhaps I'm misunderstanding the point.


Isn’t this just Rawls’s theory of justice? That the inequalities that exist in a society should be arranged to benefit the people who have it worst?


They do seem similar in that they attempt to address shortcomings in utilitarianism, but Rawls seems to reject the utilitarian framework in favor of justice, whereas this is more of a tweak.


Ah, yes. The utilitarian angle is important, Rawls is more of a deontologist.


Section 3 of the linked Parfit article touches on the notion that Rawls's distinction is mostly one of framing, not of actual recommendations:

"Third, Rawls regards Utilitarians as his main opponents. At the level of theory, he may be right. But the questions I have been discussing are, in practice, more important. If nature gave to some of us more resources, have we a moral claim to keep these resources, and the wealth they bring? If we happen to be born with greater talents, and in consequence produce more, have we a claim to greater rewards? In practical terms, Rawls's main opponents are those who answer Yes to such questions. Egalitarians and Utilitarians both answer No. Both agree that such inequalities are not justified. In this disagreement, Rawls, Mill, and Sidgwick are on the same side."

That said, to your original point, Rawls does espouse maximizing for the least well-off ("maximin"). The "priority view" doesn't need to be so extreme. It can simply be biased toward improving the lot of the badly-off, rather than putting an absolute priority on the worst-off member of the population. They are definitely related, but it's muddled by Rawls's (somewhat inconsistent) rejection of a consequentialist framing.


Is there some threshold of utility that people should have beyond which prioritarians would not or should not feel a need to give utils to a person? Sort of like "make at least $XX for the place you live to be happy"?

If we assume there is, would a prioritarian view be to get people to that threshold, and then just utilitarianism?


The article feels like a very wordy way of saying that it may be better to optimize the overall utility than individual utility. Social welfare captures this notion in economic theory and maximizing social welfare is often a cited motive for macroeconomic policies.


My main take-away from this is that political philosophy must not spend much time with actual data if this is the level of numeric discourse. The models they use for very complicated decisions like the one described seem just crashingly unsubtle from this.


I don't agree with the premise of applying numbers in these hypothetical, trolley-problem-like situations. Humans aren't numbers, and you could never calculate a utility value or predict with certainty what would happen if you make a choice, well enough to make these hypotheticals useful, in my opinion.

But playing along, while moving to the suburbs or city is a common decision to make for families, I would argue that the priority view is the same as utility. The priority raises the value/score itself.

Let's say hypothetically you could go out to dinner to help talk a depressed friend out of suicide, or you could go to a once-in-a-lifetime meeting with an investor to pitch him on a startup idea about helping prevent suicides. Now I would argue the decision is a bit more murky. In one you have a probability of helping prevent one suicide in the short term, and in the other you have a probability of helping prevent multiple suicides in the long term.


That is the whole point of moral philosophy: to try both to model how humans make moral decisions and to help us make better moral choices.

To just wave away the work of hundreds of philosophers over hundreds of years with "it's murky" ... maybe philosophers already understand that.


I don't understand what's murky about your hypothetical as proposed. Go help the depressed friend. Even if you successfully pitched the startup to the investor, the utility ultimately gained is indeterminate, versus the very immediate and real utility gained from successfully preventing a (presumably permanent) suicide.

Your example though is good to illustrate the fallacy of "end justifies the means" type thinking. When you suppose the outcome (talking to the investor leads to the startup being created which then presumably prevents suicides) for one scenario but not for the other the entire thing is meaningless.

With these types of scenarios you can see the error by applying one scenario and overlaying it on the other:

1. Talk a friend out of a suicide who does (2)

2. Engage in a once-in-a-lifetime investment opportunity for suicide prevention.

Clearly 1 includes 2, so 2 is the answer. You might say that's not the scenario you posed, which is true, but I'd counter and say that's the issue with contrived examples to begin with.


Isn't this related to proportionality? Giving somebody with only $1 another dollar increases their wealth by 100%. Giving somebody with $1 million another two dollars increases their wealth much less, proportionally.


Seems fine until you do this kind of stuff over “identity groups.” This view taken beyond an individual level seems to be the cause of the current negative social situation.


Am I right to conclude that priority view applied to college admissions would mean favouring the worst students, because they have the largest marginal utility?


ITT, utilitarians discover diminishing marginal utility.


Surely that doesn't apply here, given the utility numbers we're assuming? Rather, it's presumed to be priced in already.


Rather, I think this is utilitarians struggling to "price in" an obvious observation about society and moral reasoning.

Which is, of course, ridiculous, since there is no such thing as general utility, but this is utilitarians we are talking about, unfortunately.


A general utility claim: One person suffering from the severest form of migraine is bad. Two persons suffering from the severest form of migraine is twice as bad. Here's another: one person suffering from the severest form of migraine is worse than one person experiencing a mild itch.


Yeah, that's wrong. The suffering is subjective and calling it "twice as bad" is spurious.

This is one of the core errors of utilitarianism, the idea that "utility" can be objectively measured and compared between different humans, and thereby, treat moral reasoning like a math equation.


The suffering is only subjective in the sense that a particular migraine is generated in a particular brain where only one person, me the subject, has direct experiential access to it. But migraines and brains are biological processes in a physical world. My brain isn't magically different from other brains, so I can reason to the conclusion that a maximally severe migraine in another person's brain is equally awful and bad. Both my brain and theirs are part of the world, and the states of our brains partly constitute how good or bad the world is at a particular time. A world with two severe migraines is twice as bad as a world with only one.


"twice as bad"

No, it isn't. This is what you need to get through your head: the brains (really, the souls) are always different, and there is a huge amount of subjectivity involved in experiencing a migraine, for example. You are trying to make moral reasoning neat and tidy math, using precise-sounding terms like "twice", but it simply isn't.

Basic utilitarian reasoning leads to obviously incorrect moral positions such as liquidating a small, troublesome and annoying minority being fine so long as it increases "the greater good". This is where the math leads, and it's all built on a foundation of clay, namely that there exists a mathematically precise, comparable measure of general "utility" that can be compared, summed-up, etc. between moral agents.

The whole thing would be pure comedy, were it not for the fact that intelligent people take it seriously.


You conflate the question of interpersonal wellbeing comparison with both adjacent and orthogonal issues I have said nothing about (level of precision, mathematical modeling, deontic criteria, ...). What I describe is practiced daily in health care priority work and risk mitigation in e.g. macro-level infrastructure planning. Meanwhile the utilitarians I know are busy working to end world poverty, eradicate malaria, minimize global pandemic risk, improve health access and end factory farming. The evidence on the severest forms of migraines does not support a huge experiential variation in the people afflicted - they max out suffering for virtually everyone. https://www.preventsuffering.org/cluster-headaches/


Yes, many utilitarians end up being good people despite their utilitarianism. They take a best guess at what is right, in a murky, conflicted world, and then work on that, perhaps backfilling a utilitarian just-so story to make themselves feel better. I applaud them for that.

But utilitarianism as a philosophy is obviously, and will continue to be, ridiculous for the simple reason that there is no such thing as general utility.


I have already given simple examples of interpersonal wellbeing comparisons of a type that are both common-sense and in fact widely put to practical use in health care and risk mitigation in all countries in the world. If you want to dispute that then do present evidence; saying "no" is not enough.

Here is a simple test: imagine you can press either button A or button B. If you press A 10000 people are spared the severest migraines. If you press B one person is spared a mild scratch. Which should you morally do and why? My answer is: you should do A and the reason why should do A is because we can compare the outcomes in terms of wellbeing and outcome A is comparatively much better than outcome B. What is your answer and why? Anyone denying that interpersonal wellbeing comparisons are possible or that we can in any way aggregate wellbeing ("there is no such thing as general utility") seems forced to say that there is no way to discern a moral difference between choice A and B. Is that really what you believe? If yes how do you put that into practice with regard to choices in everyday life?

As for backfilling, it seems to me that some people express aversion to utilitarianism, and portray it as very different from what it really is, as a way to shy away from taking moral responsibility for the consequences of what they do and omit to do. In fact utilitarianism is in the world as it is making similar demands as several other ethical theories https://twitter.com/ben_j_todd/status/1463867302089302019


As you well know, that example is ridiculous because of the obvious magnitude difference between the two. Moral questions aren't interesting at that scale: nearly every moral system will give you the same answer.

A more interesting question from the utilitarian perspective is this: imagine you can press a button A. If you press A, 10000 people are spared the severest migraines, but one person dies. Do you press it? If yes, two people? If yes, what N is too high? Ah, easy: just sum up the total negative utility of 10000 migraines, then the negative utility of being dead multiplied by N, and you have your answer. Laughable, of course.

The idea that people opposed to utilitarianism are shying away from their own moral responsibility is ridiculous. Utilitarianism can be used to justify obviously wrong acts, such as, again, the liquidation of small and annoying minorities in order to increase the total utility. The fact that most utilitarians would be horrified by this proposition isn't an argument in favor of utilitarianism. Rather, it is a testament to the moral common sense of even utilitarians.


Your "obvious magnitude difference between the two" is an interpersonal wellbeing comparison claim, the thing you earlier thought impossible. Progress.

"A more interesting question from the utilitarian perspective is ..."

In general, intervention priority in actual health care is guided by utilitarian-inspired health economics calculations. Because even if we wanted to treat all migraines fully and prevent all premature death, there are limits to time, labor and resources. Consult any health economics textbook for an overview, for example "Cost-Effectiveness Analysis in Health: A Practical Approach, 3rd Edition". Can you reply with specific critique of where such established methods go wrong, and what alternative you suggest we replace them with in day-to-day health care priority work? Focus on the sections about QALYs and, since your hypothetical involved a comparison with death, look especially at the value estimation method called "standard gamble" https://www.sciencedirect.com/topics/medicine-and-dentistry/...
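For readers without the textbook: a QALY weights life-years by a 0-1 health-state utility, and the standard gamble elicits that utility as the probability p at which you are indifferent between the health state for certain and a gamble of full health (probability p) versus death (1 - p). A minimal sketch with invented numbers, echoing the drug question upthread:

  def qalys(extra_years, utility_weight):
      # utility_weight: standard-gamble indifference probability
      # (0 = as bad as death, 1 = full health)
      return extra_years * utility_weight

  print(qalys(10, 0.70))  # 70-year-olds, X disease: 7.0 QALYs per patient
  print(qalys(2, 0.95))   # 20-year-olds, Y disease: 1.9 QALYs per patient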


The terminology chosen is terrible: "priority" in this context is self-referential and doesn't mean anything. All of those (equality, utility, ...) are strategies for defining priority in the face of scarcity or competing requirements, so calling one of them "priority" is nonsense. The author is basically saying that:

- Equality strategy is to prioritize decreasing differences. (OK)

- Utility strategy is prioritizing maximizing the sum total of a set utility function. (OK)

- Priority strategy is prioritizing priority. (WTF!!!)


> Priority strategy is prioritizing priority. (WTF!!!)

From the article:

"Parfit’s answer is that we might value priority, which is prioritising the well-being of the worst off."


>There is also a catch in the hypothetical - let’s assume that the utility gain for the gifted son from living in the suburbs would be larger than the utility gain for the disabled son from living in the city. A pure utilitarian, then, must choose the suburbs. Nagel’s view is this: if you say that you would live in the city for the sake of your disabled son, despite it being the case that moving to the city creates more utility in total, you are not a utilitarian (at least in all circumstances), but rather an egalitarian. You value the equality of the boys more than you do maximising the overall levels of well-being.

The very idea that there is some measurable "utility" to compare in the two cases, independent of your moral values and sentiments, is inane.

>Let’s introduce another scenario: Imagine that the gifted boy has a total utility of 80, and the disabled boy has a total utility of 40.

What would that "utility" unit measure?

Money they can make for you? Their potential on their own? Their future contribution to society (in what terms? monetary? intellectual?)? Any other of 500 factors (perhaps combined)?

What if you don't want to help build a society that neglects the needs of disabled people because of their lesser contribution, and thus your utility function - i.e. your desired goal maximization - includes helping the disabled son?

In the examples, it is assumed that utility == favoring the gifted son, which means the utility function you'll use is taken for granted (and the whole thing is presented as only a matter of whether you value utility or not).


Utility is a fundamental concept in decision theory. Arguing it cannot exist is the opposite extreme to arguing that homo economicus, with superhuman evaluative strategies and no gaps in rationality, exists.

People can compare their current situation to relative improvements, and often those comparisons are transitive (though not always). So in most reasonable cases the mathematical axioms needed for utility to be defined hold in a reasonable way, allowing for comparison through a formalized utility function.
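A toy sketch of that construction: once preferences over a finite set are complete and transitive, any order-preserving numbering works as an ordinal utility function (the options here are hypothetical):

  # best to worst, complete and transitive by construction
  preferences = ["city", "streetcar suburb", "cul-de-sac"]

  utility = {option: len(preferences) - rank
             for rank, option in enumerate(preferences)}

  print(utility)  # {'city': 3, 'streetcar suburb': 2, 'cul-de-sac': 1}
  # only the ordering is meaningful: mapping city to 100 instead of 3 encodes
  # the same preferences, which is why ordinal utilities can't be summed across people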

One can certainly mathurbate themselves with utility, and many do, but ultimately utility is a way to simplify communication (instead of arguing from primitives) about why people make predictable choices. It extends pretty quickly to revealed preferences.

Of course, asking where preferences come from in the first place is a third rail.


>Utility is a fundamental concept in decision theory. Arguing it cannot exist is the opposite extreme to arguing that homo economicus, with superhuman evaluative strategies and no gaps in rationality, exists.

It's probably more like arguing that leprechauns don't exist.

The burden of proof [for its existence] is on those making up these "fundamental concepts in decision theory". I won't take it for granted just because they came up with it.

In any case, I'm not saying utility can't exist. I'm saying some universal utility can't exist, or if you wish: sorry, guys, you can't determine my utility function for me. I'll do it myself, thank you very much.

>So in most reasonable cases the mathematical axioms needed for utility to be defined hold in a reasonable way, allowing for comparison through a formalized utility function.

If we could have a "formalized utility function" for "most reasonable cases" we'd hardly have different morals, political parties, and so on...

It's mostly irrelevant (trivial) cases that have formalized utility functions. Everything else is political, that is, up for debate based on interests, preferences, morals, and so on -- and especially based on idiosyncrasy.

Even maximizing one's own life or health is not some constant. Many prefer to smoke, drink, and overeat, knowing full well it might kill them, because their utility function favors enjoyment over life span. Others might sacrifice their life for some cause or another.


> you can't determine my utility function for me. I'll do it myself, thank you very much

You have now stated that utility exists.

At no point should any argument depend on some universal utility function that applies to everyone. The idea of a universal utility function is a useful simplification for some beginner classes, but quickly becomes useless for anything in the real world. In fact, if there were a universal utility function, economics wouldn't be hard to study; it would be a simple optimization problem that businesses would solve.

All that we need is for everyone to assign utility in some way. It doesn't matter if your function omits critical factors, applies the wrong weighting, or is otherwise a decision you come to regret. (Note that this hindsight might be wrong, because you don't really know what your regrets would have been had you made the other decision.) All that matters is at some point you weight all the factors you consider important and make a decision based on them. You can come up with a complex formula to put numbers to it, or just go with a "gut feeling" (in many cases others are involved - perhaps a spouse). Regardless you have made a utility function for your situation.


> You have now stated that utility exists.

Only in the context of a single person. The argument presented in the article, as well as ideas like the "utility monster" are based on the idea that the utility scales of different persons are comparable.

This is not the same thing as a universal utility function, but almost as outlandish.


> utility scales of different persons are comparable.

but a decision is only made by one person, so only that person's utility function matters. A different person, using their own utility function, would come to a different conclusion and make a different choice.

So while there's no universal utility function, it doesn't matter as long as the decision maker's utility function exists (and it does, by tautological argument). In the article, the utility values of 80 and 40 for the boys are the outcome of the parent's utility function. The boys don't get a choice, and so their utility functions don't matter.


> So while there's no universal utility function, it doesn't matter as long as the decision maker's utility function exists (and it does, by tautological argument). In the article, the utility values of 80 and 40 for the boys are the outcome of the parent's utility function.

Yes, but ... the argument in the article presupposes fixed differences in utility when ascribing choices to priority, equality, and pure utilitarian views. Is it not easier to just say that one parent values improving the disabled boy's situation more, and thus the utility of that improvement is higher in that parent's view but not in another's?

Nearly any parent will choose a massive benefit for son A at the cost of a tiny expense to son B (looks utilitarian!). Nearly any parent will prioritize a sibling who is less well off in some circumstances. Nearly any parent will give the two sons equal slices of cake when they value them equally. But is it not easier to ascribe different utilities to these different circumstances instead of different allocation functions?


That's fair. The article uses phrasing like "the disabled boy has a total utility of 40", suggesting that the utility is an attribute of the boy, but I guess it would become wordy and repetitive to phrase it any other way.


Is the concept of utility then anything other than tautological? If I understand what you're saying, it's roughly that, "A person chooses things, and since I imagine that their choice process can be caricatured as a linearized ranking system, a utility measure must exist for them".

I'm not saying that's a necessarily false model. But it strikes me as such a crashingly unsubtle simplification that I'd want to see a ton of data demonstrating that's really how it works. As opposed to just being something that academics assume so they can write bold, confident papers with conclusions that they like.


Von Neumann-Morgenstern utility is mathematically precise: there is a real-numbered utility function such that maximizing utility is equivalent to choosing the correct lotteries according to the agent's preferences. So long as every decision one can have a preference about can be stated as a preference over expected outcomes (e.g. a 50% chance of ice cream over a 30% chance of cake, or, related to the article, 90% child one succeeds and 70% child two succeeds vs. 85% child one succeeds and 74% child two succeeds), the utility function exists.

Humans do not have utility functions. We have a lot of circular or contradictory preferences and other ancient machinery in our brains, and especially we do not reason about probabilities and expected outcomes accurately enough. We might be able to grow into having a utility function while still being happy about our preferences and without changing our humanity for the worse.
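
For concreteness, here's a minimal sketch of the lottery comparison from the first paragraph; the per-child utility values are assumptions for illustration, not something the article specifies:

    # Sketch of the VNM setup: preferences over lotteries reduce to
    # comparing expected utilities. Assume (illustration only) that a
    # child succeeding is worth 1 util and failing 0.

    def expected_utility(chances, utils):
        # sum of p_i * u_i over the independent success events
        return sum(p * u for p, u in zip(chances, utils))

    utils = [1, 1]            # assumed payoff per child's success
    option_a = [0.90, 0.70]   # success chances under option A
    option_b = [0.85, 0.74]   # success chances under option B

    print(expected_utility(option_a, utils))  # ~1.60
    print(expected_utility(option_b, utils))  # ~1.59, so A is preferred

Different assumed payoffs (say, weighting the worse-off child's success more heavily) would of course reverse the ranking.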


> The argument presented in the article, as well as ideas like the "utility monster" are based on the idea that the utility scales of different persons are comparable.

The article makes no implicit or explicit statement about how one defines a/the utility function, but I see no reason to believe the author thinks it's a universal function.


As indicated in the sentence directly following your quote.


>You have now stated that utility exists.

I never said it doesn't. In fact I opened this thread by writing "The very idea that there is some measurable "utility" to compare in the two cases, independent from your moral values and sentiments, is inane".

That is, it's the person-independent, measurable utility (as per the example in TFA/theory) that I called BS.

>but quickly becomes useless for anything in the real world

My sentiments exactly.

>All that matters is at some point you weight all the factors you consider important and make a decision based on them. You can come up with a complex formula to put numbers to it, or just go with a "gut feeling" (in many cases others are involved - perhaps a spouse). Regardless you have made a utility function for your situation.

Sure. But none of this has much to do with the theory as presented in the post's examples...


> The burden of proof [for its existence] is on those making up those "fundamental concepts in decision theory". I won't be taking it for granted just because they came up with it.

What? It's trivial. People want things. Things that satisfy wants have utility. You can just look this up. It's pretty basic to modern economics and philosophy.[1]

1. https://en.wikipedia.org/wiki/Utility


You call it trivial, but the second sentence says, "Its usage has evolved significantly over time"; both can't be true. And the criticism section makes some good points: https://en.wikipedia.org/wiki/Utility#Discussion_and_critici...

I also think there are a number of questionable assumptions behind it, and even your simple version of a supposedly trivial concept doesn't match the current official definition well.

So to me this isn't so much an obvious fact about the world as a synthetic cornerstone of a worldview. Sort of like Peano's construction of the natural numbers, or the way theists talk about the things that are "pretty basic" to their religion. Those things feel trivial to their adherents, of course. But the rest of us can find sweeping dismissals like yours very off-putting.


You're just defining satisfaction of wants as utility. "Things that satisfy wants have utility" is a statement of a definition, not a synthetic claim. There are plenty of measures of utility other than hedonic ones (or volitional ones, or whatever exactly your definition is specifying).


Yes, because that is what's meant by the word in this context.


I'm aware, but my point is that you're not making a synthetic claim - you're not proving the (axiological) meaningfulness of a concept. You're just saying "I use this word 'utility' to describe the satisfaction of wants".

It doesn't really answer any of the questions that were posed, about how you can measure and compare the 'want-satisfying-ness' of different things. How do you measure the degree of want? How do you measure the degree to which a want is satisfied? How do you compare those across human beings?

If by 'trivial' in your original comment you meant 'trivial' in the technical sense[0], then I'd agree with that. "I define 'utility' as 'satisfaction of wants'" is a statement that neither predicates nor proves anything of the world.

[0] https://en.wikipedia.org/wiki/Triviality_(mathematics)


It does answer the questions by reformulating them in exactly the way you did, which immediately highlights the GP's confusion: they somehow missed the subjective and relative aspects of the concept as it is used. There is no objective and absolute measure of want-satisfying-ness (or whatever).

> It's probably more like arguing that leprechauns don't exist.

> In any case, I'm not saying utility can't exist. I'm saying some universal utility can't exist, or if you wish: sorry, guys, you can't determine my utility function for me. I'll do it myself, thank you very much.

This is what I'm responding to.


And yet, the theory of utility works well under extensions for bounded rationality, with full acknowledgement of time-inconsistent preferences.

Ultimately, utility is a simplifying model of human behavior.

> we'd hardly have different morals, political parties, and so on.

This touches on where preferences come from, which utility theory is mainly silent on.


Right, but if the essence of the question is by what function you measure utility, then the question as posed by the article is moot. Utility, priority, equality: they're just slightly different cost functions. And it's not even the case that they're well-defined and clearly separated; some level of interpretation is going to be required regardless.

For example, people routinely act as if money has non-linear utility; we'll insure ourselves against things partly because being destitute is worse than the mere loss of money might suggest, i.e. each additional dollar is worth less.

But exactly how you define those non-linear relationships, especially once you include things like happiness and health and intend to aggregate over multiple individuals, is clearly tricky, and it's not reasonable to expect any one simplified model to work well in all situations in reality. It's not even reasonable for that to be knowable or computable.

So it's both perfectly reasonable to consider it ludicrous to label one such scenario as having "40" and "80" utility without having had the critical discussion of what that utility is measuring, while also conceding that the concept of utility is reasonable and... sometimes... enlightening.
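
To make the insurance example concrete, here's a toy sketch with log utility of wealth; all the dollar figures are invented:

    import math

    # With a concave (here: log) utility of wealth, each extra dollar is
    # worth less, so insurance with negative expected monetary value can
    # still raise expected utility. Numbers below are assumptions.

    def expected_log_utility(scenarios):
        # scenarios: list of (probability, resulting_wealth) pairs
        return sum(p * math.log(w) for p, w in scenarios)

    wealth, loss, p_loss, premium = 100_000, 90_000, 0.01, 1_200

    uninsured = expected_log_utility([(1 - p_loss, wealth),
                                      (p_loss, wealth - loss)])
    insured = expected_log_utility([(1.0, wealth - premium)])

    # Expected monetary cost of insuring (1200) exceeds the expected
    # loss (0.01 * 90000 = 900), yet:
    print(insured > uninsured)  # True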


Thanks, this was a really insightful comment (as someone who spent years of my life getting a graduate philosophy degree, before doing something more 'useful'). I think the concept of utility is clearly, uh, useful, and the reason that it's aversive to people is that they tend to bundle it up with a lot of the (sociologically, not logically) related views, which tend to be more problematic.

Hedonic utilitarianism in particular turns a lot of people off, and partly for good reason. I'm deeply ambivalent about it, and I think the surrounding debates, and the assumed primacy of moral intuition in applied cases, are far harder and more open questions than most people reckon. But I can still see how examples like utility monsters, or gang rape being morally superior to garden-variety rape because there are more people to enjoy it, might make people feel like it's really on the wrong path.


Those examples are hilariously egregious, yeah! It's slightly taboo in polite conversation to see increased utility there, yep. Thanks for the kind words, too.


No problem! And yeah, I had a moral philosophy professor who had endless examples like that, including that one. They were hilarious and so intuitively potent, I just wish I could remember more of them. He could spend a full 5-10 minutes in a lecture just retailing dozens of those ridiculous counter-examples. (It was especially funny because he was a very urbane old Oxonian professor - think Richard Dawkins for a pretty close analogue to his general mien - whom you wouldn't expect to start enthusiastically talking about gang rape.)


Philosophy professor John Holbo had a blog post about ridiculously whimsical scenarios in philosophy under the delightful title Occam's Phaser:

https://crookedtimber.org/2012/02/25/occams-phaser/

There are good examples in the comments too, though I mostly recall it for getting into a heated argument with someone making utilitarian arguments for torture.


Interesting, thanks! My position on the whole 'using moral intuitions in applied cases to disprove fundamental moral theories' is basically what I said in this thread: https://twitter.com/samziz/status/1412198411579887622

Incidentally I wouldn't agree with utilitarian arguments for torture, but not - necessarily - because I don't agree with utilitarianism. I think it's certainly possible to make higher-order or rule-utilitarian arguments against torture, within the parameters of utilitarianism.


I disagree; OP is correct. Utility is a rhetorical construct. Sometimes it works well for describing morality, decisions, etc. Sometimes it's crammed in.

Using it to describe the decision about the two sons is cramming it in.


> Utility is a rhetorical construct

It's a mathematical construct for ranking points in a topological space. With two simple axioms, comparability and transitivity, it is fairly well defined mathematically, though it typically enters the extrapolation zone at the extremes.
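
As a toy sketch of that construction (the preference ordering below is an arbitrary assumption):

    # Given comparability and transitivity over finitely many options, a
    # preference relation can be represented numerically just by sorting
    # and using ranks as "utilities". The ordering here is assumed.

    preference_order = ["stay put", "suburbs", "city"]  # worst -> best

    def utility_labels(options):
        ranked = sorted(options, key=preference_order.index)
        return {opt: rank for rank, opt in enumerate(ranked)}

    print(utility_labels(["city", "stay put", "suburbs"]))
    # {'stay put': 0, 'suburbs': 1, 'city': 2} -- and any order-preserving
    # relabelling (0/1/2, 40/80/99, ...) represents the same preferences.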


But if the definition is barely more fleshed out than "some cost function", then the differences between utility, priority, and equality as discussed in the article collapse; they're all the same thing. Cost functions, aka utility.


Cost functions are typically the dual of utility maximization under cost constraints.

You are correct in that they are usually mathematically equivalent.


Yeah, I was struggling to find the best terminology for that, but given the ambiguity of the term "utility" in this context I thought it better to avoid it ;-).


I'm certain nobody argues with you on that point. The question is: what does the fact that numbers can be ordered entail for ethics?


I’ve always heard utility in the context of a utility function. Basically:

u = Wx*x + … + Wz*z, where x through z are variables that are impacted by decisions and constraints. Each variable is weighted for importance by the person / group using the utility function.

So for a home buyer needing to get to a city, the utility of a house improves as its location relative to the city gets better, subject to the constraint that it’s not in the river. A home buyer’s utility function might also weight cost, neighbors, amenities, square footage, local pollution, safety, and any other meaningful variable for the buyer.

Turning this into a quantitative formula can be cramming it in and quite hand-wavy, but ultimately it’s up to the person optimizing for their own utility to put in the variables and weights. These will be shifted by the person’s moral code (e.g. a Jewish person may highly value living within the city’s eruv).

On a political note, big government supporters believe the federal government can define a utility function for the country that is best for the greater good. People who believe in smaller federal government and governing at the local level believe the utility functions should be defined at the individual level if possible - subject to the constraints one does not infringe on others’ rights. There are benefits to both sides (some things we can’t achieve if everyone acts independently, some things create externalities, some things have too many edge cases and unintended consequences).

I think the extreme of a shared utility function is communism, with an idea of central planning.
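
A quick sketch of that weighted-sum form; the variables, weights, and house scores are all invented for illustration:

    # Weighted-sum utility u = sum(w_i * x_i) over whatever variables the
    # buyer cares about. Every name and number here is an assumption.

    def utility(scores, weights):
        return sum(weights[k] * scores[k] for k in weights)

    weights = {"proximity": 0.5, "cost": 0.3, "square_footage": 0.2}

    house_a = {"proximity": 9, "cost": 4, "square_footage": 6}  # close, pricey
    house_b = {"proximity": 5, "cost": 8, "square_footage": 8}  # far, cheaper

    print(utility(house_a, weights), utility(house_b, weights))
    # ~6.9 vs ~6.5: this buyer's weights favor house A; a different moral
    # code or life situation (different weights) can flip the ranking.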


If the math of utility is interesting to you, check out Hal Varian's microeconomics book (he is/was chief economist at Google) or the intro grad text for microeconomics, Mas-Colell, Whinston, and Green.

Utility theory's primitives are defined before the actual function.

Social choice theory is covered in MWG -- Arrow's impossibility theorem is absolutely fascinating!

The field of mechanism design relies heavily on utility theory -- it's effectively the inverse of game theory, or, how to structure systems and incentives to get desired outcomes.


Yes, utility as a concept in decision theory is great, but it’s not the same as the concept of the same name from utilitarianism. In my understanding, coldtea is doubting not decision-theoretic utility functions, but utilities as they are used here.

Most importantly, a decision-theoretic utility function is only defined up to a positive affine transformation. Inter-agent comparisons like "Imagine that the gifted boy has a total utility of 80, and the disabled boy has a total utility of 40” don’t make any sense in terms of decision-theoretic utility functions.
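
A small sketch of that uniqueness point; the outcome labels and utility numbers are invented:

    # u and a*u + b (a > 0) rank every lottery identically, so the raw
    # "levels" carry no cross-person meaning. All values are illustrative.

    def expected_utility(lottery, u):
        return sum(p * u[outcome] for p, outcome in lottery)

    u1 = {"city": 40, "suburbs": 80}
    u2 = {k: 3 * v + 100 for k, v in u1.items()}  # positive affine rescale

    lottery_a = [(0.6, "city"), (0.4, "suburbs")]
    lottery_b = [(0.2, "city"), (0.8, "suburbs")]

    for u in (u1, u2):
        print(expected_utility(lottery_a, u) < expected_utility(lottery_b, u))
    # True, True -- the lottery ranking is unchanged under the rescaling,
    # while the levels themselves (40 vs. 220 for "city") are arbitrary,
    # so comparing one person's 80 with another's 40 is meaningless here.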


I don't think GP is arguing that utility doesn't exist. I believe the GP is arguing that the OP is making arguments as if utility weren't subjective.

If you can attach a number to decisions, you can just do the math. The thing is, attaching a number to make a non-meta argument about decision making can be bollocks, since the actual utility can be -9999999 for me or 9999999 for you. A utility function is a function of the decision-making agent.

See the "independent from your moral values and sentiments, is inane" bit


Utility can be interpreted in many ways. Look at the social sciences and how they see minorities, e.g. disabled people. From the social scientist's view, one "utility" of these people is that they stabilize societies because they trigger empathy, which would otherwise be largely missing in a society that only aims for optimization. I want to emphasize that I find it generally humiliating to talk about utility and humans in one and the same sentence.


think of it as a label for the process by which you decide to prioritize cleaning up different areas of your house -- and utility becomes a rational and humanizing thing.

using it to prioritize your relations forces you to grapple with subjective and irrational things such as personal prefs, aspirations etc. so.. also rational and humanizing.

this leaves using it to mess with others without them participating in weight-setting (democracy as a weight discovery mechanism?), that's where it gets messy.

i fail to see where any of the three facets above make it humiliating to see where 'by priority' the most relative improvement can be made. i mean, this is all just fine talk about something innate to nature, no?


If someone has a special preference for egalitarian outcomes, this should be included in their utility functions.

Telling someone their utility values for each of the choices is equivalent to telling them their preference. Asking for their preference afterwards is pointless; they have already been told their preference.


When the article talks about utility in its examples, it is not talking about some universal objective utility that all would agree on. It is talking about the utility that the person making the decision assigns which will depend on their moral values and sentiments.

> What if you don't want to help build a society that neglects the needs of disabled people because of their lesser contribution, and thus your utility function - ie. your desired goal maximization includes helping the disabled son?

Then you'd have a case where utilitarianism and egalitarianism produce the same outcome which is great when you can achieve it, but not very useful in an article that is trying to talk about when utilitarianism and egalitarianism produce conflicting outcomes.


Utilitarianism can never be in conflict with your preferences whatever they may be. This is by definition.

If the utility values would conflict with your preferences, then those were not the correct utility values to begin with.


> In the examples, it is assumed that utility == favoring gifted son

This is a misreading of the article, which assumes utility in helping either son. The point is that while a 'pure' or 'fundamental' utilitarianism would simply say one should choose the option that maximizes the total of this utility, the priority view says there may be rational reasons for using a weighted sum of the utilities, or include additional terms.

This article should be seen in the context of moral philosophy, which (naively) might be thought of as an attempt to find a rational basis for ethics, but more realistically should probably be seen as probing the extent to which one can be rational about such matters.

> The whole thing is presented as only a matter of whether you value utility or not.

That is because it is a continuation of a discussion over the utility of utilitarianism that has been going on, in some form, since antiquity, and which picked up pace after Bentham formulated his Principle of Utility [1].

There are quite often cases where one can have a somewhat objective utility function, and this comes up repeatedly in urban planning, as a project that is beneficial to the community as a whole often has a downside for some (usually those living near where the project will be sited). A purely utilitarian view almost always favors putting the burden on those who have little left to lose, and the priority view says there can be a rational basis for choosing an alternative.

Somewhat ironically, the priority view argues against what you seem to find objectionable in simple utilitarianism. Perhaps it is also worth pointing out that when utilitarianism was first proposed, it was rather radical; prior ethical notions were mostly about obeying your betters (on Earth and in Heaven.)

[1] https://human.libretexts.org/Courses/Lumen_Learning/Book%3A_...


Inane, perhaps.

I don't think these ideas can be separated from their time and place. Like most philosophical/intellectual movements, a lot of what they are is objections, dialogue and alternatives to previous ideas or competing ideas.

Today, 200-300 years later, we don't necessarily need a concrete basis for secular morality. We also don't expect morality to be reducible to a simple principle like F=ma.

To them, they were in a period where medieval theology was being replaced by secular philosophy and science. They expected morality to be solved like Newton and Galileo had solved problems in their domains. We don't expect this anymore.


I agree. I think it's far more likely that those favoring the disabled child are simply rejecting the artificial and abstract notion of utility described in the hypothetical and going with their own experience, which is that the real-world utility to the disabled child is in fact far higher. I don't think anyone could read that hypothetical (particularly in the age of the Internet) and believe that putting a gifted child in a city would be significantly harmful to them.


The city vs. suburb framing is just lazy shorthand to set up the hypothetical. If the issue is transportation time to a hospital, you could live in a suburb near a hospital.

Some cities have excellent schools and some cities have awful schools and the same for suburbs.

To make my own lazy shorthand: would you consider it significantly harmful if a gifted child is placed in a classroom where everyone else is behind grade level and the instruction is paced accordingly, vs. a classroom where instruction is paced at grade level or perhaps accelerated? If that's not enough, what if it's a rougher school where physical altercations are the norm?

Sure, these days, there's the internet, the magical cornucopia of knowledge, but it can be hard to get the motivation to use it.

All that said, my personal utility function measures a lot more utility for independence than for education and whatnot. If a better situation for the disabled child may result in more independence for them, the gifted child is just going to have to make the best of a situation that's been decided in someone else's best interest.


now you're touching on the limits of knowledge of those making choices and might be tempted to rate them. that's out of scope for the deciders at that level, the parents in that story. ... the point is that they know the situation best. and that utility lets them quantify the subjective to test-run the rationalizations going into their decisions.

e.g. "maximising sum(log(utility))" like the comment on the article said. the only thing strange here is that philosophy deals with qualitative, not just quantitative domains. thus they tell these stories. :)
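
concretely, here's how a plain sum and sum(log(utility)) can disagree, starting from the article's 80/40 levels (the option payoffs here are invented):

    import math

    # A plain sum (utilitarian) and a log-weighted sum (prioritarian)
    # over the same two options. Gains to the worse-off boy count for
    # more under the log weighting. Option payoffs are assumptions.

    options = {
        "help gifted":   (100, 40),  # +20 to the better-off boy
        "help disabled": (80, 55),   # +15 to the worse-off boy
    }

    for name, utils in options.items():
        total = sum(utils)
        priority = sum(math.log(u) for u in utils)
        print(f"{name}: sum={total}, sum(log)={priority:.3f}")

    # help gifted:   sum=140, sum(log)=8.294
    # help disabled: sum=135, sum(log)=8.389
    # The plain sum picks "help gifted"; the log weighting picks
    # "help disabled".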


True, but the parents in the story are imaginary; their decision-making is not being tested. Rather, it's the observer who is being tested. But the story is so contrary to experience that it does not do what the premise of the article suggests it does: distinguish observers who care more about equality than utility. Because the premise is flawed (i.e., that this is a valid test), it tends to moot the rest of the article drawn from that premise.


You can patch it back up by replacing utility with something specific, although at the cost of its mysterious air. Let's say that the smart kid will... cure cancer if he's in the suburbs, but become a drug kingpin if he grows up in the city.


It's just a theoretical value that philosophers use to sidestep the subjectivity of utility when making an argument. They are well aware it's subjective, but the subjectivity of utility is agreed upon and not of interest in these thought experiments.

When someone says, "if I had a million dollars, I would take a trip around the world!" you don't chastise them for not having a million dollars. Well, unless you're my mother ;P


If the “utility score” were an overall rating for quality of life, might it change your view? Whether I assign a numerical or qualitative value is (arguably) arbitrary: as a parent, I’m still calculating which actions I should take based on some scoring mechanism.


>as a parent, I’m still calculating which actions I should take based on some scoring mechanism.

The key is that you do it: it's not imposed upon you in the form of a normalized/universal scoring rule.


Which the article doesn't argue against. Instead, it assumes for the sake of argument that you've made a utility calculation whereby favoring the disabled son is the worse choice.


> What would that "utility" unit measure?

Well, that's up to you, is it not? Who else could determine what you value and to which degree?


> The very idea that there is some measurable "utility" to compare in the two cases, independent from your moral values and sentiments, is inane.

There's philosophical charity, in the sense of your ability to put aside your judgments to listen and gain knowledge. That can be measured: simply ask "You know this?" and count how many no's. There are some things we will never know that are very important, like dying or being created. From those you can get to real altruism. Caring for others is the natural state of humanity and makes humans strong, nice, and beautiful. Selfishness is unnatural, weak, hateful, and ugly. Whatever created you cared about you and helped you, gave you the capacity for joy and happiness. Sooner than you think, you will be abandoned by everything, those will be taken away, and you will be in need. It's better for you to help others than just helping yourself. Personal sacrifice is not needed. It's not okay to be hurt or be a victim. Interior motives are irrelevant. It's better if everyone in the world is cured than just you while everyone else dies.



