Hacker News
Sentenced by Algorithm (nybooks.com)
90 points by lxm on Aug 2, 2021 | 55 comments



I'm missing any discussion of the elephant in the room: on what data does a potential algorithm base its decision? Who puts in that data?

If you know how the algorithm works (and this one must be open source), you can game it by manipulating which data it gets to see (or not) and how it is specifically worded.

Can the algorithm generate and ask questions by itself? Again, that would be pointless if you know the algorithm - you would know the questions beforehand and exactly how a given answer influences the outcome.

This idea can never work until you have a strong general AI (which you can no longer predict, and which may have its own biases), and we are as far from a strong general AI as we have ever been.


> If you know how the algorithm works (and this one must be open source)

Hah, no chance the source of any of these things will ever be public.


Well, just throw the constitution in the bin then (any democratic constitution, really). Might be fine for dictators though, getting rid of those unreliable judges.


It has been shown many times that if you know the human making the decisions, you can game it: anything from having a preexisting relationship with them, to being famous, to dependence (e.g. they're your tenant, or your landlord), to planting ideas, ... Most of these are not illegal, or even possible to avoid.

And then, of course, there's racism, nepotism and outright bribery.

Humans, under the best of circumstances, are known to be unfair. Can an algorithm really be that much worse?


You miss the point. We don't have a computer process that establishes factors such as the level of remorse expressed by the offender, their claimed motive, whether their misinterpretation of the relevant law is plausible, the magnitude of the harm caused to the victim, or whether the plaintiff's claimed monetary losses are plausible. So the "algorithm" is a thin veneer over a human determining which factors are and aren't relevant... you get exactly the same level of bias as before, plus noise from the human judge making these determinations without knowing exactly how the algorithm works, plus a false claim of greater impartiality.

It's probably even worse if you try to pull the human out of the loop altogether and throw an ML black box at the testimony, which doesn't understand the actual situation the defendant describes at all but does pick up the statistical association between African American vernacular words used and higher sentences in its training corpus. Much easier to train a model that picks up the prejudice in sentencing factors than the situational nuance...
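A minimal sketch of what that failure mode can look like (a hypothetical setup with a tiny synthetic corpus, not any real system): regress sentence length directly on testimony text, and the model reproduces whatever word-level correlations sit in the biased training data.

    # Hypothetical illustration with synthetic data: a text model regressed on
    # past testimony and past sentence lengths. If certain dialect words
    # co-occurred with harsher sentences in the training corpus, the model
    # reproduces that association without understanding the situation described.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import Ridge

    # Tiny made-up "corpus": (testimony snippet, sentence in months)
    train = [
        ("I was finna leave when the police arrived", 36),
        ("He was trippin and I just wanted to go home", 30),
        ("I attempted to leave when the police arrived", 12),
        ("He was agitated and I just wanted to go home", 10),
    ]
    texts, months = zip(*train)

    vec = CountVectorizer()
    X = vec.fit_transform(texts)
    model = Ridge(alpha=1.0).fit(X, months)

    # Two factually identical accounts, phrased in different dialects:
    test = ["I was finna go home", "I attempted to go home"]
    print(model.predict(vec.transform(test)))
    # The first phrasing tends to get a longer predicted sentence purely
    # because of word statistics in the (biased) training corpus.

Nothing in that sketch knows anything about the case; it is just bag-of-words regression, which is exactly why it soaks up the prejudice baked into the labels.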


I think the point is that we are currently using opaque, poorly trained and erratic ML (Meat Learning) algorithms. Oh sure the meatbox says that it's allocating N years for this, M years for that, but empirical studies have proven that this readout is nonsense.

Which kind of ML is easier to train to be non-racist? I don't know, but it's not obvious to me that meat is the winner here.


Exactly this. The Washington Post had an article some time ago about software used in sentencing/parole that consistently judged black defendants more harshly (it recommended longer sentences than for white defendants for the exact same crime, and recommended parole at a lower rate for black prisoners). The software was proprietary and there was no access to the code, but even if it were perfectly written, it learned from the data fed to it.

The software came to the conclusion that race (black vs white) was a predictor of crime and recidivism. How did it come to this conclusion? Because of the data:

- Black people get arrested at a higher rate than white people

- Black people were also more likely to re-offend than white people

So the conclusion the program came to makes sense. But it totally ignores the external factors that lead to the two statistics above:

- Black neighborhoods are more heavily and more aggressively policed, meaning that you will uncover more crimes.

- Black people are targeted by police (black and white people consume drugs at the same rate, but black people are more likely to get stopped, more likely to get searched and more likely to get arrested)

- Over their lifetime, black people are more likely to have more contact with police (even if they live in an all-white neighborhood). All it takes is for them to be committing a minor infraction (having weed on them, etc.) during one of those encounters. That conviction then becomes a justification for elevating their 'risk' level.

- Black ex-convicts have a harder time getting jobs due to inherent bias (all ex-convicts have a hard time, but black ex-convicts have a much harder time). This closes off avenues to gainful employment, making turning back to crime one of the few options available to them. And once again, black ex-convicts have more contact with police than white ex-convicts.

Sans context, the input data can be an incredibly effective way of propagating bias into the model.
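Here is a toy sketch of that mechanism (entirely synthetic data and made-up feature names, not the actual COMPAS model or its inputs): even with race dropped from the features, the bias comes through via a proxy such as recorded arrests, which is itself a product of uneven policing.

    # Hypothetical illustration with synthetic data: race/group is never given
    # to the model, yet "prior arrests" is correlated with group because one
    # group is policed more heavily. The model still scores that group as
    # higher risk.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Protected attribute we will NOT feed to the model (0 or 1)
    group = rng.integers(0, 2, n)

    # Underlying reoffending behaviour is identical for both groups...
    reoffends = rng.random(n) < 0.30

    # ...but group 1 is policed more heavily, so it accumulates more recorded
    # arrests for the same behaviour (a biased proxy feature).
    prior_arrests = rng.poisson(lam=1.0 + 1.5 * group + 1.0 * reoffends)

    X = prior_arrests.reshape(-1, 1)   # note: race/group is not a feature
    model = LogisticRegression().fit(X, reoffends)

    risk = model.predict_proba(X)[:, 1]
    print("mean predicted risk, group 0:", risk[group == 0].mean())
    print("mean predicted risk, group 1:", risk[group == 1].mean())
    # Group 1 gets systematically higher risk scores even though "race" was
    # removed: the bias rides in on the proxy feature.

So "just remove the race column" doesn't fix anything; the model recovers the bias from whatever correlated features are left.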

Even strong AI will not solve this without some corrections/mitigating strategies. The most effective strategy would be solving the policing problem (bias, over-policing, and how we close off all avenues of rehabilitation to people who have been convicted of victimless crimes).


Is sentencing the bottleneck in the justice system? Is it something that is so tedious to do that we'd rather have a machine do it?

Or is it that we want to eliminate bias, so we let the algorithm do it? Because machines are cold calculators that only care about the data.

Or is it shifting the responsibility? If a judge gives a light sentence to a person who ends up committing a crime again, does the judge get the blame? Is the algorithm an externalizer that dilutes the responsibility? "Well the machine said he was clean, it knows a thing or two we don't know. Plus it's a black box I'm just an ol judge. I don't know how email works, but it works all the same."


> "Well the machine said he was clean, it knows a thing or two we don't know. Plus it's a black box I'm just an ol judge. I don't know how email works, but it works all the same."

Sounds like that, plus the usual bias laundering, which “AI” is helpful for.

AFAIK a sentencing hearing usually takes minutes, maybe an hour if the parties don’t agree and there are lots of victims or witnesses and the judge wants to hear from them. Only in rather exceptional cases does the judge need to adjourn for sentencing.


Really, my thoughts on the topic remain as always: stop letting the people in power pass the buck to the algorithm. That is the entire purpose of these algorithms as implemented in the real world, as opposed to some pie-in-the-sky theory - letting them escape responsibility.

I will also note that algorithmic sentencing existed before computers, in the form of sentencing guidelines.


It seems the appropriate way to judge this approach is in comparison to the baseline referenced at the start: humans making “black box” judgement calls that are subject to the individual judge’s biases. If the computer programmed decision making were (or already is?) transparent, then we could critique it and work to ensure something resembling fair is encoded.


"Computer programmed decision making" is anything but transparent currently. Just look at the example of Northpointe, Inc.'s COMPAS ("Correctional Offender Management Profiling for Alternative Sanctions") tool, which purports to measure recidivism risk. Even under threat of lawsuit, Northpointe refused to reveal the underlying source because it is a "protected trade secret."[1]

Are judges biased? Of course, and that is not acceptable. However, the answer is not to surrender our system of justice to unaccountable commercial actors.

1. https://www.uclalawreview.org/injustice-ex-machina-predictiv...


“Compas” was also the name of the test used by Wisconsin community colleges to assess students.

It was legally required that all community college students take it, even if they were only taking a single course.

It seemed like a really corrupt arrangement. Want to take a language class to complete a high school requirement? You have to pay $100 to a for profit company to assess your math skills.

Seems community colleges no longer use a test branded as “Compas”.

Edit: it was “Compass” and discontinued in 2015 over concerns about its accuracy.

https://www.insidehighered.com/news/2015/06/18/act-drops-pop...


How would you define 'fair'? I've a few examples, gathered from being alive and hearing what other people think for a few decades.

1. It's fair to punish someone else if you let another off the hook

2. It's fair to treat someone more harshly if you don't like what they've done

3. It's fair to turn the other cheek because they're a friend

4. It's fair to treat someone differently because of where they came from or their skin colour

5. It's fair to give someone a pass because they did you a favour

6. It's fair to be unfair because you owe someone a favour

7. It's unfair to be a victim who doesn't get justice

8. It's unfair to be an innocent person who is prosecuted for someone else's crime

9. It's unfair to not get what you want

10. It's fair to get what you want

You will find a variation of all of these examples across the world right now.

The point is, fairness is loaded with bias and therefore any algorithm that tries to deal with 'fairness' is going to inherit the bias of those who designed it.


You make really good points. Life is bizarre and human culture is extremely contradictory and difficult to quantify.

And what's worse, our definition of 'fair' may be advancing, just like culture is constantly advancing.

African Americans may have been treated poorly by the algorithm back in the 1700s, when they were considered less than a full person by the legal system, for example. And maybe in the future drug offenses may be considered not a big deal.

They call the Constitution a living document.

This algorithm may have to be a living algorithm.

What if it was open source and constantly updated and reviewed to reach consensus?

There would probably also have to be human appeal processes.

I definitely think this is intriguing as a first-level sentencing determination though.


> African Americans may have been treated poorly by the algorithm back in the 1700s

Fucking hell, man.


I completely fail to see what the issue is with what OP said, care to explain?

It is completely accurate: algorithms are built by people who live immersed in their own temporary and geographically atomized cultures. Just look at modern, supposedly ""unbiased"" algorithms used to ""prevent crime""; it is just racial biases baked in.


For context, the issue was, as many posters said, this:

> African Americans may have been treated poorly

On my part, I'm British. "Fucking hell, man!" for me is a way of being exasperated; I was surprised reading that. It's not a condemnation, it's surprise.

And a phrase containing 'may have' is begging for equivocation, especially when referring to black people as 'African Americans' while talking about their treatment.

> Black people may have been treated poorly

^ this is the sentence that I read


I am super confused as well. Lol.

This website is really strange.


I suspect the "may have" is what set people off.


I see how the parent's wording is confusing. IMO, they weren't saying African Americans "may have been treated poorly" in the past (I suspect this is quite uncontentious), they were saying that even this algorithm, now, applied back in the 1700s may not have eliminated bias. Instead, it may have simply applied the prevailing biases emotionlessly. It's hard to see the inequities of your system when you participate in it.


Thanks for letting me know.

So strange.

The 'algorithm' is the noun object of the preposition.

Not society or history.

Is modern English education really that poor?

Or is there purposeful misinterpretation in order to be offended?

I just don't get it.


When I read it, as a native speaker, I read "treated poorly by the algorithm" as being a term for the behavior of culture and systems of the time, and not a hypothetical algorithm from the future that took as input the culture of the time and made decisions about fairness.

I think it was the use of "the algorithm" and not "an algorithm" that most affected my reading there - "poorly by an algorithm" comes across, to me, as "poorly by some algorithm", rather than "poorly by _the_ algorithm", implying a specific instance of a thing, already known by the reader or referred back to later, doing it.

Consider "treated poorly by the system" versus "treated poorly by such a system", which has an even clearer connotation, to me.


My first reading as a native speaker was that "the algorithm" meant "the Man". But after several readings, I think he meant that this algorithm would categorize someone who was legally considered less than a citizen as a more likely recidivist. Which is an interesting point. Particularly in the US, where there's a notion of illegal immigrants having already broken the law. Would this algorithm consider them to be more likely to be repeat offenders if it were weighted to treat noncitizenship as a factor, or overstaying a visa as a crime? And if so, then why wouldn't it weight slaves as more likely to reoffend? And if it did that, what would stop it from weighting the descendants of slaves the same way?


I thought the US was somewhat unusual in its lax treatment of illegal immigration. However, lying to or sneaking past border officials to enter a country is literally a crime, yes (certainly for foreign nationals, and probably not kosher for citizens to do either). Though entering on a visitor visa and overstaying is just a civil violation with a civil penalty (deportation). If a citizen had a record of using false identity documents or trying to evade legally-enforced border crossings, a fair algorithm would presumably have to take that into account just as well.


Interesting example of how context can affect the sentencing and seriousness of the crime!


Well, if you enter as a legal visitor through a border checkpoint, you give the govt the opportunity to do security checks, restrict imports of various things, and charge customs taxes. Plus they have an idea you are there. It makes sense that sidestepping some or all of that screening and record-keeping will be treated more harshly.


Yes! Exactly. If humans wrote an algorithm to determine sentencing of crimes, it's possible that biases and prejudices of the time would be included in the algorithm, because they don't realize or don't want to admit that their current ways are unfair.

Much like the constitution did!

But the constitution can be amended to be more fair, and so the abstract sentencing algorithm, whatever it may be, would have to be amendable as well.

I hope that statement's not racist!


> African Americans may have been treated poorly by the algorithm back in the 1700s

Tell me about how your algorithm enslaved people.


If you're asking if I took a time machine back to the 1700s to use an imaginary algorithm to enslave people, I assure you, good sir, I did not.

This is indeed one of the potential problems with inventing an algorithm to sentence people, as I mentioned before: it would be written to match the biases of the time period.


Your outrage radar is a little too sensitive, pal.


You don't see any problem with the "may have"?


I think what OP meant is "if the algorithm were designed (or perhaps implemented) in the 1700s according to standards of fairness held by those in power at that time, then it might have treated African Americans unfairly (despite being an algorithm)". I'm not a mind reader but that seems to make some sense given the context of the discussion.


You don't have to go back to the 1700s to find African Americans being treated poorly by the judicial system.

It's a weird shifting of the sands that isn't necessary to the conversation, when the context is a subject that is causing real harm now.


That supports my point even more.

If certain groups of people are being treated poorly in modern times then an algorithm that sentences people would possibly include that bias as well.

Is that statement offensive to you?


> If certain groups of people are being treated poorly in modern times then an algorithm that sentences people would possibly include that bias as well.

I’d even change the word “would” to “will” as that has already occurred time and again.

I think you two are violently agreeing with one another, but misunderstanding one another as well.


Come on. Plantation-era slavery was great! Free housing, free food, a lot of exercise, no Bay Area rent prices.

What's the big deal?


[flagged]


?


?


I can actually see a use case in earmarking court decisions for review that are far off from what may have been expected.

Handing over the decision altogether, however, is like handing justice over to a theatrical magician ("The Great Fully Automated Barnum Court"): it's commercially incentivized, you may not know the trick, because it's a trade secret, and it's not the real thing, either. (But, exactly because of this, people will fall for it.) Also, judgement is very much about discerning correlation from causation, so correlation-based data may not be the best foundation, to begin with.


This reminded me of something I read, and I can't recall whether it was in Argentina under the junta, or in North Korea, or in the USSR... but the gist was that people sentenced for minor crimes or for irritating the state were essentially assigned random sentences and punishments, divorced from, or even absurdly light or harsh relative to, their supposed crimes, as a way of terrorizing the population.

Or maybe I'm thinking of a Kafka story, or Dostoevsky's mock execution. Or 1984. Shit. I've lost it.

[edit] Might have been from the Killing Fields. Wow, this is really dredging up a lot of wonderful historical references...


Don't forget the film Brazil, where a bug-induced misprinted warrant leads to an obviously wrong person being arrested and then interrogated to death, because how could the machine be wrong?


"Something resembling fair", i.e. unfair? We already have that.

Not that the idea of codifying fairness is new; a certain guy named Hammurabi started the trend a while ago.

The result of humanity's collective attempt to create a Transparent Fairness Flowchart still leaves much to be desired.

Replacing codex with code will, however, streamline the process by entirely removing the (already meager) accountability for bad decisions, as well as making it effectively impossible to correct them:

"Computer says no" [1]

[1] https://youtu.be/0n_Ty_72Qds


You can't automate justice. Minority Report explained it perfectly, yet the decision makers, out of laziness, choose to ignore this.


Decision makers don't watch movies and don't read HN.


Actually, the headline would be more technically correct written as "sentenced by algorithm executing on backdoored remotely controlled computer", as that is the case today.

Enjoy your dystopia of greed, fraud, and injustice.


Isn't the court system itself flawed already? How can one person win a case by hiring a "better lawyer"? It means the other party can be proven wrong if you have enough money. Which means the court system does not depend on facts as much as on how lawyers twist words to fit the law.



So you're saying that computer algorithms are classifying based on correlations they've found in the data?


If the model starts having "racial bias", then the answer is to remove the "race" and "skin color" parameters and open-source it, not to make propaganda against machine learning because it has biases. Of course it has biases; you know what else has biases? Humans, who will outright lie about their biases (a feat machines have yet to manage) to divert attention for personal agendas.


Combine this with UBI and a monitoring collar (cellphone). Running a society becomes ridiculously easy. SimCity.

It might not be a free society per se, but it might be a very happy one.

(I'm getting that friendly "you are posting too much, wait an unspecified period of time before we'll allow you to post again." thing. It discourages trolls but it also enforces mediocrity and conformity. Just saying.)


Happy? People sentenced by a judge or jury have a hope of appeal. People sentenced by Google don’t have any hope.


Yeah, but we would have a lot more free time. Life would be simpler.

And as long as you keep your behavior within the statistical sweet-zone the justicebots will leave you alone. (Much like social media, actually)


Look up chilling effects. This would be literally a kind of enforced conservatism.

Imagine this was in place when Galileo was being sentenced. How many Socrateses do you want to prevent? And so on.

Society can be wrong and mindlessly enforcing status quo is often immoral.


Yes. As I said. It might not be a free society, per se.



