Hacker News

As someone who spends almost all of my productive time on earth trying to solve problems via algorithms, this paper is the kind of take that should get someone fired. God, I forget how much stupid shit academics can get away with writing. Right from the abstract, this is hot garbage.

> algorithms even though (2) algorithms generally outperform people (in forecasting accuracy and/or optimal decision-making in furtherance of a specified goal).

Bullshit. "Algorithm" means any mechanical method, and while some of those outperform humans, we are nowhere near the point where this is true generally, even if we steelman the claim by restricting it to the class of algorithms that institutions have actually deployed to replace human decision-makers.

If you want an explanation for "algorithm aversion", I have a really simple one: most proposed and implemented algorithms are bad. I get it. The few good ones are basically the fucking holy grail of statistics and computer science, and they have changed the world. Institutions are really eager to deploy algorithms because they make decisions easier, even when those decisions are being made poorly. Also, as other commenters point out, putting a decision in the hands of an algorithm usually means no one can question, change, be held accountable for, or sometimes even understand it. And most forms of algorithmic decision-making deployed where the average person can see them have been designed explicitly to do bigoted shit.

> Algorithm aversion also has "softer" forms, as when people prefer human forecasters or decision-makers to algorithms in the abstract, without having clear evidence about comparative performance.

Every performance metric is an oversimplification made for the convenience of researchers. Worse, it's not a matter of law or policy that's publicly accountable, even when the resulting algorithm is deployed in exactly that context (and certainly not when it's deployed by a corporate institution). At best, to the person downstream of the decision, it's an esoteric detail in a whitepaper written by someone who is modeling them as a spherical cow in their fancy equations. Performance metrics are even more gameable and unaccountable than the algorithms they produce.
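To make that concrete, here's a toy sketch (hypothetical numbers, plain Python) of how a headline metric like accuracy can look great while the model is useless to the person downstream:

```python
# Toy example: a "model" that learns nothing still scores 95% accuracy
# on a dataset where 95% of outcomes are the majority class.
labels = [0] * 95 + [1] * 5   # say, loan defaults: the positive class is rare
predictions = [0] * 100       # always predict "no default", ignore the input

# Headline metric: fraction of predictions that match the label.
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
print(accuracy)  # 0.95 -- looks great on paper, catches zero actual defaults
```

Whoever picked "accuracy" as the target decided, on everyone else's behalf, that missing every single rare case is fine, and nobody downstream gets a vote on that.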

> Algorithm aversion is a product of diverse mechanisms, including (1) a desire for agency; (2) a negative moral or emotional reaction to judgment by algorithms;

In other words, because they are rational adults.

> (3) a belief that certain human experts have unique knowledge, unlikely to be held or used by algorithms;

You have to believe this to believe the algorithms should work in the first place. Algorithms are tools built and used by human experts. Automation just hides that expert behind at least two layers of abstraction (usually a machine and an institution).

> (4) ignorance about why algorithms perform well; and

Again, this ignorance is a feature, not a bug, of automated decision-making in practice, with essentially no exceptions.

> (5) asymmetrical forgiveness, or a larger negative reaction to algorithmic error than to human error.

You should never "forgive" an algorithm for making an error. Forgiveness is part of negotiation, and negotiation only works on things you can negotiate with. If a human makes a mistake and I can talk to them about it, I can at least try to fix the problem. If you want me to forgive an algorithm, give me the ability to reprogram it, or fuck off with this anthropomorphizing nonsense.

> An understanding of the various mechanisms provides some clues about how to overcome algorithm aversion, and also of its boundary conditions.

I don't want to solve this problem. Laypeople should be, on balance, more skeptical of the outputs of computer algorithms than they currently are. "Algorithm aversion" is sane behavior in any context where you can't audit the algorithm. Like, the institutions deploying these tools are the ones we should hold accountable for their results, and zero institutions doing so have earned the trust in their methodology that this paper seems to want.



