
How is it the opposite direction with computers? Whenever you write a program, you are trying, as much as possible, to fulfill what your user wishes.



Is it what your user wishes, or what you wish your user to see?


I'm not sure that's exactly true, and I'm not sure of its applicability to the article.

The author's point, in my opinion, isn't that it disregards human feelings; it's that we implicitly trust and idolize machines without thinking about the biases and uses behind a lot of the technology we rely on. I actually feel the examples the author uses are not great, a bit showy, and mostly a repeat of their initial anecdote, but let's consider some technologies that carry the same premises the author comments on yet see a lot of misuse by humans.

- KPI tracking tools. Useful, of course, but what a lot of people miss is the I in KPI, which stands for __indicator__. Numbers don't lie, but the people reading the numbers do, often unintentionally. Consider the classic example of a company struggling with customer satisfaction (CSAT) that looks only at KPIs: the entire team is in the green except CSAT, yet satisfaction keeps getting worse. Management scratches their heads because the numbers seem contradictory, failing to realize that the support team has optimized for the closed-cases KPI, which rewards closing cases by any means instead of solving issues correctly. (A toy sketch of this follows the list.)

- Human detection mechanisms that fail to detect people with darker skin tones (this was frequently an issue even as recently as the 2010s, when the tech was getting a ton of attention); the researchers didn't consider, or didn't bother to test with, darker skin tones.

- Speech-to-text that cannot handle accents/dialects or doesn't work for non-English speech at all. I ran into this just today talking with a colleague who is a non-native English speaker: as I gushed over the M1 Mac dictation tool being pretty damn good, she reminded me that it's because I'm a native speaker, and that the dictation is not very good at handling non-native speakers. Their accents may be parsable by humans, but the dictation AI isn't really trained to handle them.

- Edit: adding this one because it hits close to home. Call-time monitoring that punishes non-native speakers or people with speech impediments because they need to ask additional questions or simply take longer to convey the same point. At one of my workplaces, where I oversaw support, I lost a number of good engineers to management's KPI obsession because of such nonsense. They were demonstrably better in every technical metric than their native-English-speaking peers, but the call KPIs showed them to be inefficient. Rather than analyze the actual content and capabilities of the engineers, when layoffs came, the KPI was the only metric considered, and we lost some very good tech people needlessly. I did not stay with that organization for long, as I was disgusted by such practices and my concerns fell on deaf ears.
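
To make the first bullet concrete, here is a minimal Python sketch with entirely made-up ticket data (the field names and numbers are illustrative, not from the article): the closure KPI everyone optimizes looks green while the indicators it was supposed to stand in for quietly fall apart.

    # Toy support-ticket data: closing fast and solving correctly are not the same thing.
    tickets = [
        {"closed": True,  "csat": 2,    "reopened": True},
        {"closed": True,  "csat": 3,    "reopened": False},
        {"closed": True,  "csat": 1,    "reopened": True},
        {"closed": True,  "csat": 2,    "reopened": True},
        {"closed": False, "csat": None, "reopened": False},
    ]

    closure_rate = sum(t["closed"] for t in tickets) / len(tickets)
    rated = [t["csat"] for t in tickets if t["csat"] is not None]
    avg_csat = sum(rated) / len(rated)
    reopen_rate = sum(t["reopened"] for t in tickets) / len(tickets)

    print(f"closure rate: {closure_rate:.0%}")  # 80% -- the KPI the team optimizes for
    print(f"avg CSAT:     {avg_csat:.1f}/5")    # 2.0 -- the indicator nobody reads
    print(f"reopen rate:  {reopen_rate:.0%}")   # 60% -- the cost of closing by any means

A dashboard that surfaces only the first number reports a healthy team; the other two tell the story that the management in the example never looked at.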

The list can go on, and yes, implicitly the machines are working exactly as designed, and human failings go into the misinterpretations. The problem is that the machines are hailed as the ultimate objective, functional implementation, an authoritative statement on whatever condition the machine is programmed to check, and this is exactly how they're marketed and even how their creators think of them.

Add in the continuing obsession with minimum viable products to get to market, and you end up with these implicit biases baked into products from the beginning, which can have very nasty side effects.

That, I believe, is the concern of the article: it's not just that machines doing exactly what they're programmed to do sometimes produces bad outcomes, but that we put a lot of faith and trust into these machines and models without questioning them.

I'm responding to this post because I don't think it's safe to assume that when you program, you try to fulfill the user's wishes as much as possible. WebDev is a perfect example, filled with tons of "dev-friendly" features that users loathe but that are better for the dev and look better for the company. Or just talk with your colleagues from time to time and ask how many shortcuts they've taken that are a pain for the users but ultimately easier to write and let them ship sooner. Hell, I'm guilty of this, so I'm not without sin.

My takeaway from the article is that we need to avoid an idealized understanding of programming and avoid concepts like "Code is Law", because that fails to recognize the biases we write into our code and systems. Don't take "biases" as political, either; think about the simple use cases you miss and never consider when writing a program or modeling your data structures, innocently enough in most cases, but now they're baked into the program because refactoring is too expensive due to tech debt.

A program and a hungry/scared mountain lion might indeed both just be acting as they're programmed; no one is trying to convince me to trust the mountain lion though.


> we implicitly trust and idolize the machines

I think it's a straw man. Nobody I know idolizes machines. It's abundantly clear that software has its limitations and that machines are only as good as they are programmed to be.

The examples you are giving are usually treated as bugs. Nobody in their right mind would say that somebody is not a person because they weren't detected by some image recognition software. Everyone would just agree that the software sucks at recognition and should be fixed.

I'm not saying that programs always achieve the goal of fulfilling users' goals, but this is the general direction in which programs are developed. Even if you are driven exclusively by greed, usually the best way to sell your program is to make it useful.

And yes, you can give examples where the best way of monetizing your program incentivizes doing something that is not user-friendly, but ultimately, if your program is not useful at its core, it will not be used, so making it useful is your first priority.


I will grant that it's a broad generalization, but I don't think it's a straw man. The implicit sales pitch of a lot of software is that it removes the overhead of research and quantitative analysis and provides hard data. If that data is not questioned and checked, then there is implicit trust.

Yes, I would agree the above are bugs, but the point is more about:

1. How did the bug get there in the first place?

2. What is the time to resolve these bugs, if they ever are resolved?

3. A program can be used for things it was never intended for, with harmful effects. This is the basis of exploits, where software doing exactly what it was designed to do has an unintended effect (sketched below). It's not about universal usefulness; it's about who the program actually helps.
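
To put a toy example behind point 3, here is a deliberately contrived Python sketch (the table, names, and data are invented): the function does exactly what it was written to do, which is precisely what makes it exploitable.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
    conn.execute("INSERT INTO users VALUES ('alice', 0), ('root', 1)")

    def find_user(name):
        # Vulnerable on purpose: the input is pasted straight into the SQL text,
        # exactly as the (careless) design intended.
        query = f"SELECT name, is_admin FROM users WHERE name = '{name}'"
        return conn.execute(query).fetchall()

    print(find_user("alice"))                  # intended use:   [('alice', 0)]
    print(find_user("x' OR is_admin = 1 --"))  # unintended use: [('root', 1)]

Nothing here malfunctions; the query builder behaves exactly as specified. The harm comes from a use case the author of find_user never considered, which is the distinction being drawn above.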




