The Hidden Costs of Automated Thinking (newyorker.com)
166 points by laurex on July 25, 2019 | 59 comments



The article's lead example, modafinil, is erroneous.

Modafinil works by at least three mechanisms. Part of the complexity is that we don't yet fully understand how and what causes sleep, or why it is so necessary that without it you will surely die. So untangling which of modafinil's mechanisms does what is difficult. That said, possibly the greatest contribution to our understanding of the mechanism governing the drive towards sleep has come from the insights modafinil gives us here:

As you go about your day, adenosine (yes, the nucleoside of the DNA base adenine) accumulates in the extracellular space in the brain. These levels reach a peak before you sleep; sleeping allows re-uptake and clearance of that extracellular adenosine. Theories have proposed that levels of extracellular adenosine contribute to the drive to sleep/fatigue.

Caffeine binds as an antagonist to the adenosine receptor, which is where its stimulant effect comes from.

Modafinil promotes re-uptake of adenosine, so there is less extracellular adenosine binding adenosine receptors (and presumably producing the opposite of stimulation), a function normally carried out at night.
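(If it helps, here's a toy way to picture that accumulate-while-awake, clear-while-asleep dynamic; this is just my own sketch and the rates are made up, not physiological values.)

    def adenosine_trace(hours_awake=16, hours_asleep=8, rise=1.0, clearance=0.4):
        """Toy model: extracellular adenosine rises while awake, is cleared during sleep."""
        level, trace = 0.0, []
        for hour in range(hours_awake + hours_asleep):
            awake = hour < hours_awake
            if awake:
                level += rise                 # accumulates during wakefulness
            else:
                level *= (1 - clearance)      # re-uptake/clearance during sleep
            trace.append((hour + 1, "awake" if awake else "asleep", round(level, 2)))
        return trace

    for hour, state, level in adenosine_trace():
        print(f"hour {hour:2d} ({state}): adenosine ~ {level}")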

There are at least two other known mechanisms, at least as of when I last did an enormous deep dive on modafinil and read every published paper during med school.


So how should modafinil and caffeine intake be leveraged to best work together?


That's a deep question. I don't think they mix well, and this isn't my area of specialisation. I generally try to avoid receptor mashing, so I stick to one thing unless work absolutely demands it (I also think it's very important to have a wash-out period where you are clean; we really don't know what the long-term effects are, though modafinil has been used since the early 90s and doesn't seem outright harmful). Caffeine has a short half-life (depending on your genetic profile), said to be 4-5 hours online, but I don't find this to be true for me (I'm a fast metaboliser according to 23andMe); whereas modafinil's half-life is around 17 hours. Take from this what you will.
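For intuition about what those half-lives mean, here's a back-of-the-envelope decay calculation (my own sketch; ~5 h and ~17 h are just the figures quoted above, and real pharmacokinetics are messier than a single exponential):

    def fraction_remaining(hours_elapsed, half_life_hours):
        """Fraction of a single dose still circulating, assuming simple exponential decay."""
        return 0.5 ** (hours_elapsed / half_life_hours)

    # Half-lives as quoted above: ~5 h for caffeine, ~17 h for modafinil.
    for hours in (4, 8, 12, 16, 24):
        caffeine = fraction_remaining(hours, 5)
        modafinil = fraction_remaining(hours, 17)
        print(f"after {hours:2d} h: caffeine ~{caffeine:.0%} left, modafinil ~{modafinil:.0%} left")

By that rough math, most of a morning modafinil dose is still on board at bedtime, which is part of why stacking caffeine on top gets messy.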

My honest opinion is that you should try to balance your life, sleep well and exercise, because everything else is just a crutch, and crutches shouldn't be used long term.


The most important phrase of the entire piece:

“A world of knowledge without understanding becomes a world without discernible cause and effect...”


I've already started snarkily saying that machine learning experts are people who believe that their degree in statistics allows them to ignore the correlation causation fallacy.

To put that point another way: it's commonly known that shark attacks are rare. But that's partly because we know where sharks are. We don't swim at beaches where sharks can easily swim, and rarely far enough out for them to be prowling. When swimming in deep water, the risk of a shark attack is comparable to the risks of sports like cycling [1]. Sharks aren't going to kill you, but if we didn't have institutional knowledge of where they attack (and continued watersports at the rate we currently do), you'd likely know someone who died from one.

[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3941575/


It sounds to me like you're assuming the reason people don't swim in deep water is because there might be sharks there?


One of the reasons that few people swim in deep water is because there are some number of sharks there.

Most problems reach equilibria. Some problems have very strong S-curve effects. "How deep is the water" versus "probability of dying of a shark attack" follows a sigmoid curve: as soon as you go into water deep enough for sharks to actually swim, the danger goes from "unheard of, if not impossible" to "potentially life-threatening but within acceptable risk territory". Once out there, it keeps going up.
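(To make the S-curve idea concrete, here's a toy sketch; the depth midpoint and steepness are numbers I made up, not real attack statistics.)

    import math

    def shark_risk(depth_m, midpoint=3.0, steepness=2.0):
        """Toy logistic (sigmoid) risk curve: near zero in the shallows, a rapid
        rise once the water is deep enough for sharks, then a plateau."""
        return 1.0 / (1.0 + math.exp(-steepness * (depth_m - midpoint)))

    for depth in (0.5, 1, 2, 3, 4, 6, 10):
        print(f"depth {depth:4.1f} m -> relative risk {shark_risk(depth):.3f}")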

People don't swim in deep water because deep water is not generally accessible. One of the reasons that deep water is not accessible is that deep water also contains sharks. These have reached an equilibrium.

Second example of this fallacy:

"Helicopter parents" is a commonly cited problem [1]. Parents are overprotective of children. The rise of helicopter parenting has coincided with a reduction in child mortality rate from non-natural causes. Wouldn't that indicate that helicopter parenting is working? The answer ranges from "No" to "Maybe". Children killed by car accidents correlates with a decline on overall accident rate - But much of that is driven by initiatives like MADD, which has worked to improve general auto safety as the most effective way of protecting children. These factors can be compared to stoichiometric math - They are trending towards equilibriums, but probably have not yet reached them, and as other reactions happen around them the setpoints they are trending to change.

[1] https://www.washingtonpost.com/news/wonk/wp/2015/04/14/there...


> One of the reasons that few people swim in deep water is because there are some number of sharks there.

I can't help but think that even if there were no sharks at all, the number of people swimming in deep water would not increase that much, because I find it likely that the other causes of people not swimming in deep water have a much greater effect on how many people actually do.

Also, there are people now who swim in deep water specifically to swim with sharks.


You're kind of missing the point though. Correlation is enough to get results. Worrying about the edge cases where correlation falls apart is simply worrying about insufficient training data.


And what ML system in practice will not have some insufficiencies in the training data? The wonderful thing about ML is we won't even know the problems unless we extensively experimentally test the resulting model!


What kind of results do you expect from blindly using correlation? Building, say, a credit-rating model with this kind of mentality will just result in a racially biased model.


Why the hypothetical? Valid uses are already implemented. Salient points on a face do not cause a face, but you can still get very good facial recognition.

It's a parlor trick perhaps, but it's silly to ignore the results.

As for building a biased system, again it's just about the data you use, and you could easily build an "intelligent" system that had a racial bias. We should worry about automation that simply perpetuates the status quo, but I don't think that's a problem specific to statistics-based ML.


I suppose it will result in a model with all sorts of biases among which a racial bias will be one.


Ha! Superb! That neatly describes the state of politics in the USA and the UK at the moment....


There is a lot of wisdom in that article.

> It’s possible to discover what works without knowing why it works, and then to put that insight to use immediately, assuming that the underlying mechanism will be figured out later.

> Our accounting could reflect the fact that not all intellectual debt is equally problematic.

We often talk about the technical debt we accumulate when creating something new. But what can sometimes be worse is when one forges ahead without proper understanding, extrapolating from familiar but possibly inapplicable precedents.

The complexity of modern tech makes it all but impossible for even most of us here to fully understand how our computers work - a very different situation from the time Woz was able to hold all moving parts of the Apple design in his head.

We have to rely on stuff we don't understand, and it can be an emotional challenge, with different individuals reacting very differently. Avoidance leads to crawling speed and attempts to reinvent the wheel when the simplest sufficient solution calls for a four-wheel drive. Deliberately closing one's eyes is also not uncommon: believing in the magic of the solution with no regard to the laws of nature. Overly broad deep dives, studies and investigations are another anti-pattern. As is being overly conservative: my current project's technical debt is way too much code, because we did not fully understand the framework we built upon and feared to make the required small innovations.

Finding the right balance here as an individual or finding common ground as a team is challenging.


We have relied on a lot of things without understanding how they work since the dawn of time. They are reliable because they passed the test of time, handed down through many generations. On the other hand, our understanding of their inner workings changes: the will of the gods, aether, thermodynamics, quantum physics.

Nassim Taleb covers this idea in Antifragile.


Glad you mentioned that book, as it was relevant and eye-opening to read.


> It’s possible to discover what works without knowing why it works, and then to put that insight to use immediately, assuming that the underlying mechanism will be figured out later.

Asbestos was a remarkable flame-retardant material. However, putting it to use as widely as we have was probably a mistake.


That's a bit of a different thing. In that case, the problem was not a lack of understanding of how asbestos works (it's not like people got sick from excessive fire resistance). We just didn't consider that particles would be inhaled, and what would happen when they were.

The asbestos sort of danger is much more insidious and omnipresent. You have to not only know that your use is safe, but also consider everything that could happen to the material down to the molecular level, and whether all the consequences of all those things are safe.


I am surprised the author didn’t use one of the most salient examples: navigation. I used to consider myself quite good at navigation because I took the time to explore my city. Now, I blindly follow instructions until I am lost when my phone dies.


I'm super glad for navigation apps in Los Angeles. In fact, the main reason I finally adopted a smartphone was because of how frequently a different route can save you a half hour of waiting in traffic (or more).

But I'm even more glad that I learned to navigate LA via map and memory well before I came to rely on GPS.

Lately I've been trying to keep that fresh by taking a moment to mentally walk through the navigation instructions before I start traveling somewhere. Having the overview in your head -- even if you forget some of it -- makes a world of difference between building a geographic model and just following instructions.

Somewhere in here there's probably a lesson or two about the difference between augmentation and automation, but it's not one I've teased out well enough to articulate better than this.


Yeah, one of the things I think about a lot is supportive versus controlling technology. Google Maps, etc, seem uncomfortably in the middle to me. Yes, I told it where I want to go. But after that, I'm basically a peripheral device to Google for the duration of the trip, and it doesn't help my geographic sense much.

I recently went to Amsterdam. Before I left, I got Google Maps directions for various places I was going and then used Google Earth to "walk" the routes. After 10 or 15 minutes I built up enough of a sense of landmarks and general layout that I felt pretty well oriented when I arrived.

That was cumbersome to do, but I'd love to see GPS tools move in that direction of supporting not just my current goal, but my long-term independence from needing turn-by-turn directions for everything.


Couldn't you look at the road and landmarks while you drive, to learn the geography?


No, because the standard interface shows me a postage-stamp sized understanding of the world. It's sufficient to follow the directions, but not useful for learning the broader context in the same way that one has to do with physical maps.


I felt way smarter when I had to navigate by a paper map and plan things out. It's almost like I've lost that navigational skill. From time to time I will buy a map though just to exercise the brain circuits.


Don't blindly follow. I keep my GPS in north-up, and I always zoom out to see the intended route. Also, GPSes tend to take you through extra turns a 'native' driver of the area probably wouldn't take: save 30 seconds over 0.1 miles by taking three turns through this neighborhood, when in reality it was just as quick to continue straight and turn right at the major intersection.

You really only need to know how to get to a major thoroughfare, how those thoroughfares connect to the vicinity of your destination, and then the steps from the thoroughfare to your destination.


It's not quite the same, because navigation systems aren't typically black boxes to their creators. Someone down the chain understands how they work.


I used to drive to different sites as part of my work early in my career. I'd have given my right arm for satnav, it would have saved so much aggravation.



Really, there is something the author missed: just because we have a theory doesn't mean it is right. Amusingly, all of the flaws attributed to AI technically apply to natural intelligence as well.

I think this isn't truly about artificial intelligence, but about the long-standing dance between fundamental theory and practice across eras.


It's hard to know which science fiction story to quote here; The Machine Stops is the obvious choice, but more people may be familiar with the empire that Asimov's Foundation supplanted, who valued layers-removed "analysis" over original research.


The Machine Stops, thank you!


What's hidden about it? Every time you call a bank, government agency, etc., you have to step through a mountain of IVRs and frequently have to talk to a time-wasting "AI" just to get to a human, who still does the bulk of the non-trivial work and who can actually help you.

It should be obvious that we have and are continuing to push a lot of half baked AI into industry and government processes and that it's frustrating users no end.


I'm not sure it's specific to AI, I've always had to negotiate past front line support to get to someone who knows what they are talking about. The world is built for average, not for you.


I'm completely average, not above, nor below (at least in my own personal assessment - others may differ).

The difference is the front line personnel can sense if you're agitated and they need to escalate your case quickly. I've heard saying "operator" repeatedly down the line may help in this particular case. A human operator will normally react. AI/ML/Expert Systems etc don't care.

Plenty of "average" people have complained to me about this use of their time, and that it is not good value for money, especially given the extravagant fees or taxes they may be paying these particular organizations.


The real benefit will come once everyone’s phone number is tagged with a dollar value, so if you’re a low value person, you get sent to the back of the line, perhaps even being dropped as a customer because the system can calculate how much you cost to support versus how much revenue you bring, and if it’s negative, then there’s a reason to not do business with you.

This has always happened; people who knew people or had much higher net worth obviously got better service. But when it can be automated and applied to everyone in a granular fashion, it really brings home where you are in the hierarchy and removes any veneer of being equal in society. This is also already implemented at various types of businesses via their "rewards" programs, where you're tiered into different support based on how much you spend or can spend, and get routed to different support agents accordingly.


I personally don't think everyone should be treated equally by support.

I've been left waiting at the counter at an auto parts store while the store personnel left me to address the needs of the local auto mechanic's shop (who undoubtedly buys 1000x the parts I buy in a given year). The parts store should prioritize the mechanic over me, because they're a much more valuable customer to the parts store. It doesn't do me any good to have the parts store fold because they served everyone exactly in sequence.

The USPS undoubtedly has specific support lines and discounted rates for high volume mailers while I as a retail customer pay retail prices and get crappy retail support. Amazon and UPS should get much better service from USPS than I do.

Comcast Business customers probably (hopefully) get better service than $99/mo “TriplePlay” residential customers.

People with startups here probably prioritize engaged, paying customers over free users.

Airline Elite or Diamond or Private customers spending $25K or more every year should get better service than the buyer of a once-a-year $250 trans-continental ticket. When an airline experiences a disruption, they're going to re-accommodate their First Class and Elite frequent fliers first.

That's just good business, every bit as much as when I walk into the local diner and the server brings me a coffee prepared as I normally order it before taking someone else's order.


I agree, but the problem for a buyer comes when the number of sellers is so few (2 or 3 in many markets), that they effectively have no options. You can easily be blacklisted nationwide (perhaps deservedly, perhaps not), but there is no recourse, and no transparency.

It's also psychologically different when you know an organization is tracking every single person and how much future potential revenue they can bring versus how much they cost to support and price discriminating accordingly. It's obviously the optimal thing to do as a business, but as a society, the idea that we're equal and should be treated as such is also an important feeling. I can see possible discord in society from having that socioeconomic tiering be so blatant and in your face, but perhaps it's inevitable in a world where the gap between the haves and have nots keeps getting bigger and bigger.

I don't go to theme parks much, but I did a couple years ago as an adult for the first time since I was a kid, and I have to say it felt weird to see all of the different tiers for the queues for the rides. You pay $x, you wait in longer line, you pay $x+$20, you get to skip to halfway through the line, you pay $x+$50, you get to skip to front of line. It really brings to the forefront how un-valuable your time may be compared to someone who can afford to spend more, and I know the world has always worked like that, but I wonder how it feels to the kid who can see it happening in front of them.


Of course. And in a well functioning society, you'd be able to move to a competitor if you were treated badly enough.

But of course, that won't happen for much longer, because we love our monopolies (yay FAANG or utility companies etc.)


Some of it is just an unintentional consequence of the benefits of technology and its ability to scale leaving no other option (other than to legally forbid it, but then another country might allow it and take advantage, etc, etc).

Consumers can and have benefited from economies of scale greatly that the big companies can provide in the short term, but in the long term it can be costly as they exercise their monopoly power, but who can accurately put a price on this so that people choose wisely today.


My personal theory is that most of those systems aren't AI. They are overseas operators who listen to your choice and press a button to direct you to the next menu.


Those systems sound more like traditional “expert systems” and not really the more threatening ML.


Well they are for now. Wait until they get upgraded prematurely too! I should add that "expert systems" were widely touted in promotional materials as being "AI" at the time they were popular, much as "ML" is today.

Yes we're closer, but we are still a million miles away from AGI. Or unregulated self driving cars. Or flying ones.


I like this article, but am a bit disappointed they didn't mention another reason not understanding the system is bad: possibly-unknown biases baked into the data set. For example, a machine learning system designed to guess who is a criminal that is trained on existing police arrest data, say, would likely conclude that being poor or black is an indicator of criminality somewhere within the model. But we can only find that out by experimentation, we can't see how it works. There's a huge risk in this type of bias laundering, when suddenly it's not “the biased police think”, it's “the computer thinks”.


Is it "bias" if it's true? The issue is that we want to override statistics and distort the analysis for ideological reasons, but that's not a fault of the algoryhm, it's a feature request.


> Is it "bias" if it's true?

I think it's quite common that the quantity you're interested in isn't observable, so you need to proxy it with something. The GP's example is "Person X is likely to commit a crime". If that could be estimated reliably, it would be extremely useful for allocating governmental resources like policing and education.

The problem is that "Person X is likely to commit a crime" isn't observable, so a careless researcher might proxy it with "Person X is likely to be convicted of a crime". The latter is actually very different from the former, since it includes factors like a defendant's ability to hire a good lawyer, existing police presence around Person X's neighbourhood, and government priorities on which crimes to prosecute (think crack vs. cocaine in the 80s).

Any good social scientist or economist will be aware of all of this. But once you bake it into a model that doesn't explain itself, you have a mess on your hands. Especially if the model gets more credence than it deserves by people who don't understand it.
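A tiny simulation of that proxy problem (my own toy, all numbers invented): two groups offend at the same rate, but one is policed more heavily, so a model trained on conviction labels "learns" a difference that isn't in the underlying behaviour.

    import random

    random.seed(0)
    TRUE_OFFENSE_RATE = 0.05            # identical for both groups (the unobservable quantity)
    ENFORCEMENT = {"A": 0.2, "B": 0.6}  # group B is policed/prosecuted more heavily

    def observed_conviction_rate(group, n=100_000):
        convictions = 0
        for _ in range(n):
            offended = random.random() < TRUE_OFFENSE_RATE
            convicted = offended and random.random() < ENFORCEMENT[group]
            convictions += convicted
        return convictions / n

    for group in ("A", "B"):
        print(f"group {group}: true offense rate {TRUE_OFFENSE_RATE:.1%}, "
              f"observed conviction rate {observed_conviction_rate(group):.2%}")
    # A model trained on conviction labels will score group B as "riskier"
    # even though the underlying offense rates are identical.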


Often, yes. Because the statistics themselves can be distorted. For example, consider any of the openly racist police forces in the Jim Crow south. Any naive system based on their data would mirror the racism of their practices.

And that's ignoring more complicated feedback loops. Since colonial times, American whites have often used their dominance to keep black people impoverished. [1] Poverty and crime are correlated. Wealth is correlated with getting away with crime. So if a system looks at crime statistics without considering that history, it would be easy to perpetuate the ugly parts of it.

[1] See Kendi's "Stamped from the Beginning" for the colonial-era laws and practices, and Loewen's "Sundown Towns" for the Nadir up through suburbanization.


ML algorithms per se are in theory neutral on this subject, sure, but a trained model acquires and can amplify the biases of its source data-set.

You say “ideological”, but it is also sound science. Machine learning models of this kind are based on observational data and thus find correlations without knowing anything about causation, and so you need to put in a lot of effort to find and avoid problematic correlations. Of course, then the magic disappears, because an ML algorithm can't actually tell you if someone is a criminal.


We know, based on statements from ex-agents within the DEA, that the DEA didn't go after drug use in white suburban communities because it would have been politically untenable.

The incarceration patterns therefore reflect this political bias.


Though I think your statement is likely overall true, in a sub-thread about biased conclusions from data, it's important to observe that from the statements of those ex-agents, all you can conclude is that parts of the DEA didn't go after drug use in white suburbia...


> Is it "bias" if it's true?

Yes. Statistics are for populations, not for individuals.

Using a statistical correlation as the basis for an individual decision is inherently biased, but is something that is seen all too often unfortunately.


That's correct, but we're talking about an essential feature request.

If the algorithm is going to insist on treating all black people like criminals because their crime rate is higher, then it's a bad algorithm and needs to be fixed before shipping, or scrapped altogether.


While I think it is a meaningful question to ask, I had a really hard time getting very deep into the article. At any one point in time, at least 30% of the page is covered by an advertisement. Some of the time nearly 100% of the page is an advertisement. No doubt the page layout and the advertisements themselves were decided upon by some form of automated thinking (in the case of layout, CSS; in the case of the advertisements, something closer to what we think of when we say AI).

The author raises a concern about automated thinking. But the article is displayed in a scenario that relies on a plethora of automated thinking. Let me guess: he's only using the good automated thinking, while the other automated thinking that affects him is the bad kind?

Ultimately, humans use tools, and we push really hard to develop better and better tools. Maybe the tools other people are using concern you (and maybe that's a good thing to be concerned about ... after all, shoe stores used to x-ray feet), but progress continues and those that don't adapt are left behind.

Maybe he's got a point, but I also don't see the New Yorker abandoning its website and reverting to paper. How bad can the automated thinking really be?

Honestly, I'm not sure I have any major refutations or solutions concerning the article. It's just that I keep bumping into people making odd statements about how they're really worried about the computers or the internet, and they're making these statements on Facebook. Seeing someone concerned about AI on an ad-supported website makes me wonder if they appreciate how much of their existence is already dependent on AI.


I particularly liked the point about how much worse this gets when the output from one (or more) ML system is used as the input to another.

Not something that had occurred to me before.


There were reports a few years ago that Google Translate had gotten very bad and the speculation was that the training input data was culled from Internet sites that themselves had used Google Translate to do translation.
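That kind of feedback loop is easy to caricature (this is just my own toy, nothing to do with how Google Translate actually works): a "model" repeatedly re-trained on its own noisy output drifts away from the ground truth because it treats the previous generation's errors as truth.

    import random

    random.seed(1)
    ground_truth = 1.0
    estimate = ground_truth

    # Each generation "trains" on the previous generation's noisy output instead of
    # fresh real data, so errors accumulate as a random walk rather than cancelling.
    for generation in range(20):
        training_data = [estimate + random.gauss(0, 0.1) for _ in range(10)]
        estimate = sum(training_data) / len(training_data)
        print(f"generation {generation:2d}: estimate {estimate:.3f} "
              f"(drift {abs(estimate - ground_truth):.3f})")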


"The blind leading the blind"


The end result of AI systems that make decisions that cannot be explained would be to instantiate these automated practices into de facto law.

At some point, if a decision process is widely adopted yet cannot explain itself, law becomes arbitrary. Ends justify means - ipso de facto. Power becomes unaccountable.

That's the best reason I know to demand accountability (and explicability) from any system or authority, AI or not.


Not working in the field might put you in "not in field" debt. AI will let people concentrate on other things.


I do think there is a risk that in the future we rely on AI so much that we no longer need people asking the questions that matter, and go down the road so vividly painted in "The Machine Stops" (1909), wherein a certain character looks at the Aegean peninsula, the birthplace of the concept of the idea, and, in disgust, laments "no ideas here".





