Language records reveal a surge of cognitive distortions in recent decades (pnas.org)
283 points by jbotz on July 25, 2021 | 148 comments



If you look up at the sky and say "oh, it's sunny today", you are overgeneralizing because you haven't seen the rest of the city. If you had worn masks or started being cautious before COVID was widely acknowledged, you were catastrophizing. If you ever talk or act based on a mental model you have of a friend, you are mindreading.

"Cognitive distortions" are the only tools we have to reason about anything in the presence of limited information (which is basically always). Its basically a toolbox to let you discredit any thought whatsoever, which is convenient when a patient writes down negative thoughts and the psychiatrist can just hand them a list. But it would work just as well on positive thoughts or any thought whatsoever.


The phrases that the study treats as overgeneralizing are:

all of the time, all of them, all the time, always happens, always like, happens every time, completely, no one ever, nobody ever, every single one of them, every single one of you, I always, you always, he always, she always, they always, I am always, you are always, he is always, she is always, they are always

It's not always overgeneralizing to use one of these. "Every single one of them wore black" is (potentially) completely factual. A large increase in their use in books, however, suggests that authors are overgeneralizing more than they did previously.

Phrase list: https://www.pnas.org/content/pnas/suppl/2021/07/22/210206111...
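
For concreteness, here is roughly what counting such phrases looks like (a toy sketch in Python, not the paper's actual pipeline; the phrase subset and the normalization are illustrative):

    # Count candidate "overgeneralization" phrases per 10,000 words.
    OVERGENERALIZATION = [
        "all of the time", "all the time", "always happens", "no one ever",
        "every single one of them", "i always", "you always", "they always",
    ]

    def phrase_rate(text, phrases=OVERGENERALIZATION):
        """Matched-phrase occurrences per 10,000 words (crude substring match)."""
        lowered = text.lower()
        hits = sum(lowered.count(p) for p in phrases)
        return 10_000 * hits / max(len(lowered.split()), 1)

    print(phrase_rate("They always do this. Every single one of them wore black."))  # ~1818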


I call this talking in superlatives. It probably isn't exactly the same thing the article is talking about, but it is related. Young people today in particular seem to prefer talking (and thinking?) in black and white. I think it is the effect of two things:

* Media which try to sell rather mundane news as sensational (exaggeration)

* Management literature which for the last 30 years has tried hard to push people to their limits: walk the extra mile, strive for the best and the greatest, mediocrity is evil.


Folks who overgeneralize like that are incredibly easy (and unsatisfying, due to the predictability of a negative response) to troll. Merely be pedantic, provide a counterexample, or suggest a thought experiment in which their generalization may not be true.


Well, who is trolling whom - if their overgeneralization prompts you to correct them or provide a counterexample, maybe you're already feeding the troll.

Was I trolled into pedantry by your comment?


It’s trolls all the way down. This is why forms of trolling that aren’t blatant flaming get under people’s skin: it’s taking normal good-faith but ignorant user behavior and sending it into the uncanny valley. One troll shutting down a whole community is more often the case of the community having a pre-existing affinity to develop a digital autoimmune disease: once the users start treating good-faith users as potential trolls, the community turns irredeemably toxic.


>>Merely be pedantic, provide a counter example, or suggest a thought experiment in which their generalization may not be true.

It solves one problem but walks right into another. Your counterexample would be seen as so-called sealioning[1], "concern trolling", or one form of the ever-expanding nebulousness of "gaslighting". How does one defuse obstinate, hypocritical crusaders while proving one's own point? The solution doesn't seem to be making a well-made statement. Disagreeing is considered making a "bad-faith" argument. The answer isn't pointing out their fallacies and thought-terminating clichés. You'll be accused of co-opting and dog-whistling. The only solution I've seen is to do or say as one will as though others don't exist. A lesson learned in one of Aesop's fables, The Miller, His Son and His Ass.

[1]https://wondermark.com/1k62/


Or that the meaning of these phrases has changed over time.


From the abstract:

> This pattern does not seem to be driven by changes in word meaning, publishing and writing standards, or the Google Books sample


Word. In fact, most of these phrases, to me at least, imply sarcasm.

To say "social science studies are bullshit" implies a more generalized claim to me than "social science studies are always bullshit"

People use overt generalization words like these, in my experience, to indicate generalization with obvious exception.

This study is selecting for precisely the wrong thing, imo, or is not properly interpreting what they've selected.


I'd like to see their analysis of the phrasing on a forum such as this, where one would have to outline the myriad of corner cases and exceptions lest we get speared by the pedantic knights of the internet.


My mom does this a lot. Daily, in fact. Has been doing so for decades. She's not self-aware enough to do any better. I've tried. She doesn't care. She's not a particularly reasonable, rational person.


Are you overgeneralizing her as someone who is always irrational?


Nope. Unlike you I know her.


No one is particularly rational. Reasonable is maybe 50/50 at best. We can strive though.


As an exception I must certainly agree. People not only aren't being very rational in general, they also really suck at being irrational. Most people are closer to animals than humans.


This is a common theme in these kinds of analyses (they're even used in economics!), but often the rate of false positives is low enough not to matter much.


You can find these exaggerations in Shakespeare and Jane Austen.

The authors need to understand the difference between rhetoric and factual reporting.

It's not unusual to say "This is the worst thing in the world" to make a point without believing that it literally is absolutely the worst thing in the entire history of human experience.


Alternatively, none of those bad things are happening to me, so this is literally the worst thing in the world.


> "Cognitive distortions" are the only tools we have to reason about anything in the presence of limited information

No, clearly they aren't. But you know what does qualify as a cognitive distortion? That very statement[1]. The "only" way to reason about "anything" based on limited evidence? Really? I mean, Kalman and Bayes would maybe like to have words.

[1] I can see a few, but I'll go with "dichotomous" as the biggest mistake you made. You leapt straight from "Sometimes these mental tools produce correct results" (which is true) to "These tools are the only way to produce correct results" (a ridiculous distortion).


One of the results that stood out to me was that (a) dichotomous thinking was the most notable distortion in Germany during Nazism and (b) they (and the rest of us) are back to peak levels.


I'm talking about reasoning in day-to-day life. It's cool that you are able to use Bayesian models & Kalman filters to decide whether you should try diet X, or whether someone is lying, or if crypto is a sound investment, or whether to take an umbrella with you when going out today. This is not even scratching the surface of the kinds of nuanced & personal problems that people go to therapists for.

I'm a little dumber, I just overgeneralize to "fad diet X is probably not that great" and skip the hours-long scientific paper review & construction of Bayesian models and priors.

Here is a recent tweet[1] claiming this paper is badly flawed because of changes in the content of the Google Books data this study was based on. Make sure to update your Bayesian models accordingly! Should I lower my prior of "this recent scientific study is trustworthy so I should take it at face value" by 1%, or 5%, or 50%?

[1] https://twitter.com/benmschmidt/status/1419497587296571395
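
For what it's worth, the prior update itself is one line of arithmetic; it's the likelihoods that are pure guesswork. A toy sketch (all numbers made up):

    # Toy Bayes update: prior that the study's conclusion holds, updated on
    # the evidence "a critique of the corpus composition appeared".
    def update(prior, p_evidence_if_true, p_evidence_if_false):
        num = p_evidence_if_true * prior
        return num / (num + p_evidence_if_false * (1 - prior))

    prior = 0.7                          # hypothetical prior that the finding is sound
    posterior = update(prior, 0.3, 0.6)  # critique assumed twice as likely if it's unsound
    print(round(posterior, 2))           # ~0.54: belief drops, but not by any fixed 1%/5%/50%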


> It's cool that you are able to use bayesian models & kalman filters to decide whether you should try diet X, or whether someone is lying, or if crypto is a sound investment, or whether to take an umbrella with you when going out today.

This part isn't dichotomous, it's just a strawman (something that I'm guessing would go into the "overgeneralizing" bucket in the linked article).

I didn't say you had to use fancy math to live your life. I replied to your statement that one had to use distorted reasoning, and cited the existence of two named techniques (about reasoning from limited evidence) as a fun way of making the point.

My point wasn't about what you said anyway, it was that you were shouting and flaming about it using exactly the techniques the linked article was noting are on the rise. You see that, right?


I wasn't flaming anyone in my original post. Really just expressing some of my skepticism about cognitive behavioral techniques. Personally I find them difficult to apply in real life, since I believe the technique is strong enough to discredit anything but the strongest scientific claim. The flaming, if any, was only in response to your reply, so you may be indulging in overgeneralization and mind-reading.


It sounds like you are more of a Markov kinda person.

For me, analogies are like an easy crutch that can get you that last mile.


You leapt straight from "Sometimes these mental tools produce correct results" (which is true) to "These tools are the only way to produce correct results" (a ridiculous distortion).

Those aren't mutually exclusive. Something can do something only sometimes, and also be the only way to do that thing. If he/she is wrong, it should be easy to prove their claim incorrect, though, by thinking of a counterexample.


No, that's not how 'cognitive distortions' work. Think of the process of mental modelling as a hugely complex input-output function. CDS are specific patterns of mental output that have been associated with changes to the thinking apparatus that we associate with certain psychological disorders.

These are not simple bugs. In fact we know today that working on changing these thinking patterns can in many cases help alleviate the underlying disorders (that's the C in CBT)!

To come back to your example: Being cautious is not catastrophising. Feeling a stress response or even a panic attack or anxiety because of pathogens is (we are taking a broad definition of catastrophising here).

Also beware of the fallacy that positive thoughts are the opposite of negative thoughts - they aren't. They are distinct factors that share some correlation.


> But it would work just as well on positive thoughts

Have you heard of toxic positivity? There's a growing movement against the incessant positivity that has become endemic to a lot of online discourse, in particular on Instagram (def not on Twitter).

Nothing ever sucks, it’s a challenge! Nobody is sick, they’re a fighter! Nobody is a mean prick, you just need to understand their perspective! Nothing you do is ever stupid, people are just haters!


> Have you heard of toxic positivity?

I suspect it's a byproduct of the "Enlightenment Now" movement, which asserts that humans have never had it so good as they do now. Worldwide poverty is the lowest ever, violence is at its least, etc. There's a minor issue of growing inequality and impending climate disasters, but that's all OK and will work out well. Its torchbearer is Steven Pinker and he's ably supported by, unsurprisingly, the richest 0.1%.


He's also supported by people who believe in empiricism. If it feels like everything is terrible, I suggest watching less news.


You are correct that it is a toolbox. You are also correct that the tools could be used inappropriately. Neither of those truths invalidates the actual tools. Every tool in the world can be used either correctly or incorrectly.


"Oh it's sunny today" doesn't imply anything about the area it is supposedly describing, so it's not an overgeneralization in any way.

"If you had worn masks or started being cautious before COVID" you wouldn't be catastrophizing, because the identifying symbol of catastrophizing is concluding with high confidence a catastrophe from limited or contradictory evidence. We had plenty of evidence for pandemics (and most epidemiologists saw this coming for decades).

I believe you might be confusing cognitive distortions with heuristics (which are similar, but not the same).

Cognitive distortions correspond to irrational thought patterns or lines of reasoning.

If you have limited information, for example, you're not supposed to just jump to a conclusion (except in a matter which requires an immediate response), and this is often seen in many mental health disorders where, for example, a person may jump to conclusions such as "everyone hates me" or "I'm screwed for life" from isolated events which predict no such overarching conclusion.

Those kinds of cognitive distortions are not the only tools we have to reason about anything in the presence of limited information; we can often do much better, which is why this is used in CBT.


There’s nuance in almost everything. When writing or speaking you have to generalize, or you’ll just end up rambling.

So when someone says something, you can almost always say they're being too general and point out some obscure exception. It's better to just read every statement as implicitly ending with "… in most cases".


This ignores the whole argument about the proportionality of such rhetorical tools in discourse.


> If you look up at the sky and say "oh, it's sunny today", you are overgeneralizing because you haven't seen the rest of the city.

Oh? You have seen the sky over the rest of the city. You can see a long way, in the sky.


> You can see a long way, in the sky.

...at least when it's sunny, that is.


5 years ago my therapist handed me a sheet of paper titled "Checklist of Cognitive Distortions." Ten negative modes of thought. I didn't immediately grasp how ingrained each one was in my mindset and habits - it took years and I'm still learning.

And one thing I've learned is that there are broad incentives to embed cognitive distortions in the news, stories, and attitudes which comprise culture. Telling people they are victims of more powerful forces beyond their individual control.

Victimhood mentality is the worst mindset a person can have. I wouldn't wish it on my worst enemies.


> Telling people they are victims of more powerful forces beyond their individual control.

First, being a victim of forces beyond their control is true for almost everyone.

Secondly, it's probably more healthy for people to not understand or believe it.


As with so many things, the answer probably lies in the middle. Any of us could be killed by lightning at any time, but 99.99% of us have the power to avoid being struck by a train.

Most outcomes, good and bad, lie between those extremes.


> First, being a victim of forces beyond their control is true for almost everyone.

I think this misunderstands the crucial difference here. The phenomenon described is not about power dynamics, it is about storytelling: many don't even care to investigate whether there really are powers at fault for their suffering. In fact it is quite the opposite: because they feel special as individuals, they cannot stand the idea that some powerful force damages them as mere collateral, because this would mean admitting to being small and insignificant. At least I'd say this is the conclusion they would typically reach after looking, based on facts, into the real causal relationships between those powers and their lives.

However, it might be much more comforting to tell yourself a story in which "they" target you or your people specifically and intentionally. The more evil you can paint "them" the better, because it makes your cause more noble and heroic.

Telling yourself these kinds of stories might be comforting, but it cements the role you are playing within that society: forever a sucker, a plaything of forces you cannot comprehend, easy to manipulate, gullible, afraid, angry. Healthy as long as you don't march to Russia like my Nazi grandfather did when he was 16.


I find it a bit discouraging that the scientific consensus on what's a proper or wrong belief is entirely subject to cultural circumstances local in time and place.

Displace any individual considered healthy about 5 decades back or forward and they'll appear unstable. Make it 5 centuries and they'll basically appear fully psychotic (and not just because they just travelled through time).



The one thing that you have learned is false.

Things have gone well for me but in my longer life I've seen many many people who were legitimately "victims of more powerful forces beyond their individual control".

In America, the zipcode of your birth determines your future income far more than your performance in school, for example.

Just on my Facebook yesterday, someone I'd known for twenty years died of cancer, leaving two young kids. She'd led a normal life without any bad habits like tobacco or alcohol and it was simply bad luck - powerful forces beyond her control.

Compassionate people understand that the real world is difficult and uncertain even for dedicated and hard-working people. The whole "victim mentality" is almost always a symptom of other far-right beliefs that would be less palatable if spoken out loud.


Control over our circumstances is always by degree. In other words, it's not binary.


The data is quite interesting, but the hypothesis (imo) is far from the only plausible explanation. I'm not saying this to disparage the work, just to frame.

"Individuals with depression are prone to maladaptive patterns of thinking, known as cognitive distortions, whereby they think about themselves, the world, and the future in overly negative and inaccurate ways. These distortions are associated with marked changes in an individual’s mood, behavior, and language. We hypothesize that societies can undergo similar changes in their collective psychology that are reflected in historical records of language use. Here, we investigate the prevalence of textual markers of cognitive distortions in over 14 million books..."

Interesting result: https://www.pnas.org/content/pnas/118/30/e2102061118/F4.larg...

So... the finding is that language patterns typically associated with depression have rapidly become common in book language. Interesting. The interpretation is up for debate, I suppose. Maybe it's just that writers are more depressed.

Financial events are labeled, but they don't seem to have impacted the data much. Internet usage, OTOH, seems (at a glance) highly correlated with whatever they're measuring. Maybe online culture moved language in this direction with no real relationship to depression. Maybe the internet made people more depressed. Maybe the internet made writers more depressed. Maybe some complicated knot of those. I.e., the internet popularized maladaptive language, which has made us all more depressed.

In any case, assuming the methodology is reasonable, it does look like they've found something here. Worth a discussion.


> Maybe it's just that writers are more depressed.

That doesn't disagree with the hypothesis that it's a society-wide phenomenon.


I'd be tempted to say the language patterns could be the most interesting thing. They're something that is more widely and objectively measurable than "depression". Starting with the language patterns and seeing what they're most strongly associated with might be interesting.


This is a very simple analysis. Here's the phrase list: [1] It's quite short. It's clear how they get the numbers, but not clear what, if anything, they indicate.

A big problem is that they only count one side.

"Fortune-telling: Making predictions, usually negative ones, about the future" - counts the phrases: "I will not, we will not, you will not, they will not, it will not, that will not, he will not, she will not".

One would expect that they'd also count "I will", "we will", etc. and show a ratio. But no.

This is measuring something, but what?

[1] https://www.pnas.org/content/pnas/suppl/2021/07/22/210206111...
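
Something like this is what I'd expect (a rough sketch, not the paper's code; the phrase sets are illustrative):

    import re

    # Negative future-tense phrases and their positive counterparts.
    # The positive patterns also match the negative ones, so the share is neg/total.
    NEG = [r"\bI will not\b", r"\bwe will not\b", r"\bthey will not\b"]
    POS = [r"\bI will\b", r"\bwe will\b", r"\bthey will\b"]

    def negative_share(text):
        neg = sum(len(re.findall(p, text, re.IGNORECASE)) for p in NEG)
        total = sum(len(re.findall(p, text, re.IGNORECASE)) for p in POS)
        return neg / total if total else 0.0

    print(negative_share("I will go. We will not stop. They will not listen."))  # 2/3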


Not to mention that "I will get cancer" is a positive statement under that rubric, and "I will not have any problems" is negative. Shocker, you kind of need the rest of the sentence to extract meaning!


> One would expect that they'd also count "I will", "we will", etc. and show a ratio. But no.

No - what you want to measure is the absolute number of occurrences, not the ratio, the reason being that "making negative predictions" is not the opposite of "making positive predictions". Similar to the fact that negative and positive affect aren't opposites but distinct factors (that share some correlation though!).

'Common sense' can easily lead you astray here. It's a good example of something that looks simple, but really isn't.


Does this study consider the modern ease of getting a work to be publicly available? Publishing, in some form, is so much easier now than it was 20 years ago. Is it possible that publishers of the past just said, "no, we don't want so much depressing stuff."?


Now they have the analytical tools given to them by the tech industry to say "Yes, we need more of this depressing stuff because it's most profitable"

To make matters worse, journalism is no longer a career that provides a path to a respectable middle to upper-middle-class life, so you have an entire profession whose members generally have depressing prospects in life.

Want to make society content? Keep journalists content.


From the abstract:

> This pattern does not seem to be driven by changes in word meaning, publishing and writing standards, or the Google Books sample


Looking at the graphics, it seems that the start of the pattern is closer to Reaganomics than to Facebook for the English-speaking world. For German and Spanish it started way later, closer to social media.

So, maybe in the USA it started early and then it spread to other parts of the world through social media. One way or another, it seems important to study the reasons for and the effects of such a change of language.


One thing to consider is that cable TV with many channels proliferated 1-2 decades earlier in the US than most other countries. We had 40+ channels and the resultant social splintering of interests much earlier as a result. It wasn't until satellite TV became a thing that many parts of the world could get more than a handful of over the air channels.


The first big spike is labeled 1999; that's probably Y2K.

Then came 9/11, then MySpace launched in 2003.

Reaganomics was the early 1980s, not sure how you're seeing it as closer to the spike than social media.


Social media was invented by US industry, which had itself been mutated by Reaganomics.

Interesting.


An interesting effect. It's just that the naming could somehow be better. It feels weird to call a cognitive _distortion_ some collective zeitgeist that apparently correctly identifies a situation when shit has really hit the fan.

I mean, why would it be a 'distortion' to be somewhat fatalistic about the current climate change trend, which may actually lead to civilizational collapse and an extinction event?


I see your point and I used to think this too to an extent, but I think it's worth appreciating that we are looking from the outside into a highly specialized domain, and like most such domains it has developed its own unique set of language and definitions. And that paper is written for people in that domain / community, not for those outside of it.


Exactly. This article comes with the typical psychiatrist bias that depression is indication of some personal deficit (cognitive distortions or faulty brain chemistry) when in my personal view, given the state of the world there are many good reasons to be depressed.


And there weren’t plenty of reasons in the past? Do you read history at all?


Sure I do. As another comment pointed out, the trend roughly tracks the decoupling of wages from productivity in North America. Seems as good a reason for disillusionment as one could find.


> the trend roughly tracks the decoupling of wages from productivity in North America. seems as good of a reason for disillusionment as one could find.

How about famine, war, epidemics wiping out a third of the population, totalitarian oppression, etc.?


Are you aware that humanity developed nuclear weapons, and then proceeded to build tens of thousands of them, in the middle of the 20th century?


A very nicely done and interesting study on a very relevant topic, with worrying results. Thank you for sharing this! I found the following quote particularly interesting:

"It is suggestive that the timing of the US surge in CDS prevalence coincides with the late 1970s when wages stopped tracking increasing work productivity."


> It is suggestive that the timing of the US surge in CDS prevalence coincides with the late 1970s when wages stopped tracking increasing work productivity.

I don't agree with the authors' summary of their own graphs. If you look at the graphs, it's more like "cognitive distortions hit an all-time low around 1980, began slowly inching back up from 1980-2000, then accelerated more rapidly upwards after 2000, leading to a clear trend break by 2005 or so."

I've also heard conflicting info about this and would like a more definitive analysis if anyone knows of one. I've seen people claim that total comp, including especially health care benefits, did not stagnate and continued to track productivity.


It feels quite possible to me that, despite total comp tracking productivity, wages ceasing to track work productivity could still cause increased anxiety due to a lack of choice in how to spend that fraction of compensation.

Health Care as a job benefit ensures that that fraction of compensation is always spent on health services, at minimum. In many ways it is more of a subsidy to health insurance companies and health care providers.


Summary of the study based on the comments I've read so far: "People are saying 'I am a...' a lot more than they used to, which proves that the world is going to shit because most people haven't been paid what they're entitled to for their whole lives"... this "study" seems like absolute trash science, someone's opinions and narrow interpretations dressed up as some kind of higher form of knowledge.


Focusing on just authors may make this fall into Simpson's paradox (https://en.wikipedia.org/wiki/Simpson%27s_paradox): if you look at the bigger set, the clear trend may vanish.

Even if authors are part of the population and subject to (some of) the influences the general population experiences, there is a whole ecosystem of pressures that should be taken into account, including changes in the publishing ecosystem over time.

There are other sets of public data that may or may not track the general population, like social network activity, blog posts, and comments on different sites. But those are affected by changes in culture, population, and external influences (including disinformation campaigns), and the selection of sites may also select the kind of users, which may add a bias to the results. I wonder what deviations would be seen at, e.g., Slashdot, which should have around 25 years of comments, with all the previous objections that I already raised.
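
To make the Simpson's paradox worry concrete, here is a tiny made-up example: if the genre mix of the corpus shifts toward a genre with a higher base rate of these phrases, the aggregate rate can rise even while every genre's own rate falls.

    def aggregate(groups):
        """groups: list of (n_books, phrase_rate) pairs; returns the pooled rate."""
        total = sum(n for n, _ in groups)
        return sum(n * r for n, r in groups) / total

    # Hypothetical numbers: (fiction, non-fiction) as (book count, rate per 10k words)
    y2000 = [(20, 10.0), (80, 2.0)]   # mostly non-fiction
    y2020 = [(80, 9.0), (20, 1.0)]    # both rates fell, but the mix flipped
    print(aggregate(y2000), aggregate(y2020))  # 3.6 7.4 -> the aggregate rises anyway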


It is because the only logic we have left is Market Logic and the only ethical barrier we have left between us and violence is The Economy. The Economy is entirely faith based. And our faith is waning, and among younger generations, is gone. To survive in Market Logic we must act as though we believe we are fully individuated and on our own with bootstraps tugged. But how we actually live is totally, globally interdependent. The pandemic, on top of this, insists on it. This is the source of cognitive dissonance. (JP Dupuy, The Economy and the Future)


Quite a few theorists have stated things along the same lines, and I think the analysis is generally correct.

After the “death of god”, man lost a common frame of reference and replaced highly spiritualized modes of existence with highly economic modes of existence. Well, now we are facing the “death of the market” and finally realizing that it is a destructive fiction and that it is not reasonable to reduce everything to exchange value and calculation insofar as it leads to the actual existential crisis that is climate cataclysm.


The idea that a decline in religiousness is what led to an increase in the valuation of the market is unsupported by any historical sequence I know of. You even have Weber's classic text, The Protestant Ethic and the Spirit of Capitalism, which more or less implied the opposite. Many very religious and very "pro-market" societies exist; the ex-USSR, oppositely, was an officially godless, anti-market economy (and you can call that "a religion", but that degrades "religion" to mere strong belief).

The only way to argue this is a "greed became dominant when man abandoned [our idea of] true worship" approach, which many religious people indeed argue but which is a pretty clear "no true Scotsman" type fallacy when held to the light of day.


You make good points, but my assertion isn’t necessarily to oppose god and capitalism, nor to say that returning to religion would solve all our problems, I’m just implying we’ve traded “god” for the “market” and that both have affinities insofar as they are both based on founding texts (bible, wealth of nations) that prescribe a belief system more based on faith (saving grace, invisible hand) than reason.


Yes. An excellent book on Market Logic is JP Dupuy's Economy and the Future: A Question of Faith. It is philosophy but, versus Zizek or Lacan, it is an easy read. It is a very precise, logical argument; to debate it here would only end in polemics.


There are almost no present-day capitalists who have read The Wealth of Nations. Indeed, the field known as Classical Political Economy is basically moribund and has been replaced by Economics. But the modern factory owner doesn't follow even economics on the basis of faith - indeed, they generally don't follow much abstract economics at all. I'm not entirely "pro-market", but it seems clear markets don't need faith - buy stuff, sell stuff, make money, lose money. The only thing akin to faith you have is confidence that the X you buy today will be worth more tomorrow. But that's not equivalent to the unquestioned faith asked for by religion.


But there is an element of faith in the market. If the capitalist read the Wealth of Nations or not, s/he believes in the invisible hand and that free markets are the right way to manage our lives. It is the ideological package that guides people who embrace capitalism as their way of life.


A capitalist has no need of believing capitalism is a good way to manage "our lives". The capitalist simply acts in their own self-interest. I think Karl Marx and Ludwig Von Mises would agree on this. Marx would consider the system evil, Mises would consider it good. But neither would consider it "faith based".


The piece you’re missing here is that capitalist theory holds that things will work out well on a macro level when everyone is acting in self-interest. That is the key to capitalism as an economics (more specifically, as an acceptable mode of the economic term in “political economy”), and it is the only reason it is seen as acceptable. The whole reason capitalism is seen as “rational” is this argument; if the only thing that were claimed is that “everyone should act in self-interest and who knows what will happen”, no one would accept it as a reasonable economic system. Capitalism took root because of this notion that rational agents acting for themselves would necessarily (with no evidence to back it) lead to the best scenario (the idea of the market dictating what’s best for mankind through what it desires, supply and demand). That is also its article of faith, which mounting evidence (see the climate crisis) argues against.

Marx’s whole point was to argue why this proposition was false: why “everyone acting according to self-interest” actually led, in spite of the good it produced in the short term, to longer-term evils, like proletarianization, exploitation of workers, concentration of wealth, etc. (though Marx did not foresee, and couldn’t of course, the catastrophic effects self-interest would have on the climate). The whole point is that the progression of capitalism up to now has proven we need more than just “self-regulating markets and self-interested ‘rational’ agents”, since we’ve come to see that the absence of other shared values leads to species-level existential threats (again, climate).


What will we replace it with this time?


A complex, accuracy-seeking metaphysical model of reality seems like a decent goal to aim for. It is a bit of a vague, hand wavy goal, but I think this is both a bug and a feature.


I don't know. We're running out of sources of trust. Lawyers? No. Bankers? No. Political leaders? No. Captains of industry? No. Journalists? Almost extinct.

We hit bottom when the Catholic Church and the Boy Scouts both turned out to have thousands of pedophiles in positions of authority.


We can trust in the earth if we pay our respects to her.


> We hit bottom when the Catholic Church and the Boy Scouts both turned out to have thousands of pedophiles in positions of authority.

Of course prior to 1980, the church had never done anything to harm those it claimed to care for. Not sure why Catholics are singled out here.


I saw on Twitter that you can search for "he said,she said" and you get massive recent spikes suggesting a change in the composition of Google's data. (i.e. More fiction where characters are saying things)

https://books.google.com/ngrams/graph?content=he+said%2Cshe+...


These are interesting data, but heavily sensationalized, overinterpreted and editorialized - as per usual from this notorious "tabloid journal", as statistics professor Andrew Gelman calls them.

For example, the methodology for detecting "cognitive distortions" is extremely simplistic, relying on short phrases like "I am a", "everyone thinks", and "still feels". It is far from clear what an increase in people saying "I am a" means, and what it really implies about the rate of genuine cognitive distortions or depressive thought patterns in the population. The reasoning here is basically "X correlates with Y, X is up, therefore Y is up". That's not a strong argument. These data are a starting point, not a conclusion.

The authors also seem to be leaning so far into a desired narrative that they can't accurately read their own graphs. The rapid rise in CDS clearly begins around 2000 - not the late 1970s, which are just the low point of the graph.


In their defense, I tend to think social science is shoehorned into an academic formula that it can never really fit into.

You kind of have to have a hypothesis, then "test" it with data. A more reasonable approach (IMO) would be "We found this interesting phenomenon", followed by an inevitably speculative interpretation. That's realistically how it works in practice, but to be published you need to fit a Popperian formula. A more ponderous version of this probably wouldn't have been accepted.

I mean, applying a method for diagnosing depressed individuals to the collective psychology of a society is already veering into wtf territory, in terms of interpretation. OTOH, it is interesting and probably significant somehow.

IDK how or what they could have done better, and this does seem like a result worth publishing, assuming the methodology is good. Ultimately, I think people reading the paper mostly read it that way anyhow. Authors found X. Seems interesting. Here's how they're interpreting it. Here's my speculation.

Journalists OTOH, they'll cite the hypothesis verbatim if it fits the article they're writing. I think that's where the shoehorning is a problem. As long as you're reading the paper directly yourself, who cares what order the paragraphs are in.


Then maybe we should come up with a new term for it and stop calling it science. If it’s not reproducible or falsifiable, it’s not producing scientific conclusions that expand our understanding of the world.

It’s certainly useful data, but otherwise it lacks any meaningful analysis.


Reproducibility is not a problem here. Falsifiability is.

That said, this can't be "useful data" unless you are willing to accept non-falsifiable interpretations. There's no falsifiable way of formulating such an interpretation.

Whether or not we call it science is semantics, highly loaded semantics. We could stop calling it science, but then we'd have to also stop being derisive of unscientific methodology.

More realistically, maybe we should rename "the scientific method" to "the Popperian standard." After all, both the term science and its associated culture predate the definition/standard you are alluding to.

The researchers themselves haven't done anything wrong. They're researching interesting things in their field, and publishing in the format journals demand... ostensibly to satisfy objections such as yours.

You can't have it both ways.


The process of science needs this useful data. A problem is that we currently couple the publication of these useful observations with scientific conclusions, which results in publishing lousy conclusions and not publishing valuable observations (e.g. negative results). Perhaps it would be better to decouple these things, so that people can (and would be motivated to) publish their primary studies/observations, where there would not necessarily be anything falsifiable, and the conclusions can be drawn afterwards by others, preferably by combining multiple observational studies (thus being not only reproducible but reproduced), as with e.g. the meta-studies often used in medicine.


I’ve long stopped using the phrase “social sciences” and use “social studies” instead. (I include medicine here, or at least studies with 5% p-value… not science if compared to e.g. 5-sigma physics.)


How is it useful if it does not expand our understanding of the world? What use does it have?


You probably can’t stop other people from using words you believe or even know to be inappropriate. Thankfully it’s sometimes easy to spot science that’s not really science. It’s the stuff that seems legit but isn’t that’s a problem. I’d say the problem is actually bigger in harder sciences. Anybody can look at most social science research and see it’s dodgy. Collusion rings in comp sci research are harder to detect, just to pick an example.


> In their defense, I tend to think social science is shoehorned into an academic formula that it can never really fit into.

Pesky reproducibility!


> Authors found X. Seems interesting. Here's how they're interpreting it. Here's my speculation.

Agreed, that is mostly how it goes.

On a similar note, null-hypothesis testing is not a tried-and-true scientific method. It started sometime in the early 20th century. Aside from p-hacking and mistakes of that kind, there are many philosophical problems with the use of statistical inference itself.


> this notorious "tabloid journal", as statistics professor Andrew Gelman calls them

One professor's statement (citation?) doesn't discredit the Proceedings of the National Academy of Sciences. I would need much more evidence than that. A snarky comment - a common means of discrediting institutions, a popular trend these days - costs nothing to make, and carries little evidence, reason, or meaning.


Andrew Gelman has provided plenty of evidence, though. I'd certainly say that PNAS, though it's a high quality journal, does sometimes go for "marketable" social science content over less shiny, but more substantive work.


Hmmm, maybe it could be called something like "syllogistic materialism?"

It's a predictable badness that these "revelations" are specific sentiments about what follows, aka "the nouns" (more or less). "Still feels X," "I am a Y," "everyone thinks Z," all require that X, Y, and Z all have fixed meanings.


>The rapid rise in CDS clearly begins around 2000 - not the late 1970s, which are just the low point of the graph.

I don't think this is taking full account of their claim. They're saying it's a "hockey stick." If you look at the graph like that you can understand what they mean.


Sounds like you’re saying the authors suffer from cognitive distortions. They’re disproving their point by proving their own point.


Their work is definitely an illustration of precisely what they're talking about.


Maybe we have read entirely different articles, because the one linked contains a whole section indicating the method's limitations, and they don't try to establish any causal effect.

For me the study is interesting because I observe the very same change in music lyrics and movies. They have become noticeably grimmer compared to earlier years.

I have seen a young person explain it by saying that the younger generation expresses emotions in a more toned-down way.

On the other side, if we perceive books and other forms of expression as a way to share stories the receiver cannot live themselves, it would mean the opposite: that people's lives have become better, and art fills the void of negativity. An explanation that is close to my understanding.

Either way there is a collective shift in expression and it is interesting to research it.


Really? As I was reading it, I thought that the last decade's music was considerably less dark than the music of my youth, circa 2000.

My follow-up questions to this paper would be: which books exhibit this depression-associated language? Does it vary by genre? What happens if you weight by sales/popularity? I'd also be curious to know if this is reflected in news media, either print or television. I feel that news has gotten more emotive and emphatic, but I'm not really sure.


I can't for the life of me find it, but years ago I read that the 1970's had more #1 hits in minor keys than any other decade Billboard tallied. As a child of the late 60's early 70's, it certainly feels like sad songs evaporated over the years. I was too old to enjoy grunge, so I'm not sure if that counted, but it definitely seems like songs were moodier and sadder in my childhood. Funny if it turns out that everyone thinks this regardless of decade!


If you're thinking of things like gothic rock (do they still make it?) then it's important to understand that it stops being entertaining to create spectacles of the caricature of darkness once the darkness has permeated everything. What we're witnessing is more like a dark age than dark clothing and makeup. Important distinction.


> I thought that the last decade's music was considerably less dark than the music of my youth, circa 2000.

Do you have any data to support that more generally? Necessarily, anyone's experience of a field so enormous as music will be narrow and biased.


"Our sample comprised over 1,000 Top 40 recordings from 25 years spanning five decades. Over the years, popular recordings became longer in duration and the proportion of female artists increased. In line with our principal hypotheses, there was also an increase in the use of minor mode and a decrease in average tempo, confirming that popular music became more sad-sounding over time...."

https://psycnet.apa.org/record/2012-12935-001


I like the methodology in general, but boy howdy does "25 years spanning five decades" set off alarm bells. Which 25? Why not ~50?

(It's not necessarily a problem; every other year for instance probably wouldn't be a cherry picked slice.)


Do you have any data to support that more generally?

No. I threw out all my CDs years ago.


You agree there is a rapid rise, you just don't agree with the dates: do you have a theory as to why?


Not parent but it's likely to be a combination of several factors:

* Impact of social media on mental health

* Use of CDS by print/online media

* Social divide and tribal thinking

* Perceived instability and uncertainty


The paper picks 1980 as the start year…interesting because the recent HN article about obesity-caused-by-contamination post also picked 1980: https://news.ycombinator.com/item?id=27936016

Perhaps spurious, but could be seen as more evidence for the contamination theory, since depression and weight gain are correlated.


Complete sidenote, but when did google n-grams start getting attention again? Last I looked at it it seemed pretty dead and unlikely to see any further updates. Now not only is it updated but it has a lot of features I don't remember seeing before e.g. wildcards and part-of-speech wildcards https://books.google.com/ngrams/info


They also started getting new data again. For the longest time, it ended in 2000, IIRC.

My previous go-to ngram to link was "participation trophy"[0], which previously had its biggest spike in the 1950s. There's a massive spike now, starting 2006 or 2008, but my rhetorical point still stands: participation trophies were a big thing -- in writing, anyway -- when Boomers were kids, not Millennials.

[0] https://books.google.com/ngrams/graph?content=participation+...


For more information on cognitive distortions, this article gives a nice summary... It also reports a similar observation in colleges back in 2015.

https://www.theatlantic.com/magazine/archive/2015/09/the-cod...


I think this could be driven by the personalization of internet content. It started with the innocuous idea of presenting only the information you need, and it turned into a system of echo chambers where your world view is limited to certain ideas that you are comfortable with. And it leads to lots of cognitive distortions. Also, the abundance of media on the internet forces content to be louder in order to be visible. Nuance has to give way to crass generalization to get deeper media engagement.

I seriously worry about the trend and I am not very hopeful about the direction it takes.


The data seems to support that people in America who write books are depressed.


What I'm interested in seeing is the types of phrases that dropped during this time period. What is not considered a sign of "cognitive distortion"?


I don’t think the methodology is showing anything conclusively. The peak could be due to a linguistic shift or due to a difference in the type of work being published. Even if books use phrases like “I am a” more often I don’t think that necessarily means people have more cognitive distortions and if people have more cognitive distortions that doesn’t necessarily mean they are more depressed (only the inverse has been shown afaik).


It is an incredible result; my first thought is that it depends so much on the sampling of what books get published, and what books wind up in the maw of Google Books.

I have done things like look at specific dates that turn up in Wikipedia, and you see some things that are real, but when you get close to the time frame in which Wikipedia existed, sampling effects are strong.

It would be fun to look at ‘I am a *’ though.


The phrases are proven/believed to correlate with cognitive distortions, but that is when they are used in conversations, or diaries, or maybe short pieces of writing. Not in books, which are very different from those things. Doctors don't ask a patient to write an entire book and count these phrases to determine whether or not the patient has cognitive distortions.


It's from the lack of heavy metals - lead is necessary for transporting glucose to the axon, and its deficiency leads to massive synapse loss and resulting mental illness.

The data that supposedly prove heavy metal toxicity are extremely suspect, sometimes outright bizarre.

1. They are elements, so no major changes could possibly have occurred; they appear throughout recent geological history.

2. Their toxicity somehow went unnoticed until recently. There are no records of cadmium being considered toxic before 1970. Somehow nobody noticed the toxicity of the most toxic known metal. Thallium was noticed to be toxic soon after its discovery. Lead was only known to cause poisonings in very large amounts, mostly limited to breathing exposure, and was used in things like makeup and even sweeteners. Presumably any noticeable toxicity would have prevented it from being used in such cases.

3. The purported mechanism is absurd. The body picks the metals to be incorporated into proteins with great accuracy, yet supposedly picks the heavy metal, which poisons the protein and activates it when it should be inactive. It just sounds completely absurd to me.

4. Heavy metals only accumulate until certain concentrations are reached, then they start getting excreted. This effect is widely noted.


> It's from the lack of heavy metals - lead is necessary for transporting glucose to the axon, and its deficiency leads to massive synapse loss and resulting mental illness.

I'm going to need a reference for this because I was under the impression that there is no safe level for lead in the body, and it serves no biological purpose.



I can't download the paper. I'm sorry, but reading the abstract, I don't see how that paper supports your argument.

"Highlights

• Chronic Pb exposure could result in a lower weight gain in rats and a higher Pb content in the brain of model rats.

• Pb exposure reduced activities of key enzymes of glucose metabolism in the brain.

• Pb exposure could disrupt the insulin signaling pathway in the hippocampus of rats."


Lower insulin is good. Some of these proteins are targeted by Rapamycin.

The glucose transporters are increased (GLUT3 is the one that supplies axons), and the rats weighed less.


Interesting. Any references, further reading?


See the link above. Also notice the suppression of mTOR.


The economy is a negative sum game and so, while the rich keep getting richer and their friends keep getting richer, opportunities for the majority of people are drying up. It becomes increasingly easy for the rich to earn money and increasingly difficult for the poor to earn money.

Almost everyone whom rich people interact with is at least relatively well off and doesn't have to work too hard for their money, so they don't see or relate to the suffering and hopelessness of the poor who are desperately competing for their attention.

Wherever the rich look, things start improving - But where they don't look (which is most places), things are always getting worse.

In this crony-capitalist system, the attention of a rich person is as good as money.

The monetary system is to blame for this. When currency isn't backed by anything, the economy and society becomes 100% about capturing the attention of rich people. You cannot compete in this system without the approval of rich people. No matter how much better value your products or services may be; you can never compete because their earnings are mixed in with easy money straight from the money printers, yours aren't - You can never beat the margins of a big corporation which has direct currency pipelines to hedge funds, governments, etc...

Money should not be so important but when you are far from the money printers, it's the only thing a rational person can think about. Getting the attention of rich people is the only way to get closer to the money printers. It's not a distortion to see things as negative or bleak. Things really are bleak for most people. The real distortion is thinking that everything is fine.


The comment must have hit a nerve because people are almost outvoting the downvote bots.


I challenge this:

> whereby they think about themselves, the world, and the future in overly negative and inaccurate ways

Is it not possible for 'rock and hard place' depression to be an _accurate_ negativity about the world in the face of oftentimes seemingly impending climate disaster?


After having spent way too long arguing with people on reddit, I'd certainly tend to agree with the hypothesis, even if the claimed evidence has a pile of holes.

And not just politics. Most people can't analyze a car accident without a pile of cognitive distortions.


Then there's this:

"I’m a Parkland Shooting Survivor. QAnon Convinced My Dad It Was All a Hoax."

https://www.vice.com/en/article/epnq84/im-a-parkland-shootin...


So they put any instance of a phrase like "... it feels like ..." as emotional reasoning? Say, "it feels like it'll rain"? I question their data analysis.


It's telling that a search for the term "sentiment" in the article yields nothing. It's an obvious companion analysis.


As far as I understand, sentiment analysis is more of a pragmatic pursuit, and the prototypical use case is reviews of things or creations. It's a business tool and not science. As general language/literature has no set object, I don't see how we could even interpret long-term "analyses" of it.

It's good that the authors are at least trying to create some generally interpretable categories mapped into language, even if they may not be (yet) fully convincing, as other commenters point out.


I'd be curious to see how the sentiment of contemporary writing has changed over time, as measured by sentiment analysis. That would be easier to interpret than this metric, too.
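
A minimal sketch of what that could look like, using NLTK's VADER sentiment analyzer (the per-year samples here are hypothetical placeholders; real input would be book excerpts):

    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)
    sia = SentimentIntensityAnalyzer()

    def mean_compound(texts):
        """Average VADER compound score, from -1 (very negative) to +1 (very positive)."""
        return sum(sia.polarity_scores(t)["compound"] for t in texts) / len(texts)

    texts_by_year = {
        1980: ["The garden was quiet and the evening was pleasant."],
        2020: ["Everything always goes wrong and no one ever listens."],
    }
    for year, texts in sorted(texts_by_year.items()):
        print(year, round(mean_compound(texts), 3))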


Has there been a surge, or were such things previously suppressed? Also, most books are marketed via publishers, which means there is a filter (read: bias) on what gets published and what does not.

It's an interesting idea (i.e., analyze books) but the context is going to influence the results.


Is this the Flynn effect in action?


I blame it all on that one Beck song, plus that one Radiohead song


Loser/Creep were songs of that time. Loser is kinda optimistic though.


I would posit increased population, shift to cities, fewer close friends, less family relationships, less community, declining interdependency, lower standards of living, and less hope for the future all contribute to depression and so cognitive distortions.


Corresponds with the large drop in testosterone levels seen in the past 4 decades, which the authors did not consider in their discussion. Revise and resubmit


Maybe tangential but I have had the sense for the past ten years that something is wrong with human cognition.

When I look at the popular ideas on all sides of the political spectrum, the trend seems to have been toward ideas that are not only more one dimensional and extreme but more irrational and incoherent. I regularly come across online comments that are borderline word salad, a blathering incoherent mess of the sort that would in the past have immediately led to questions about schizophrenia.

It doesn’t seem to be a specific idea so much as a decline in the lucidity and coherence of cognition itself. The ideas are inane, but I can’t imagine such inane ideas taking hold to such an extent in earlier eras.

I have only two hypotheses that seem like they make sense: gamified social media and CO2 concentration impacting metabolism. I lean strongly toward the former because around 2010 is when algorithmic timelines started to be introduced and it was right around then that I remember a tangible sense of sharp decline. I had to include the latter for completeness, but I hope not as the latter would be far scarier.

Social media companies are the tobacco companies and opiate dealers of the information age. The more I see of them the more I am convinced they are an objective evil and create net negative value. The algorithmic weighting of content for engagement seems to be the real problem, but it’s at the heart of their business model now.


Both your hypotheses are pretty far fetched. Your observations are anecdotal and shouldn't lead to such bold claims.

> I regularly come across online comments that are borderline word salad

If this was an actual trend, which I wouldn't want to assume, don't you think there are simpler explanations to this? I'd immediately come up with two:

- The internet has gotten way more accessible (+ most of the new users aren't native English speakers)

- ML Bots


You can notice such behavior IRL as well.


I agree... it's easy not to notice, I think, but I definitely have the same impression that the aggregate mental health of internet users is degrading in various ways. The mind ingests a massive number of variables as it renders reality, we have pretty decent (if primitive) knowledge of the flaws in this device, and the number of changes in the system in the last few decades is substantial... who knows wtf is really going on inside the 7 billion black-box reality machines out there! What we know for sure is that there are major malfunctions going on everywhere one looks, and those are only the ones we are able to see.

I think running a similar analysis as this (and other analyses) on the last 10 years of popular mainstream subreddits would yield some interesting results.

Or, perhaps a new meta-social-network of some sort will appear on the scene that can point a portion of our massive biological compute power at the phenomenon and try to gain some understanding of it. I worry that if something isn't done, the system might start falling apart.


The more I see of them the more I am convinced they are an objective evil and create net negative value.

By invoking the words "objective evil", you seem to be taking part in the trend you observe yourself:

When I look at the popular ideas on all sides of the political spectrum, the trend seems to have been toward ideas that are not only more one dimensional and extreme


Heavy metal toxicity is some sort of error. Lead, in particular, is required for glucose transport into axons, the failure of which leads to synapse loss and mental illness.



