Users post more falsehoods after others correct them: study (news.mit.edu)
139 points by rbanffy on May 25, 2021 | 160 comments



Maybe the world has changed but when someone says something stupid and someone else calls them on it, backing it up with facts, and they double down on the stupid... shouldn't there be a population of people out there that see this as a desperation move on the part of the original poster? Or a sign of closed-mindedness?

Correcting someone in a public venue isn't for the audience of one that is spouting stupid things, it's for the rest of the world that is reading the thread. So a fact-based correction that causes more stupid to be posted is doing its job, really.

If the response happens to reference the facts that were posted, well, now you've got a conversation going.

I would also like to point out that stupid is in the eye of the beholder sometimes. Something can sound stupid at first blush yet turn out to be true.


I think a lot of people can't see through the noise. Find a video of two pundits arguing, and I guarantee that you can find videos which alternately claim that each side "destroyed" the other. It's not just that people can't integrate all the different information out there. (this is part of the problem, too, of course) Frankly, I think we have underestimated the degree (or at least I underestimated it quite a bit) to which many people never really understand the validity of an argument, but simply adopt it when it becomes "mainstream." Of course the problem now is that there is not really a "mainstream." There are many small competing and conflicting "mainstreams." And so people adopt many of these ideas.


>I think a lot of people can't see through the noise.

What if it's not about noise, but about complexity?

We live in a world that is orders of magnitude more complex than any other population of humans has ever experienced.

This complexity means that it's more difficult to know what is 'true' or not, and even that truth is not as universal as it once seemed. Are eggs good for you or bad for you? What's the best way to raise your kids? How do you deal with homelessness? Every single one of those questions has answers that are simple, obvious, and wrong. Every one of those questions needs to be answered starting with "it depends." We cannot treat truth in these cases as absolute, because we know better than that now.

Perhaps it's not surprising that we are not well adapted to dealing with that complexity.


Is the world really that much more complex? Our ancestors ate eggs, raised children, and dealt with the homeless. We can muddle through with suboptimal solutions.


It's even worse than that. I went to an evolution vs creationism debate once when I was in college (it was put on for the students there). Everyone in that auditorium was an atheist and everyone was a young earth creationist. The difference was in which speaker had just made a really good sounding point.

The people on the extremes each believe that their side won but as far as I could tell all the people in the middle believed in both as they crossed some "sufficiently cool" threshold.


Everyone was a creationist AND everyone was an atheist? It's a little unclear what that sentence was supposed to say.


What it sounds like is that the person relating the tale mistook cheering for a particular line delivered in a debate performance for agreement (even if momentary) with the broader position it meant to advance.


You understood my statement.

Also, seriously? This is just a smart sounding way to say that you think I'm dumb and that whatever is in your mind is the real truth.

If you're going to put words in my mouth at least be charitable enough to disagree with me before or after your translation.


> You understood my statement.

Well, I had, and stated, a tentative understanding, but if I correctly understand your present response you seem to disagree with it. So...maybe not?

> Also, seriously? This is just a smart sounding way to say that you think I'm dumb and that whatever is in your mind is the real truth.

No, there’s a difference between “your description sounds like you probably mistook X for Y” and “you are dumb”. Obviously, it's a disagreement about the part of the description that seems to report an interpretation rather than observable facts.

> If you're going to put words in my mouth

I didn’t put words in your mouth.

> at least be charitable enough to disagree with me before or after your translation.

“sounds like ... mistook <thing> for <other thing>” is explicitly both a report of a tentative understanding of what you were trying to say and a disagreement with it, along with a specific alternate interpretation. So, inasmuch as I should have disagreed with you...I did.


Hi Verdex, I read your comment and don't understand what you meant either.


I think it's fine, opinions are fluid, there's little inherent truth.

Atheist too because it's the most likely, but I don't feel bad that an unprovable fallacy gets some traction: after all, maybe a loving designer is looking at me while I pooptype this.

The US gains more raising credit-card-consuming drones than revolutionary philosophers.


Reasoning takes effort and people are lazy. So why not listen to my favorite <insert individual of note in society>. /s

IMO, it sounds like a lack of critical thinking skills. Instead of pondering validity, as one does to "think critically," as you stated, some people adopt a line of thought from their authority (be it a pastor at church, Tucker Carlson/Rachel Maddow/etc on TV, POTUS, news outlets, etc).


People aren't guided by critical thinking. They're guided by their trust network.

They believe whatever the smart person they trust the most says.

I guarantee that you and I do that too. On occasion, we can and do think critically, and that's when we make progress.


One of the most important posts I read on here was on weirdness budgets. Most people want to speak with people with opinions +/- 5% of their own. If you go beyond that limit, people will ignore you and seek feedback from those within their circle.

If you are self-directed and discover your own truth, you're going to have to overcome a lot of social stigma to get your points across. And in a comment on a message board where someone might not interact with you again, it's guaranteed to fail.


> And in a comment on a message board where someone might not interact with you again, it's guaranteed to fail.

So true! Though like advertising, the more you see it the more it sticks?


I think this is actually a large reason that the Baader-Meinhof phenomenon exists.

In this framing of Baader-Meinhof, it is something that appears constantly, but as it is too weird you ignore it. Then as you discover more things related to it, eventually it's close enough to your Overton window for you to be ready for it.

Then once you learn it you see it everywhere.

It's interesting that you tie this to advertising. One of the things I learned watching The Social Dilemma was that platforms like YouTube work by attempting to funnel you into the most profitable content types.

For instance, if you are watching gaming content on YouTube, it will recommend PewDiePie, a gaming YouTuber well known for occasionally saying racist things. Then if you watch enough PewDiePie, you'll start seeing more racist channels. At some point you'll get Jordan Peterson (who himself is mostly harmless...). If you watch one Peterson video your recommendations will flip from whatever you were doing previously to 80% his content, and if you continue down the rabbit hole it will get as bad as you can imagine. YouTube does this because the people susceptible to watching these videos are also more likely to buy things from advertisements, so they are its most profitable customers.

YouTube has unknowingly created a funnel that can move the Overton Window to turn people into extremists all so that they can make money from selling healing crystals to them.


A fun one is flat Earthism. People trying to deny it are so full of confidence because their entire lives everyone has been telling them it's round, but they never actually confirmed that themselves and don't know how. They make up "obvious" evidence like "Just stand on a hill and look at the horizon to prove it's round!". When people are too confident they're right, they tend to assume critical thinking is no longer required.


Nobody has the time to actually test everything they believe to be true, even if they knew what an appropriate test would be.


Sure. Not saying we should for normal life. But if you're going to argue with someone about a fact, then you should. Not necessarily personally of course, you could learn about other people's tests and how trustworthy they are and all that.


Often the words "I, you, my, your" are as bad as swear words when correcting someone. One sure way to anger someone and have them double down is to hurt their ego by using one of those words (e.g., your code has a bug, vs the code has a bug). Avoid those words and disagreements become a lot more productive.


Some years back, before I sold my business and retired, I tried something and I've stuck with it ever since.

I pretty much never say 'should', in the sense that someone 'should do' something. Instead, I use the word 'could.'

"Well, you could try doing it this way."

"As you're not having any success with this method, you could use this method."

The difference in responses was great enough for me to develop the habit and stick with it. I'd call it a success.

There are other things I don't say. I don't say "I understand." Instead, I say something similar to, "I can relate." Again, the differences in responses that I get from people have made it worth developing the habit.


Are you suggesting we could all try doing this too?


If you want people to be more receptive, you could try it.


Yes, I could. (Not sure if you missed the original joke...)

But I really appreciate your insight here. I've already suggested it to someone else and will try to use this myself.


It works all over the place...

Your buddy is over at your house and they're bitching about their significant other.

"Well, you could try listening to your SO's complaints, weighing them for validity, and consider acting on them."

Your drunk buddy will almost certainly take that better than if you told them they should shut the hell up and listen to the valid criticism levied against them by their spouse.


See the 1% rule:

https://en.m.wikipedia.org/wiki/1%25_rule_(Internet_culture)

The vast majority of people are lurkers and only read the content.


It's certainly a sign of closed-mindedness.

AFAICT most discussion on the internet is pretty closed-minded though, people have their preconceptions, and they argue them. Rarely is anyone enlightened, or are minds changed. It is common to see such a person be corrected on a point of fact but then go on to repeat the same falsehood in another place or on another day. This is likely because to them the narrative is important, the emotions, rather than correctness.

> Correcting someone in a public venue isn't for the audience of one that is spouting stupid things, it's for the rest of the world that is reading the thread.

I hope so, but I'm not sure it works like that either, as the legions of spouters seem to grow by the day :/


I suspect "spam filter heuristics" also contribute to close mindedness. If something is repeated ad nauseam and the observer finds it false every time then they mentally downgrade the reliability to "known bullshit, not worth listening to". The process is agnostic to any objective truth and could apply to stubborn flat-earthers taking "against intuition and the ability to easily comprehend like a flat map" as false.


Something missing in this type of analysis is the future actions of the person being corrected. One might say something wrong, be corrected, then argue (because being corrected makes you feel bad and angry). But the next time they say something, they might consider it more, or maybe not be as extreme, as they want to avoid the bad feelings caused by negative feedback.

It’s easy to see this in action on this very site — nobody likes to be downvoted, so the conversations stay somewhat more civil than other sites.


I thought that's exactly what the researchers were looking at.


“Truth is so obscure these times, and falsehood so established, that unless we love the truth, we cannot know it.” - Blaise Pascal

Unfortunately, the odds are against us. Lying is inherent in human nature. Even if you point out the facts, most people will be deluded again because the majority drowns everything, especially in an anonymous environment such as the internet.


But what are the facts? WMDs in Iraq and the myriad of other lies our governments throw at us. Fool me once shame on you. Fool me ten thousand times and shame on everyone.


>WMDs in Iraq

Some interesting reading regarding this:

https://en.wikipedia.org/wiki/Iraq_and_weapons_of_mass_destr...

It turns out that there really were some WMDs after all. I'm not sure they were enough to justify war, but I don't suppose that matters at this stage.


I don’t see any hard evidence there, only speculation on chemical weapons. But maybe I missed it?


There's a bunch of reports (keep scrolling and reading). I'm not sure what you want for 'hard evidence'.

You'll see things like - "Since 2003, coalition forces have recovered approximately 500 weapons munitions which contain degraded mustard or sarin nerve agent" (among others).

As I said, I'm not really sure that justified the war, but they did exist. They also had a bunch of yellowcake and stuff like that.


> Maybe the world has changed but when someone says something stupid and someone else calls them on it, backing it up with facts, and they double down on the stupid... shouldn't there be a population of people out there that see this as a desperation move on the part of the original poster? Or a sign of closed-mindedness?

What you describe is pretty ordinary behavior. To the extent that standard books on negotiation and communication (going back decades) caution you about it. If you want to change someone's mind, throwing facts at them is, on average, a very poor strategy.


> Correcting someone in a public venue isn't for the audience of one that is spouting stupid things, it's for the rest of the world that is reading the thread.

Whenever I have seen this happening, the rest of the audience generally viewed the negative feedback as "unnecessarily punishing" the original poster. From their perspective, the original poster was "just human", and the one who corrected was just trying to "show off".


> shouldn't there be a population of people out there that see this as a desperation move on

This part of the population left Twitter a long time ago...


> Maybe the world has changed but when someone says something stupid and someone else calls them on it, backing it up with facts, and they double down on the stupid... shouldn't there be a population of people out there that see this as a desperation move on the part of the original poster?

Did you miss the entire Trump presidency? People love that sort of thing. Facts are difficult, inscrutable things that have to be dug out of observations and carefully safeguarded. They often tend to be disappointing. Whereas lies and fantasies? Those are theatre.

Oh, and algorithmic timelines make this worse: correcting someone is promoting their original views to other people.


The problem is that there is too much information. For (almost) any position, there are some facts that support that position. There are other facts that oppose the position. The evidence (the sum of all the facts) often leans one way or the other - either supporting or opposing the position. But someone arguing in bad faith can often find enough facts to look somewhat convincing, which lets them persuade at least some others that their position is correct.


Our interpretation of what actually supports what is incorrect because of our limited perspectives.


> The researchers observed that the accuracy of news sources the Twitter users retweeted promptly declined by roughly 1 percent in the next 24 hours after being corrected. Similarly, evaluating over 7,000 retweets with links to political content made by the Twitter accounts in the same 24 hours, the scholars found an upturn by over 1 percent in the partisan lean of content, and an increase of about 3 percent in the “toxicity” of the retweets, based on an analysis of the language being used.

Rule of thumb, you can disregard any social science paper where the effect size is single digits.


Imperfect as they are, statistical significance measures like the p-value are more epistemologically sound than any arbitrary rule-of-thumb threshold on the size of the effect.


> statistical significance measures like the p-value are more epistemologically sound than any arbitrary rule of thumb-threshold on the size of the effect.

Keep in mind that the GP isn't saying the effect doesn't exist if it's in the single digits, but that it is inconclusive and/or insignificant. Insignificant in the human sense, not the statistical sense.

A 1% increase in this behavior? Irrelevant to almost everyone.

This, of course, is not even getting into the issue of the reproducibility crisis, much of which did rely on p-values. While I personally am happy to do p-tests, the skepticism of small effects is well founded. Were someone else to try to reproduce the effects and fail, the standard defense is that the results are sensitive to the methodology used. It's much easier to invoke that defense if your effect is 1% vs 20%.


P-values are based on a normally distributed sampling distribution, which presumes the samples are randomly chosen. It's hard to see how random sampling can apply very well here.

Suppose we did a comparison between the p-values of biased researchers desperate to publish, versus a conservative heuristic that doesn't believe small effects. Particularly for social science experiments like this, which would you bet on being able to assess repeatability better?


Statistical significance is a good measure of how sure we are there is a correlation, or in this case with an RCT, causation. But any layperson can look at a 3% effect and conclude, yeah, that's probably not that big of a deal. Or not, depending upon your preferences! No judgement. It's not something that requires a degree to determine, just an assessment of one's own values and the effect size in that context.



My rule of thumb is to disregard snarky know-it-all comments by random people on the Internet.


Oddly enough, this comment seems to reinforce the findings we just read about.


But now you've created three Borromean competing claims.


I hadn't heard of Borromean Knots before you made this comment and got a great read out of it. Thanks!

For others following up on this thread, Borromean Knots seem to be an extension of psychoanalytic theory that tries to describe social interactions through an object oriented model that highlights the differences between symbolic, real, and imaginary claims:

1. https://en.wikipedia.org/wiki/Borromean_clinic

2. https://larvalsubjects.wordpress.com/2009/12/08/borromean-kn...


russell's paradox: the hn comment


Your comment does not provide any useful information at all. It is merely a personal attack.


My guess is that their measurements of the partisan lean of content and the toxicity of retweets may have measurement error far larger than 1% or 3%, not to mention other noise in the statistical modeling.


Why would real effects either be 0 or >= 10%? What would cause such a discontinuity across every single effect?


It isn't that simple. We want to know what percentage of tweets have property X given event Y. We can estimate that using a sample and error bars.

Note that the quotes specifically avoid this inference because the error bars would be silly. 500 million total tweets a day can't be talked about based on 7,000 hand categorized tweets.

But non-statisticians don't have any reference point for how powerful a predictor it is, so a shorthand is: anything under 10% is probably within the error bars, and therefore less believable.
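
For a rough sense of scale, here is a minimal back-of-the-envelope sketch in Python (assuming the measurement can be treated as a simple proportion under random sampling, which is itself a big assumption):

    import math

    n = 7000   # hand-categorized retweets in the study
    p = 0.5    # worst case for the standard error of a proportion
    z = 1.96   # ~95% confidence

    # Worst-case 95% margin of error for a proportion from n samples
    margin = z * math.sqrt(p * (1 - p) / n)
    print(f"95% margin of error: +/- {margin:.1%}")  # about +/- 1.2%

With 7,000 samples the worst-case margin is around +/- 1.2%, the same order as the reported 1% effect.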


Taking your question non-rhetorically: because the error bars are wider than 10% [e.g. 0, which finds a floor of 4% for anything based on polls], so anything less than that is consistent with p-hacking/random noise [e.g. see 1], and this is pervasive enough [2] that social science papers should be assumed garbage until proven otherwise.

0: http://slatestarcodex.com/2013/04/12/noisy-poll-results-and-...

1: https://www.explainxkcd.com/wiki/index.php/882:_Significant

2: https://en.wikipedia.org/wiki/Replication_crisis


It's not that no effects occur in this range, but if they do, they are not discernible from random fluctuations, noise and other statistical (non-)effects.


It can be real but unimportant. An extra drink every week might increase my liver cancer risk by 4%, but that's negligible if it's the difference between a 1 in 1 million chance and a 1 in 960,000 chance.
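
Spelling out that arithmetic in Python (the one-in-a-million baseline is made up for illustration):

    baseline = 1 / 1_000_000        # hypothetical baseline risk
    increased = baseline * 1.04     # a 4% relative increase
    print(f"1 in {1 / increased:,.0f}")  # 1 in 961,538 -- roughly 1 in 960,000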


> The study was centered around a Twitter field experiment in which a research team offered polite corrections, complete with links to solid evidence, in replies to flagrantly false tweets about politics.

I wonder if it would be different if it had come from someone they knew in real life. I guess I shouldn't be at this point, but I'm always surprised that the people posting the misinformation aren't terribly embarrassed about it when it's revealed.

> “We might have expected that being corrected would shift one’s attention to accuracy. But instead, it seems that getting publicly corrected by another user shifted people’s attention away from accuracy — perhaps to other social factors such as embarrassment.”

> “Future work should explore how to word corrections in order to maximize their impact, and how the source of the correction affects its impact,”

I think this is important work, but I'm pessimistic about anything that will really be effective.


> I guess I shouldn't be at this point, but I'm always surprised that the people posting the misinformation aren't terribly embarrassed about it when it's revealed.

When things become hyperpartisan, people can't see this revelation. When people read articles like this, I wouldn't be surprised if the reaction is mostly "Yes, I can't believe those people who disagree with me do this," and not a reflection on their own behavior.

You can see this if you follow any political argument online. No one believes that someone on an opposing side could have a valid argument, or could correctly point out their mistake. If a third-party observer tries to point out a mistake, people will usually cast them as the enemy and accuse them of spreading falsehoods.

Walter Lippmann's excellent Public Opinion points out how it's difficult for people to be both interested in a topic and neutral. This excerpt touches on the issue:

> "It has been said" writes Walter Bagehot, [Footnote: On the Emotion of Conviction, Literary Studies, Vol. Ill, p. 172.] "that if you can only get a middleclass Englishman to think whether there are 'snails in Sirius,' he will soon have an opinion on it. It will be difficult to make him think, but if he does think, he cannot rest in a negative, he will come to some decision. And on any ordinary topic, of course, it is so. A grocer has a full creed as to foreign policy, a young lady a complete theory of the sacraments, as to which neither has any doubt whatever."


It’s very tempting, as in your insightful quote there, to form opinions on everything. However it would be nice if more of us said “I don’t have enough information to have an informed opinion here” more often.

And I include myself in that.

(Edit - yes I am a middle class Englishman!)


I'd point out that Bagehot wrote that in 1871. People don't change, though communications media sure do.


"We might have expected that being corrected would shift one’s attention to accuracy."

Really? Who would think that? Science requires an open mind, sure, but it doesn't require you to be some sort of idiot when formulating your hypotheses. To a first approximation, nobody responds to being corrected with a polite thank you and a shift to focus on accuracy, and everybody knows that.

Whatever model of humanity they're operating with is less realistic than homo economicus. Are they using homo vulcanus?


Exactly... I find that a lot of the folks posting the most misinformation have basically zero interest in whether it actually is true.


That's excessively specific. Most people, no further qualifiers, have zero interest in whether what they are posting is actually true. Most people are just socially signalling their preferred in group by what news they propagate. Labeling things "misinformation" is just another dodge for having to engage with whether or not something is true, or has a grain of truth in it that you might not like.


> I wonder if it would be different if it had come from someone they knew in real life. I guess I shouldn't be at this point, but I'm always surprised that the people posting the misinformation aren't terribly embarrassed about it when it's revealed.

That's an interesting possibility, and I have a small anecdote in the context of the COVID-19 pandemic that is slightly related.

My mother, at the older end of the lately maligned "boomer" generation, would often rant with me on the phone about how obviously fake or ridiculous hoaxes were being spread on WhatsApp groups that she participated in. Even though she doesn't have a scientific background, she has a good instinct for the general ideas and how science works, and a very well-honed bullshit detector. After a while, maybe out of lockdown boredom, she started refuting the hoaxes when she could find good sources or science-based arguments to do so... and eventually, some of her friends and acquaintances who did tend to fall for false information, now ask her beforehand if something they have just been shared makes sense or not!

Note however that this is not in the US, so at least she doesn't have to contend with overarching partisan identity lines regarding belief in this or that.


The US political lines make this horribly hard to do unless you ACTUALLY know the participants well - just take a look at the HN thread about the electric F150 to see acres of people unwilling to even consider that someone could own a pickup truck without being "in the wrong group".


> I'm always surprised that the people posting the misinformation aren't terribly embarrassed about it when it's revealed.

Alternatively, the people posting misinformation were terribly embarrassed, posting less on Twitter as a result, while the bots kept posting at their original schedule, thus decreasing average quality as a result.

The paper seems to be lacking information about absolute tweet counts, so it's hard to tell what the change in their relative measures means in practice.


My impression from being on Twitter daily is that some people (plenty) love to play stupid on Twitter, never mind their wisdom. I mean, I find pilots, neurosurgeons, C-level execs, Harvard grads, etc., acting ridiculous, mostly related to politics but not only. I think it's as pure trollism as it can get. And they never learn - I see someone laying down facts, using reason, logic, etc., then they are back spewing lies the next day. If you wanna turn the nicest person into a monster, just leave them on Twitter for a week or two.


> I wonder if it would be different if it had come from someone they knew in real life. I guess I shouldn't be at this point, but I'm always surprised that the people posting the misinformation aren't terribly embarrassed about it when it's revealed.

Anecdotally, they don't see it as truth being revealed, but me being "brainwashed", a "linear thinker" or "slave to orthodoxy."


I have seen quite a lot of "corrections" that are agonizingly smug technicalities, flaming strawmen, and the like. Those attempts to "debunk" whatever usually make me question what else this group has been playing word-games about.

Often, it comes with some intensity, then a completely hypocritical stance on something else and again I wonder about the integrity.

It can be done, but it must be done cleanly, unimpeachably, and with ground given when "your side" is wrong.


It's worth pointing out that this research was conducted by arguing with people on twitter via a bunch of automated bots. It's not really that surprising that people will [be slightly more likely to] double down if they notice that the person trying to talk them out of their beliefs is part of a legion of literal robots.


Plot twist: it's all just bots arguing with each other


Especially when those same groups already complain about bots invading their discussions.


> I have seen quite a lot of "corrections" that are agonizingly smug technicalities, flaming strawmen, and the like.

I'm reminded of Paul Graham's hierarchy of disagreement. [0][1]

[0] https://en.wikipedia.org/wiki/Paul_Graham_(programmer)#Graha...

[1] http://www.paulgraham.com/disagree.html


I think we need to take a look at how we deal with the same issue in face to face interactions. Calling out someone who is incorrect in a group setting often goes poorly - you seem obnoxious and they need to double down on their rhetoric to save face. Even doing so "politely" just makes you seem more like a dick. While that may sound irrational, think about how you'd react at a conference where one person in the audience shouted "that's not true! this arcane reference which no one in the audience has evaluated claims that's just a myth!" in general you'd think they were at best misguided if not trying to be deliberately provocative. What are the odds that you would go home that night and look up the reference the heckler cited?

Far more effective is to take someone aside and tell them in private that they are incorrect, especially when combined with acknowledgement of what they got right. In that context you are not attacking their social position and thus the stakes are lower. While it is hard to simulate such a position of privacy and trust in an online setting, private direct messages on most platforms are a good start. Anecdotally, people seem much more receptive to a respectful direct message and conversation seems much more cordial and focused on pursuit of knowledge. While I have no evidence that this leads to long term positive changes in posting behavior, I'd be willing to wager it has a better long term outcome.


This is an interesting idea, but their conclusion could be a coincidence.

They might have inadvertently just been covering a standard slide into extremism from normal users, and the fact correction response might not have had an effect on the users.

Also, political facts / misinformation is very hard to judge accurately. For example, just a few months ago, Politifact was calling the "lab leak" hypothesis a "Pants on fire" falsehood. Now, it is being presented as a possibility. https://townhall.com/tipsheet/juliorosas/2021/05/20/politifa...

This study also appears to have had no control group. Odd.


> Also, political facts / misinformation is very hard to judge accurately. For example, just a few months ago, Politifact was calling the "lab leak" hypothesis a "Pants on fire" falsehood. Now, it is being presented as a possibility. https://townhall.com/tipsheet/juliorosas/2021/05/20/politifa...

Townhall.com is a right-leaning conservative publication, both by their own admission [0] and according to Media Bias Fact Check [1]. Due to the political nature of the source, some may ignore Townhall entirely. Rather than cite a secondary source, I would cite Politifact directly [2].

[0] https://townhall.com/aboutus

[1] https://mediabiasfactcheck.com/townhall/

[2] https://www.politifact.com/li-meng-yan-fact-check/


Note that the full story is far more complicated than this headline suggests. Some of the same authors show that "gently nudging users to think about accuracy increases quality of news shared".

Paper: https://www.nature.com/articles/s41586-021-03344-2

Tweet summary from PI of both studies: https://twitter.com/DG_Rand/status/1372217700626411527?s=20


Here's the paper: https://dl.acm.org/doi/pdf/10.1145/3411764.3445642

Here's the Snopes "debunking" of one of the claims: https://www.snopes.com/fact-check/ukraine-clinton-foundation...

I wouldn't call that unequivocal. Snopes re-wrote the original claim to say it was referring to the Ukrainian Government, and then said it was false. (The tweet did make some other claims, but they were discussed in a separate article).

I think if you're going to convince people, maybe use a fact-source that matches their political leanings.


This is essentially a textbook example from Chapter 5 of Probability Theory: The Logic of Science:

http://www.med.mcgill.ca/epidemiology/hanley/bios601/Gaussia...

"...The new information D is: ‘Mr. N has gone on TV with a sensational claim that a commonly used drug is unsafe’, and three viewers, Mr. A, Mr. B, and Mr. C, see this. Their prior probabilities P(S|I) that the drug is safe are (0.9, 0.1, 0.9), respectively; i.e. initially, Mr. A and Mr. C were believers in the safety of the drug, Mr. B a disbeliever. But they interpret the information D very differently, because they have different views about the reliability of Mr. N. They all agree that, if the drug had really been proved unsafe, Mr. N would be right there shouting it: that is, their probabilities P(D|SI) are (1, 1, 1); but Mr. A trusts his honesty while Mr. C does not. Their probabilities P(D|SI) that, if the drug is safe, Mr. N would say that it is unsafe, are (0.01, 0.3, 0.99), respectively.

...

Put verbally, they have reasoned as follows:

A) - Mr. N is a fine fellow, doing a notable public service. I had thought the drug to be safe from other evidence, but he would not knowingly misrepresent the facts; therefore hearing his report leads me to change my mind and think that the drug is unsafe after all. My belief in safety is lowered by 20.0 db, so I will not buy any more.

B) - Mr. N is an erratic fellow, inclined to accept adverse evidence too quickly. I was already convinced that the drug is unsafe; but even if it is safe he might be carried away into saying otherwise. So,hearing his claim does strengthen my opinion, but only by 5.3 db. I would never under any circumstances use the drug.

C) - Mr. N is an unscrupulous rascal, who does everything in his power to stir up trouble by sensational publicity. The drug is probably safe, but he would almost certainly claim it is unsafe whatever the facts. So hearing his claim has practically no effect (only 0.005 db) on my confidence that the drug is safe. I will continue to buy it and use it."
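
For anyone who wants to verify those decibel figures, here is a minimal sketch of the update on Jaynes's evidence scale (e = 10 * log10(odds)). It reproduces the 20.0 db and 5.3 db shifts; with these inputs Mr. C's shift comes out nearer 0.04 db:

    import math

    # Priors P(S|I) and likelihoods P(D|SI) from the quoted passage;
    # all three viewers agree that P(D|unsafe) = 1.
    viewers = {"A": (0.9, 0.01), "B": (0.1, 0.3), "C": (0.9, 0.99)}

    for name, (prior, p_d_given_safe) in viewers.items():
        prior_odds = prior / (1 - prior)
        likelihood_ratio = p_d_given_safe / 1.0  # P(D|SI) / P(D|unsafe)
        shift_db = 10 * math.log10(likelihood_ratio)
        post_odds = prior_odds * likelihood_ratio
        posterior = post_odds / (1 + post_odds)
        print(f"Mr. {name}: shift {shift_db:+.2f} db, "
              f"P(safe) {prior:.2f} -> {posterior:.3f}")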


You lost a bit of formatting (an overline) in your copied quote that happens to be really important: "their probabilities P(D|SI) are (1, 1, 1)" should be "their probabilities P(D|S̅I) are (1, 1, 1)".


What are the odds that the Twitter accounts that the researchers base their findings on were also bots? It’s an interesting thought to entertain with intertwined mutual feedback loops of assisted automated systems all assuming each other to be human.


Yes, I didn't see any indication from the article that they verified that the users in question were human, or that they were not paid to tweet misinformation. The effect was pretty small, it seems like even a small number of dedicated bad actors could account for it.


The results remind me of the following passage from Dale Carnegie’s How to Win Friends and Influence People:

Shortly after the close of World War I, I learned an invaluable lesson one night in London. I was manager at the time for Sir Ross Smith. During the war, Sir Ross had been the Australian ace out in Palestine; and shortly after peace was declared, he astonished the world by flying halfway around it in thirty days. No such feat had ever been attempted before. It created a tremendous sensation. The Australian government awarded him fifty thousand dollars; the King of England knighted him; and, for a while, he was the most talked-about man under the Union Jack. I was attending a banquet one night given in Sir Ross’s honor; and during the dinner, the man sitting next to me told a humorous story which hinged on the quotation “There’s a divinity that shapes our ends, rough-hew them how we will.” The raconteur mentioned that the quotation was from the Bible. He was wrong. I knew that. I knew it positively. There couldn’t be the slightest doubt about it. And so, to get a feeling of importance and display my superiority, I appointed myself as an unsolicited and unwelcome committee of one to correct him. He stuck to his guns. What? From Shakespeare? Impossible! Absurd! That quotation was from the Bible. And he knew it. The storyteller was sitting on my right; and Frank Gammond, an old friend of mine, was seated at my left. Mr. Gammond had devoted years to the study of Shakespeare. So the storyteller and I agreed to submit the question to Mr. Gammond. Mr. Gammond listened, kicked me under the table, and then said: “Dale, you are wrong. The gentleman is right. It is from the Bible.” On our way home that night, I said to Mr. Gammond: “Frank, you knew that quotation was from Shakespeare.” “Yes, of course,” he replied, “Hamlet, Act Five, Scene Two. But we were guests at a festive occasion, my dear Dale. Why prove to a man he is wrong? Is that going to make him like you? Why not let him save his face? He didn’t ask for your opinion. He didn’t want it. Why argue with him? Always avoid the acute angle.” The man who said that taught me a lesson I’ll never forget. I not only had made the storyteller uncomfortable, but had put my friend in an embarrassing situation. How much better it would have been had I not become argumentative.


An interesting and apropos passage, though it might be improved by a caveat: the source of the quote at Sir Ross's banquet was an unimportant detail, whereas falsehoods on Twitter are often both the primary content of a tweet and highly relevant to important social and political issues. Many tweets are outright slanderous.


You will win more friends with a "never correct others" policy. However, what if belief in the misinformation has deadly consequences? Steve Jobs died due to him seeking "alternative" cancer treatment. Had someone convincing spoken up when acupuncture was being discussed around him, someone who changed his mind into recognizing that acupuncture is a sham, Jobs might still be alive.


How To Win Friends and Influence People doesn't actually advocate never trying to convince people of an opposing viewpoint. Carnegie argues that you'll never be able to convince people of something by correcting them directly, but that you can persuade people if you express genuine interest in their opinion, listen well, ask questions, and allow the other person to save face by presenting the correction as something they came to on their own. Whereas, if you correct someone directly, no matter how solid your facts, they are more likely to feel attacked and double down.


> You will win more friends with a "never correct others" policy. However, what if belief in the misinformation has deadly consequences?

where do you think the line should be drawn? dying is pretty clearly terrible.

what if someone is going to invest all of their money in a ponzi scheme? or put 50% of their savings into GME? or invest in a high expense ratio index fund?

we've got to ignore some mistakes and errors on the part of others.


I'm glad you've decided to let me arbitrate this matter. Here is my official misinformation heuristic, three questions to ask oneself before acting:

1) Is this misinformation likely to convince and harm a great number of people?

2) Could belief in this misinformation result in severe health or emotional damage, for a single person or more who believe it?

3) If yes to either, are you an expert on the issue, relative to the person or group making the claim?

If you're indeed an expert, try to save the person or people from their downfall.

If you're not an expert, read up on the issue before meddling. God forbid it's you who is wrong.

And, of course, if no significant harm is likely, ignore it.


Great example. This is also an issue for many doctors treating terminally ill patients. Say it is true that the patient will die in a month or two. What should the doctor do? Tell the truth to the patient and crush him emotionally? Keep silent? Or lie to the patient?


Apples to oranges. You providing this as an equivalent example to the current state of politics-based censorship is a more telling embodiment of the situation.

The example here is demonstrably provable by pulling out a copy of Hamlet and flipping to Act Five, Scene Two. When Twitter shadow-bans accounts and hides hashtags of discussion they don't agree with politically, they aren't able to do this. Their thought process is clouded by the illusion that they're positively not-wrong...but when questioned or presented with counter-evidence, they defer to Hitchens's razor (a flawed methodology) and put wax in their ears.


Surprised to see that the article makes no mention of the closely related backfire effect in psychology. From [0]:

> given evidence against their beliefs, people can reject the evidence and believe even more strongly.

[0] https://en.wikipedia.org/wiki/Confirmation_bias#backfire_eff...


> Not only is misinformation increasing online

I am really starting to despise this word because it is so imprecise. Yes, there are cases where people are making up facts to support a position and influence others, but very seldom do even the experts make statements with the scientific precision and nuance that some topics deserve. This word is often used to politically dismiss any opinion you don't agree with.


Fortunately on HN people will instead politely downvote such a response and possibly flag it without posting anything whether false or not.


Often "correcting falsehoods" is like trying to talk someone out of believing in God by explaining evolution. Or trying to argue with someone who believes in "systemic white supremacy" by pointing out that Asian-Americans earn more money than white Americans.

I don't know what to do in these situations. You can't talk people out of their values but it is better when people express their values in a moderate way that aligns with empirical facts. Probably there's some polite way to express your disagreement without saying "actually here's why you're wrong".


I think you have to step out of the frame, and twist it sideways to pinpoint a source of disagreement that is not value laden, not a landmine, but can lead them in the direction you want them to go.

Steve Waldman was my entry point to this idea in his blog post on lenses and double cruxes:

https://www.interfluidity.com/v2/7216.html


The Socratic method is good for that. Instead of saying "you are wrong", you ask a series of questions that induce a contradiction in the other person. This is often deemed trolling or sealioning when the discussion is unwelcome (which it often is), but when people are open-minded it is a respectful way to argue.


I'm one of those who call Socrates a troll. The problem isn't just that it's unwelcome but that it's unproductive. It proves only that the person isn't capable of supporting their own premise, not that the premise is wrong. It doesn't lead to truths on its own, and doesn't point in the direction of improved hypotheses.

Socrates then makes the assertion that he knows nothing, and is therefore immune to such treatment (and is thus superior). He's not putting himself on the line -- exactly the kind of thing that trolls do.

If Socrates asks you what "virtue" is, what can you say except, "I dunno. Why are you asking? What is it you actually want to know?"

Modern Socratic method isn't really all that similar to what Socrates actually did. It's intended to be cooperative, rather than adversarial. It's nominally based on the dialogue in Meno, which is really more about epistemology than about pedagogy (and which draws Socrates to some weird conclusions about past lives).

Even so, it's not really meant to be argumentation. It's not between equals. The teacher leads the student to "discover" the truth that the teacher already knows. Not just knows, but knows so thoroughly that they can guide the student around all of the possible mis-steps.

I'm all for respectful dialogue, but that's not really what either Socrates nor the modern pedagogues who take inspiration from him are doing. I'll be honest that I've got disagreements with the notion of respectful dialogue as well, but they're off-topic here.


> It proves only that the person isn't capable of supporting their own premise, not that the premise is wrong. It doesn't lead to truths on its own,

But that's all we can aspire to! Any statement exists because somebody is stating it. You cannot really "have" a truth that is not held by anybody; that means that you still have to find it. The Socratic method thus serves to find a person that is able to hold a certain premise, by sieving away all the people who are not. Notice that this does not yet mean that the premise is true, but it is a necessary condition.

> and doesn't point in the direction of improved hypotheses.

I do not know of any systematic method that does that. Do you? It seems to be a purely creative, not inductive, process.

Regarding the "trollishness" character of Socrates I agree with you. If Socrates was born again today, we (the society) would kill him again.


You're correct that science proceeds by creativity, and that it's not at all the rigorous process we often imagine it to be. There are plenty of contemporary philosophers of science who will point that out.

Feyerabend's approach is literally called Epistemological Anarchy. Not a lot of people really follow Feyerabend in that, not because it's wrong but because it doesn't feel very helpful. If all Socrates wants is for us to admit that we're not rigorous, all I can say is, "Yeah, sure. Thanks for telling me what I already knew."

I don't know the truth. Fine. I don't have a truthful way of finding out the truth. Also fine. The track record of science at finding things that are useful isn't really evidence of anything. That, too, is fine.

I suppose Socrates might deserve some special credit for being the first to realize that. Here ya go, here's a Socrates Snack. But it really is kinda old hat to me, even if the people practicing "scientism" still haven't realized that.

They are, perhaps, the ones who really would benefit from Socrates' work, but all I ever got from Socrates' dialogues is "No, that's a pretty stupid assertion right on the face of it, do I really need to spend 50,000 words watching this guy realize that it's stupid?"

Socrates sieves out everybody. Nobody can hold any premise. Which just leaves me right back where I started.


> It proves only that the person isn't capable of supporting their own premise, not that the premise is wrong.

This is a good outcome. It shows everyone else who sees the discussion that they are speaking on a topic that they do not understand, and should be ignored.


Using this tactic it's not difficult to prove that nobody completely understands any topic. Which is true, but unhelpful. It's very easy to show that Socrates doesn't understand the topic he's talking about, and should also be ignored.

What's needed is a different kind of interaction where we acknowledge what we don't know and find ways to reduce our ignorance. There's legitimate disagreement about how to do that; science is not nearly as rigorous a mechanism for that as we're led to believe.

But Socrates' route only leads to "Let's not bother". Which is, arguably, a worthwhile position to consider as well, but Socrates himself never actually brought it up. He'd be perfectly happy to apply the same approach to Radical Skeptics, prove them wrong, and then go back to find somebody else to bug.


This is way harder than it sounds. You can ask the followup questions, but too often, they'll get answered. Contradictions, even pointed out, aren't contradictions to them. Hitting a contradiction in the first place requires that the person being questioned applies a consistent model to the questions, and most of the time, they're not. And getting over that requires admitting that they're wrong.

When I am relating such conversations to my SO, it's often with the (modified) phrase "you can drown a horse in water, but you can't force it to drink."


> The Socratic method is good for that.

In most social situations, it is the opposite of that.

Or at least, the way most people perceive the Socratic method is the opposite of that.

In the communications books I've read, the emphasis is to ask questions only out of genuine curiosity, and on top of that, find ways to signal it (including with body language). Never ask questions to make a point. Never ask what could be perceived as a leading question. Even if you are genuinely curious, but don't signal it well, chances are high the other party will interpret it as you trying to make a point and will respond poorly.

In sum: Have a conversation. Express your perspective. Ask questions only if you don't understand.

Bad Question: "If that were true, how would it explain Y?"

Instead: "The trouble I'm having with that perspective is that it doesn't seem to square well with Y."

The latter is expressing your perspective, and is seen as a contribution to the conversation. The former isn't.


> In the communications books I've read, the emphasis is to ask questions only out of genuine curiosity

Then these books are presenting a pastel-colored, declawed, decaffeinated version of the Socratic method. This may be useful and sane advice, but Socratic method it is not. Recall that Socrates was an annoying, unkind person known as the "gadfly", who was hated by many people, and who was ultimately condemned to death by his thought-provoking questions. On the dialogues, you'll see that he does not follow at all the childish advice of your "communications books".


BTW, the whole thread began with:

> Probably there's some polite way to express your disagreement without saying "actually here's why you're wrong".

And you said "The Socratic method is good for that."

And also:

> This is often deemed as trolling or sealioning when the discussion is unwelcome (which often is), but when people are open-minded it is a respectful way to argue.

And then in your response to me you say

> On the dialogues, you'll see that he does not follow at all the childish advice of your "communications books"

So let's discuss:

You suggest the Socratic method is useful when people are open minded. You then go ahead and assert the books are childish, without even knowing which books I'm referring to. Nor is it even clear what "childish" means - it's essentially a statement void of meaningful content. If you advocate for the Socratic method, claim it is useful primarily when people are open minded, and in this whole discussion you exhibit close-mindedness, how often do you think the method will be useful at large? I have little hope in it, and I lost too many years of my life practicing it. It was only when I experienced failure after failure did I learn that the problem wasn't with others, but with my approach. Instead of conveniently labeling people (e.g. "close minded"), I needed to improve myself.

Fully open minded people are rare. 10% would be a serious overestimate. A lot more are open minded in some areas, but open minded on most things? Extremely rare - even amongst academics. Moreover, it is usually the areas in which they are close minded that need to be discussed and addressed. I would rather search for a method that has a higher chance of working on "close mindedness" because that is the majority of situations. And in my observation, such methods work even on open minded folks.

And this isn't even getting to the issue where the person asked for a "polite" way, and you emphasize that Socrates was anything but.


I was not being very deep here. It is mostly a matter of context.

The Socratic method is widely used in a scientific or technical context, where a person is expected to answer outlandishly skeptical claims against their proposal. This is considered routine if it is (sometimes implicitly) agreed upon; but to an outsider it may appear extremely aggressive and impolite. Mathematics is an extreme case of this, where normal mathematical dialogue often takes this form, with one person trying desperately to find holes in the proof of another. But a discussion style that is in good taste between mathematicians cannot be safely applied to "regular" people, who will surely react as if you were attacking them in bad faith. This is when the hemlock mob comes to kill you.


Indeed, if you want to get the results Socrates did, by all means, follow his advice. Not all your interlocutors will be able to locate hemlock quickly, though :-)


> Or trying to argue with someone who believes in "systemic white supremacy" by pointing out that Asian-Americans earn more money than white Americans.

White Supremacy is the belief that white people are inherently superior to everyone else and should dominate, regardless of how much money anyone makes. There are plenty of poor white supremacists.

Did you mean systemic social privilege? https://en.wikipedia.org/wiki/Social_privilege

>I don't know what to do in these situations.

Empathy and understanding.



> The term white supremacy is used in some academic studies of racial power to denote a system of structural or societal racism which privileges white people over others, regardless of the presence or the absence of racial hatred. According to this definition, white racial advantages occur at both a collective and an individual level (ceteris paribus, i. e., when individuals are compared that do not relevantly differ except in ethnicity). Legal scholar Frances Lee Ansley explains this definition as follows:

>By "white supremacy" I do not mean to allude only to the self-conscious racism of white supremacist hate groups. I refer instead to a political, economic and cultural system in which whites overwhelmingly control power and material resources, conscious and unconscious ideas of white superiority and entitlement are widespread, and relations of white dominance and non-white subordination are daily reenacted across a broad array of institutions and social settings.

This makes sense to me, and doesn't seem related to how much money Asian Americans make.

Have you heard of redlining? It might make "systemic white supremacy" make more sense in an American context. If you've heard of Martin Luther King Jr, this is why he was assassinated.

https://en.m.wikipedia.org/wiki/Redlining


> "The study was centered around a Twitter field experiment in which a research team offered polite corrections, complete with links to solid evidence, in replies to flagrantly false tweets about politics."

Aside from accounting for the unstated biases of their identification of "false tweets", what about offering falsehoods to correct tweets?

That is, might the real conclusion be "Users post more in general when other users post conflicting views"? Or is this not the case?


I'd question whether the abruptly clipped nature of the conversational medium (Twitter) automatically leads to more defensive responses. Some of these communication tools completely lack the nuances of human-to-human conversation.

For example, pre-COVID: you are having a nice cup of joe with a chocolate cake at a local coffee shop, sitting in your favorite chair. You happen to engage in a conversation with a stranger, someone with whom the conversation is intriguing, albeit completely out of your regular circle.

If the stranger, with an attentive face and no negative emotions, in a stable voice, provides some completely contrary information to something you think that you know how might you take it?

Now, as a thought exercise, instead imagine the same as a simple blurb on Twitter. No context, bereft of humanity, no indication that this person is a friend, an enemy, or merely reposting talking points from whichever biased news media they follow.

Would you think that the latter would be more likely to result in a negative response, only from the medium difference (and what it lacks)?


Isn't this obvious? We live in a hyper-connected world where sentiment can take a dive within minutes when something starts trending. We also live in a world where every well-funded organization has a propaganda arm.

Take Israeli propagandists as an example. What do you think happens when a hashtag like #SheikhJarrah starts getting attention and activity measured in the thousands per hour? You think they will just ignore these threads? Of course not. Troll networks get notified. Volunteers in all kinds of pro-Israeli organizations are mobilized and directed to specific conversations. Suddenly, it seems like every active thread is getting attacked by personalities each responding with very similar claims and very similar approaches.

I believe this happens across the board and not just with political/ideological issues. You are not going to have an online platform allowing people to share information widely without attracting the interests of propagandists eager to improve/maintain the images of their clients.


On the Stack Exchange podcast back in the day, Joel Spolsky mentioned a desktop messenger-style app of some sort where people would post links to forum posts that needed some coordinated correcting of the record on Israeli affairs.


Yeah. I’m sure there are all kinds of tools today — maybe even publicly accessible.

Imagine how this is used today both to distort the facts and to distract the public from topics relating to Israeli crimes in Gaza/Jerusalem.


> Among other findings, the researchers observed that the accuracy of news sources the Twitter users retweeted promptly declined by roughly 1 percent in the next 24 hours after being corrected.

1 percent? How do you measure a 1 percent decline in news source quality? This sentence throws doubt on the whole idea, IMO.


The paper is online: https://dl.acm.org/doi/pdf/10.1145/3411764.3445642

> To measure users' subsequent behavior after receiving the correction, we focused on three main outcome variables. Most importantly, we considered the quality of news content shared by the users. We quantified the quality of news content at the source level using trustworthiness scores of news domains shared by the users based on a list of 60 news domains rated by professional fact-checkers (this list contains 20 fake news, 20 hyperpartisan, and 20 mainstream news outlets where each domain has a quality score between 0 and 1) [39]. A link-containing (re)tweet's quality score was defined as the quality of the domain that was linked to. (Quality scores could not be assigned to tweets without links to any of the 60 sites.)

Citation [39] is Pennycook, G. & Rand, D. G., "Fighting misinformation on social media using crowdsourced judgments of news source quality." I am not sure what "crowdsourced" means here because I didn't read all the citations.

But it sounds like they have a plausible metric. It’s always very dumb to criticize science based on a press release.
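
For what it's worth, a minimal sketch of how a source-level metric like that could be computed. The domain names, scores, and function below are invented for illustration; the study's actual pipeline is in the paper:

  # Hypothetical sketch of a source-level quality metric; the real
  # study uses 60 domains rated 0-1 by professional fact-checkers.
  from urllib.parse import urlparse

  DOMAIN_QUALITY = {            # made-up stand-ins for the rated list
      "mainstream.example": 0.9,
      "hyperpartisan.example": 0.4,
      "fakenews.example": 0.1,
  }

  def retweet_quality(urls):
      # Score each link by its domain's rating; tweets linking only
      # to unrated domains get no score, as the paper describes.
      scores = [DOMAIN_QUALITY[urlparse(u).netloc]
                for u in urls if urlparse(u).netloc in DOMAIN_QUALITY]
      return sum(scores) / len(scores) if scores else None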


I'm not sure what you are arguing here. Do you think 1% is a significant finding? What was the margin of error?


I don't really know if it's a significant finding. My point was that you asked "How do you measure a 1 percent decline in news source quality?" and suggested that your personal confusion about such a thing throws the entire paper into question. Instead of passing such strong judgments, why not just read the paper?

I am personally not sure how astronomers measure gravitational redshift with such apparent accuracy. But my ignorance does not mean astronomers are a bunch of frauds. It means I haven’t done the required reading.


Doubt is not a strong judgment. I was hoping someone had the details and would clarify. I appreciate that you did; you've turned my doubt into certainty. They did not precisely measure a 1% change in news source quality.


This thread epitomizes the paper.


Gravitational redshift didn't convince you?


Lecturing people doesn't work because there is no trust.

We should not assume that authority derived from credentials or from factually supported logic is sufficient to convince; without trust it is merely anecdotal.


The key if you want to spread a big lie as quickly as possible is to make it as outrageous and threatening as possible. Tell people someone is going to hurt them, or destroy the country or whatever, and they’ll be way too upset to check facts and will spread the lie like wildfire before the fact checkers can catch up.

If you read some news story or editorial, ask yourself: how did it make me feel? Was it upsetting and threatening? Did it demand immediate action? Am I afraid or angry right now?

When that happens, double check.


Across the board, correcting and confronting people, no matter how you do it, leads them to double down, in most cases. Source: reality.


I think this is an extremely important topic. But I don't think it's so much THAT users are corrected as it is HOW users are corrected.

> To conduct the experiment, the researchers first identified 2,000 Twitter users, with a mix of political persuasions, who had tweeted out any one of 11 frequently repeated false news articles. All of those articles had been debunked by the website Snopes.com.

And here lies the issue. We don't "correct" the issue with a discussion born of genuine curiosity; we "correct" it by making an appeal to authority. Like the XKCD #386 comic, we're an obsessive dog licking at its wounds -- we can't go to sleep. Someone is wrong.

I haven't encountered a single "Flat Earther". I have encountered one genuine "anti-vaxxer". But from the way the discussion goes on the internet, I'd expect they're behind every tree.

When people rush over with a link on "Snopes", and then smugly sit back thinking, "Checkmate" -- it worsens the issue. The reason it makes matters worse is because we're acting like Dwight Schrute: "False. CNN has not purchased an industrial washing machine to put a spin on stories. News stories cannot be placed inside a washing machine."


Massive global communities were a mistake. We're not cut out for them.



> Examples of these pieces of misinformation include the incorrect assertion that Ukraine donated more money than any other nation to the Clinton Foundation, and the false claim that Donald Trump, as a landlord, once evicted a disabled combat veteran for owning a therapy dog

Both of these examples are the kinds of things that people trot out to show "I don't like Clinton" or "I don't like Trump". To correct someone on them is perceived as an attempt to correct the core assertion, the dislike, via correcting said bit of trivia. This will result in increased hostility.


I've never even heard of that claim about Trump, but rather the multiple times he's been in court for deceiving prospective black renters and/or evicting them illegally.


Any form of refutation can be taken by conspiracists as evidence of the conspiracy.


In essence, don't feed the trolls, ignore them.


I mean, I’ve been corrected on a false claim before (I claimed that the medical industry was going to charge an arm and a leg for the covid vaccine; I was completely, completely wrong) and I felt pretty chastised by the experience. I resolved to be more responsible about checking facts and making assumptions. But we have good mods on hacker news. Maybe that’s what Twitter needs too?


oh no they don't!

/Monty Python "Argument Clinic" sketch


Whenever Bernie Sanders or some other highly polarizing politician makes a sweeping claim, I have found that comments can be effective, such as on Reddit, for at least setting the record straight, putting it in the correct context, or showing counterexamples.


>Yes, in some ways. A new study shows Twitter users post even more misinformation after other users correct them.

I can't help but feel that the academics studying this "problem" are blinded by hubris. Even the subheading is exemplary: what's being described is simply a discussion.

When you gatekeep science in the public square with "fact checking," you inevitably end up with a politicized orthodoxy. The opinions and majority consensus of our academic institutions are not beyond reproach, and there have repeatedly been instances where the messaging was misleading or false - look no further than the discourse surrounding covid starting early last year. The latest example is the lab-origin hypothesis - a kooky, right-wing, xenophobic conspiracy theory, until it wasn't. Fortunately, media outlets are finally backtracking on their politicized "fact checking" in this case: see the editor's note here [0], for example.

0. https://www.vox.com/2020/3/4/21156607/how-did-the-coronaviru...


The actual issue is that 90% of "falsehoods" aren't proven as false by the WOKE gatekeepers of modern discussion...but with certainty, they label and dismiss them as "falsehoods" when the cognitive dissonance of their personal politics kicks in.

Then they say "the burden of proof lies on you", but considering that digging up proof on these types of things (in the short term) often relies on further discussion and research, it's kind of a catch-22 to continue the debate.

Latest case in point: Covid was a super-virus created in a lab with funding personally approved by Fauci (who is now enriching himself from the debacle)... and made its way out of the lab and into the general population.

Next case in point: The 2020 election machines actually were hacked, and results were tampered with. Gasp! Censor that wrong-think!


> Then they say "the burden of proof lies on you".

> The 2020 election machines actually were hacked, and results were tampered with.

I'll assume you believe the things you cite. Geez, "cognitive dissonance of their personal politics" indeed. The "evidence" given by Giuliani et al. about the hacks is so fucking flimsy that only the Trumpers can call it legitimate. They claim "this footage shows..." even when no smoking-gun shenanigans can be seen in it.

If the "legitimate" election results should've gone to Trump, that would mean team Biden tampered with hundreds of thousands of ballots. That scale of tampering is very hard to hide, but so far no solid evidence has been unearthed. Is it a few people, or many people? If it's many people, somebody usually blurts out something and the conspiracy is unravelled. If it's just a few people, how do they manage to tamper with hundreds of thousands of ballots? Footage of people handling small boxes can't account for the amount of fraud that's being alleged.


> the researchers observed that the accuracy of news sources the Twitter users retweeted promptly declined by roughly 1 percent in the next 24 hours after being corrected

How do you measure accuracy in single digit percents? That seems impossibly precise.


Before: 100 users, 100 accurate retweets.

After: 100 users, 99 accurate retweets.

The study followed 2,000 users, and I can only assume they all retweeted multiple times, say 10 each, so you get 20,000 retweets. Seems plausible to me.


Maybe it’s aggregated. If someone retweets seven stories, and four are deemed ‘accurate’, that’s about 57%.
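
Either way, once you average over the whole sample, a ~1 percent move in the mean takes very little. A toy calculation, with all numbers invented:

  # Toy illustration (numbers invented): with 20,000 scored
  # retweets, the mean drops about 1% if ~280 shares move from a
  # 0.9-rated source to a 0.4-rated one.
  n = 20_000
  mean_before = 0.700
  mean_after = mean_before - 280 * (0.9 - 0.4) / n
  print(mean_after)  # 0.693 -> roughly a 1% relative decline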


"they retweeted news that was significantly lower in quality and higher in partisan slant, and their retweets contained more toxic language."

Change the word "retweet" to "publish/broadcast", and that describes most "mainstream" journalism.

I stopped when they referenced Snopes as a "fact checker"....the same Snopes that "fact-checked" a satirical article in the Babylon Bee about whether Brett Kavanaugh should prove via DNA that he's not the son of Hitler. [1]

The authors should write an article on the more fundamental issue of 'truthiness' -- it was once a joke by Stephen Colbert, but now people actually talk in terms of "my truth" and "her truth" -- not _the_ truth.

If gender is a state of mind, then why not race, or even species? With objectivity lost, then they're surely unlikely to find its vestiges on Twitter.

[1] https://www.snopes.com/fact-check/democrats-demand-kavanaugh...


>If gender is a state of mind, then why not race, or even species?

Why do you need gender and race to be objective?

As for species, we've gotten that wrong too https://www.amazon.com/gp/aw/d/019501958X


  If gender is a state of mind, then why not race, or even species? 
Sorry, mule. You have to choose /s


> If gender is a state of mind, then why not race, or even species?

Race is absolutely a state of mind, more so than gender. All else being equal, a person who would identify as "POC" in the USA would simply be seen as "white" in many other Western countries.


> Race is absolutely a state of mind, more so than gender.

Yes ...

> All else being equal, a person who would identify as "POC" in the USA would simply be seen as "white" in many other Western countries.

I'm struggling to think of what you mean here? The 20th century saw the transition of various southern european ethnicities into being "white" where they might previously not have been. And also a brutal war in the ruins of Yugoslavia among ethnicities that outsiders would find hard to distinguish and label together as "white".

Race is a state of mind - both in oneself and in the eyes of other people.


> I'm struggling to think of what you mean here?

An example would be how a broadly non-American audience commented on a video entitled "People Guess Who is White In a Group of Strangers" (though, to be fair, some American commenters were equally shocked) [0].

You are absolutely right about the huge variety of ethnic identities and conflicts in different cultures, and how they cannot be understood under the mores of a different culture. That is the spirit of my comment, offering a counterpoint to the implication that the idea of "race" is somehow objective.

[0] https://old.reddit.com/r/ShitAmericansSay/comments/7d4aff/sa...


2+2=5


> I stopped when they referenced Snopes as a "fact checker"....the same Snopes that "fact-checked" a satirical article in the Babylon Bee about whether Brett Kavanaugh should prove via DNA that he's not the son of Hitler.

Are you suggesting that when something that is false is being circulated as true, fact checkers should ignore it if it originated on a satire site?


It's amazing how many people think that the Babylon Bee isn't satire, while there's little confusion over the Onion. Both state clearly that they're satire. Does Snopes fact-check the Onion? Do you think they should?


If an Onion article gets widely circulated on social media in such a way that people aren't realizing it is satire and people ask Snopes about it, then yes, they should do a fact check article on it stating that it was satire that originated at The Onion.

> It's amazing how many people think that the Babylon Bee isn't satire, while there's little confusion over the Onion.

I think there are a few things that contribute to this.

1. The way sharing works on many social media sites is that when you share a link to a story at some other site the social media site embeds the story in your post. It links back to the original site, but many people who see your post will just read the embedded story rather than click to go read it on the original site. They see the satirical article by itself rather than in the context of a whole site full of satirical articles, making it easier to not realize it is satire.

2. The Onion is more well known.

3. Due to things like Q, the Bee's satire is making claims that are often less wild than things that people already believe, reducing the chances that they will realize it is satire.


People on the left don't realize that the "fact checkers" are an extension of the propaganda arm of the left.

For anyone who disagrees, take a look at the "fact-checker" Twitter profiles here and tell me with a straight face they aren't literal mouth puppets of the extreme left and/or the communist party: https://www.truthorfiction.com/our-team/


Well, except what is "true" is more like what is currently politically correct. Stating that the virus was created in the Wuhan lab got you banned, until last week. Stating that the election was rigged? Same. We will see which way the Arizona audit goes.


Exactly. Could they beg the question any harder? "Sometimes people post more blasphemy even after I show them the relevant passage in my book of scripture."


Three things.

"'...perhaps to other social factors such as embarrassment.'"

There is a lot of interesting philosophy around this. In 2019 Partially Examined Life [1] did an interview with Francis Fukuyama on his book "Identity: The Demand for Dignity and the Politics of Resentment (2018)"

I was reading the HN post on research from Newcastle University about eating with your mouth open, published in the Guardian. An HN user shared their own violent thoughts in reaction to this behavior. What's more, this recalled for me a former friend from Newcastle, whom I'd very much like to punch in the nose. Though, not really. (I'm not a violent person; I just grew up in the 80s, when the answer to everything was a punch in the nose. I liked the thought because it seemed ... suited to the medium?)

Tie it up in a bow and throw it all away. That's part of human nature. One word in this article was sufficient to bias me against it, where another time I would dismiss it as Twitter-quality research and move on.

Then I read this article on Twitter-quality research. I'm a proponent of "words matter." Take this passage from the article:

“The difference between these results and our prior work on subtle accuracy nudges highlights how complicated the relevant psychology is,” Rand says

Complicated or Complex? Recall the maxim: a Swiss watch is complicated. A cockroach is complex.

Really? Do researchers read other people's research? Complicated-Complex distinction. It's all the rage. This research is $#!t. Frack it! AGGGG!

[1]: https://partiallyexaminedlife.com/2019/02/11/ep209-1-fukuyam...


Ok. That last bit was totally sarcastic pantomime.


> The study was centered around a Twitter field experiment in which a research team offered polite corrections, complete with links to solid evidence, in replies to flagrantly false tweets about politics.

yeah i don't really trust or believe these "researchers". case in point, from "fact checkers":

https://twitter.com/GlennKesslerWP/status/125626793122004992...

https://twitter.com/GlennKesslerWP/status/139716616659076711...

and snopes in particular is very biased. i remember when they labeled a claim by aoc as “factually untrue but morally correct”



