I have been feeling depressed for a few days, pondering the general credibility crisis that prevails over human communication as of late.
The spread of low-quality content or outright disinformation is just one aspect of the problem. There are those who stand to gain, politically or financially, from steering the global conversation to their advantage, and we've seen an endless parade of examples that have elevated astroturfing and manipulative deceit from a curiosity into an exact science and art form: Cambridge Analytica, Facebook and their PR firm, Bannon and Breitbart, the Russian troll farms, Twitter bots and fake profiles, the inexplicable fact that FB knows and tells advertisers exactly how many times their ad was shown and bills them accordingly, yet needs to develop new technology to figure out exactly how those very same features were used to meddle in an election, sow discord and amplify animosity.
OTOH, I can read and hear all sorts of opinions from all kinds of experts in every field whom I no longer trust, since I don't know whether they are using their audience for profit and ever so subtly shifting the perception of those who listen: it puzzles me to no end that nowadays Microsoft is considered open source's best new friend, the same Microsoft that no more than 15 years ago was covertly funding SCO's lawsuits against Linux.
I feel like advertising has seeped into the very fabric of human communication, not to better humanity but for the short-term gain and selfish goals of the few who can afford such services.
The internet used to be an Electronic Frontier where everybody could be who they really were and speak their minds. Now it's a poisoned cesspit where everybody lies about everything and those who lie best get to sell you stuff. </3
That, and the fact that various parts of the 'expert class' have largely discredited themselves; from the replication crisis to the death of physics as a field... we don't need to blame bots; the trust level is low.
It's always been that way. Seriously, the most valuable skill the internet has helped me foster is my bullshit-o-meter. The actual problem I see first-hand is that most people haven't exercised that skill at all. They've been living in groups small enough to sustain high levels of trust between members, and they project that trust onto the rest of the internet: they are far too trusting and do zero research before they start repeating what they've read or heard. And we're not punishing that behavior; your reputation for information quality should go down if you repeat clearly false information.
There are absolutely tons of false positives, but the bar is set very, very high for me to actively voice my support or share information. That might even be the more important skill, given the sheer volume of information being generated and shared across the world. Very rarely do I voice my disagreement with stories or ideas; not supporting or sharing them seems to be good enough. Very few things are strictly true or false; most knowledge drifts around on a continuous scale between 0 and 1.
Yes. It's a nice balance between being open to new ideas, researching controversial ones, and taking my time to register myself as supporting or disagreeing with an idea. It's critical thinking with healthy skepticism, while being unafraid to let my imagination entertain very controversial hypotheses. But the real key might just be how slowly I let ideas settle into agree/disagree, or perhaps that my middle ground is far larger: I entertain a wide variety of conflicting ideas while only sharing and voicing support for the small set I'd truly stake my reputation on.
During the English revolt, censorship of the press was suspended, and people could publish anything. And they did. And inaccuracies spread rapidly.
It became common to argue that England had once enjoyed a rough democracy during the Anglo-Saxon days, even though there is no evidence of that.
It became common to argue that studying the Bible was unimportant, compared to the importance of being moved to speak by the Holy Spirit.
A number of establishment figures thought they could stop the spread of error simply by writing books pointing out the errors -- which seems very similar to what is happening now.
After the King was killed and the official Church's legitimacy curtailed, a problem arose: no one had the legitimate authority to determine whether someone was the Second Coming of Jesus Christ, so more and more people began to claim that they were, in fact, the Second Coming of Jesus Christ. And the competition among these self-styled Second Christs somewhat resembles fights among modern-day influencers on YouTube.
In the end, the public became exhausted by the way nothing seemed to have any legitimacy, at which point the public grew nostalgic for having a King. And this made it inevitable that the monarchy would eventually be restored.
Ditto Neal Stephenson's "Anathem" which posits an alternate-Earth history with a future of widespread info war (quoted at https://news.ycombinator.com/item?id=14554592 ) and the evolution of a special quasi-priest class of people whose unique ability was to filter out the crap and find true things on the Internet.
In Anathem, to deal with spam, the advanced pre-collapse society intentionally built machines that sent out the intended message along with millions of tweaked variants, making everything untrustworthy at first glance.
I personally think this state of affairs is the solution to the misinformation regime we find ourselves in. We need to combat bots with more bots that tweak and churn the messages promoted by everyone else, forcibly lowering the superficial credibility of all information.
My hope is that by doing this, we can supplant the misinformation regime with a white noise regime that is no better or worse than pre-Internet communications for superficial (aka unsigned) traffic.
FWIW, this strategy is commonly used to counter traffic analysis of communications channels (e.g. encrypted military links), so it's not a new idea and it does work.
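As a toy illustration of that chaff idea (everything here, from the synonym table to the message, is invented for the example; no real system works like this), one might generate decoys by randomly swapping words for near-synonyms and shuffling the real message in among them so no single copy stands out:

```python
import random

# Hypothetical synonym table used only for this sketch.
SYNONYMS = {
    "injured": ["hurt", "wounded", "harmed"],
    "hospital": ["clinic", "medical center", "ER"],
    "visited": ["entered", "stopped by", "was seen at"],
}

def make_decoy(message, rng):
    """Swap some words for near-synonyms to produce one tweaked variant."""
    out = []
    for w in message.split():
        alts = SYNONYMS.get(w.lower())
        if alts and rng.random() < 0.5:
            out.append(rng.choice(alts))
        else:
            out.append(w)
    return " ".join(out)

def broadcast(real_message, n_decoys=5, seed=0):
    """Return the real message shuffled in among n_decoys tweaked variants."""
    rng = random.Random(seed)
    batch = [real_message] + [make_decoy(real_message, rng) for _ in range(n_decoys)]
    rng.shuffle(batch)
    return batch

for line in broadcast("Obama visited the hospital"):
    print(line)
```

This mirrors the dummy-traffic trick from traffic analysis: an observer who cannot tell the real message from the chaff learns nothing from any single copy.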
Ha that is interesting, but does that really apply to our fake news problem nowadays? Let's say a fake news site creates an article "Obama died in a hospital visit 3 weeks ago and was replaced by a robot."
Should we now create 150 different websites that spread 150 slightly different versions of this? Who would gain from that?
I'm just thinking out loud.
Oh, I think your point is that one of those 150 links would have to contain the truth and the rest of the 150 would slowly edge towards it. Something like this:
"Obama was injured and then replaced by a robot"
"Obama was injured and then was given robotic implants to heal"
"Obama was injured and given a pacemaker"
"Obama visited the hospital for a routine checkup, minor cold revealed"
"Obama did not visit the hospital 3 weeks ago, he was at a campaign rally"
Would you really say that you have helped the internet/humanity if you did that to every fake news link?
Even if this were so sophisticated that it auto-generated new domains and new content... people would just revert to following CNN/Fox News/[standard outlet]. Then the people who read these fake news links would have an even harder time figuring out whom to trust. Or is the goal perhaps to push people towards mainstream news outlets? That is the only outcome I can imagine from such an approach.
I think the idea is that when it is obvious that all information from unverified sources is false, people will start to rely on the (cryptographically) signed, accredited sources when they want the 'real deal', and not let themselves be misled when those signatures are missing. Some level of white noise is needed (enough to encounter multiple versions of the fake news) for people to recognize the value of checking the signature.
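A minimal sketch of that signature-checking habit, using Python's standard-library HMAC purely as a stand-in for the public-key signatures (e.g. Ed25519) a real outlet would publish; the key and article text here are made up for illustration:

```python
import hmac
import hashlib

# Hypothetical signing key; a real outlet would use an asymmetric key pair
# so readers only need the public half.
SECRET = b"outlet-signing-key"

def sign(article: bytes) -> str:
    """Produce an authentication tag for the article."""
    return hmac.new(SECRET, article, hashlib.sha256).hexdigest()

def verify(article: bytes, tag: str) -> bool:
    """Accept the article only if its tag checks out."""
    expected = hmac.new(SECRET, article, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

article = b"Obama was at a campaign rally three weeks ago."
tag = sign(article)
assert verify(article, tag)                      # authentic copy passes
assert not verify(article + b" (edited)", tag)   # any tweak fails the check
```

The point is exactly the one made above: amid the white noise, only copies that carry a valid signature are worth a second look.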
But maybe the white-noise mechanism is not needed. There may already be enough erosion of trust by 'black' noise to give platform builders the incentive to add authenticity methods to their products and see widespread adoption of their use.
Fox News proving to me that their latest article is actually from them only helps people who already trust that source.
The reason Assange et al. post signatures is that they don't have control over Twitter. Ownership of the domain is already a form of authentication/signature that is more than sufficient for just about everybody, and source authentication is definitely not the main problem fake news is about. Verifying that the author is who you think it is is probably the smallest, most insignificant part of fake news. Far more central is that the content isn't false.
How do we prove that something is false? We usually can't, so at best we can try to find flaws in the reasoning, or quotes that are wrong, and say 'probably false'. That's what fact checker sites are doing, they give out grades. In my opinion the fact-checker approach is the best we can do so far; the problem, however, is now identical to mainstream news: corruption.
These fact checkers inevitably mess up, or skew their grading to serve their ideology or purse, which has arguably already happened, and now we don't trust fact checkers anymore either.
Maybe this is an uncomfortable thing to say, but this entire fake-news escapade may just be a natural cycle that happens when corruption becomes too great and competition emerges. If we accept that reasoning, then fake news is just one ugly side effect, but there are good side effects too, like new news sites emerging that may use outrageous new content, or superior ethics, as their selling point. Hopefully the latter prevails, but the cycle of gaining and losing trust will continue for as long as human beings are fallible.
> That's what fact checker sites are doing, they give out grades.
Hah! So also in Anathem, there are other machines that do this. The design the author wrote into the story involves two species of machines that run at full speed with 100% uptime: one to revise and tweak the facts of a story and, separately, one to assign grades. Basically a world-wide generative adversarial network.
From the attacker's point-of-view, in order to deliver a false message they are forced to try to fight through a gauntlet of independent machinery that will first generate a bunch of alternatives and then will look at any particular story and assign a grade with knowledge that it's probably being attacked. That could be a very tough filter to consistently navigate, especially if our attacker is trying to conduct a broad campaign of misinformation.
From the victim's point-of-view, every piece of information they read now is associated with a score provided by their fact checking filters -- and there is no reason not to have multiple layers of grading filters.
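One simple way to layer multiple grading filters, assuming each filter independently outputs a probability that a story is genuine, is to sum their log-odds (a naive-Bayes style combination); the filters and scores below are invented for the sketch:

```python
import math

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def combine(scores):
    """Fuse independent probability scores by summing their log-odds,
    then map back to a probability with the logistic function."""
    total = sum(logit(p) for p in scores)
    return 1.0 / (1.0 + math.exp(-total))

# Three hypothetical filters each mildly doubt a story; the fused grade
# is more decisive than any single filter's score.
grades = [0.3, 0.4, 0.35]
fused = combine(grades)
print(fused)  # noticeably lower than any individual grade
```

The design intuition is that several weakly skeptical, independent graders compound into strong skepticism, which is why stacking grading filters can be a tough gauntlet for an attacker to navigate consistently.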
"The world is going to crack wide open. There is something on the horizon. A massive connectivity. The barriers between us will disappear, and we’re not ready. We’ll hurt each other in new ways, we’ll sell and be sold, we’ll expose our most tender selves only to be mocked and destroyed.
We’ll be so vulnerable, and we’ll pay the price. We won’t be able to pretend that we can protect ourselves anymore. It’s a huge danger, a gigantic risk, but it’s worth it, if only we can learn to take care of each other, then this awesome destructive new connection won’t isolate us. It won’t leave us in the end so totally alone." - https://medium.com/@chrstphrmllr/you-are-not-safe-bebb0538e1...
The problem was that they got attention. I still dispute that they had any relevant impact beyond providing talking points for political campaigns, whose dishonesty rivals and probably exceeds that of some trolls.
> it puzzles me to no end that nowadays Microsoft is considered open source's best new friend
Nobody should see it as anything other than an attempt to win back the developers who left for greener pastures. Microsoft wanted to be Apple, and in doing so removed any advantages their platforms offered. And the quality of Windows 10 is abysmal.
> The problem was that they got attention. I still dispute any relevant impact besides the talking points for political campaigns. Which rivals and probably exceeds the dishonesty of some trolls.
Can't judge the impact, but there is no question about their ubiquity. The bulk of these "laura freedom", "vets for trump", "deplorable sandy" and similar accounts with hundreds of thousands of tweets are Russian operators. Often you'd look at their likes, and they're full of Cyrillic tweets they had to amplify for the Motherland due to some minute home-front need. Sometimes I'd tweet them a humiliating comment in Russian that Bing would never be able to translate, and get blocked within seconds.
Twitter today likely has close to 50% of accounts being bots, and 80% of content generated or circulated by bots. They also have everything they need to shut the bots down, but doing so would halve their user base and likely cut engagement metrics to a tenth.
>The internet used to be an Electronic Frontier where everybody could be who they really were
Come on: "On the internet, nobody knows you're a dog" is quite an old cartoon at this point. It may have been a place where everybody COULD be who they really were, but it was also, from the first, a place where people could pretend to be who they weren't.
I think this is just the cycle of deep "disruption" - naive optimism/hype spurring development without regard for how the general public behaves, followed by speculative investments that lead to mass adoption, diluting ethics in the sell out, which attracts unsustainable exploitation... adoption continues to increase as trust falls... then, eventually, enough people burn out that cynical realism reaches the critical mass required for cultural/legislative regulations. Finally, we end up with a mediocre sustainable system - a stable playing field for the next disruption.
Things look bleak to the early adopters that saw dreams morph into nightmares... but this too shall pass.
Re: Microsoft, one of the early tests of a Russian fake-news source was a fabricated claim that Windows 10 sends highly personal information like webcam video, keystrokes and mic audio to the mothership.
It got a bunch of traction here and took off big time on Reddit, and people would cite it as proof in comments for months.