Hacker News

How is decentralized twitter supposed to solve misinformation?

It mostly makes moderation (or censorship) impossible.




I'm surprised every time I see people here asking for the evil that is censorship.

Dorsey wants a decentralized system exactly because he won't be held liable for not censoring people. It makes sense business-wise and it's a win-win for humanity.

Twitter has become a trigger-happy-people-fest because each group knows it can weaponize censorship against the others. With systems that are uncensored, nobody complains; e.g. people don't make a fuss that email is not censorious enough.


All moderation is censorship, to some degree. Even a voting system like HN's has censorship-like qualities: it is specifically designed to make unpopular comments less visible than popular ones. And that is fine, I wouldn't want to only have forums or social media with zero moderation.

Censorship only becomes a problem when it is _unavoidable_, that is, you cannot choose to live outside of some entity's censorship influence. This is where things get complicated when you have giant entities like Facebook or Twitter, which are for many people difficult to avoid (and there certainly isn't an alternative to them).

The problem arises when these entities need to choose which messages to show you. Even without putting any conscious bias in this algorithm, there is a system there which prioritizes some messages over others in a completely opaque way, which is often game-able (bad-faith actors can abuse the system to spread their propaganda, for instance). So moderation is not merely necessary but _unavoidable_, simply because there is more content than you can be presented with, and unfortunately the huge power of moderating large social media networks rests in the hands of a handful of engineers.

Decentralization is an interesting solution here because it would make the social graph and the recommender system separate entities, with clients being able to choose which recommender/moderation system they subscribe to. The need for censorship wouldn't be gone, but with people being able to choose their own censor, the massive power imbalance is not quite as bad.
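To make the separation concrete, here is a toy sketch (all names hypothetical, not any real protocol's API) of "social graph and recommender as separate entities", where the client, not the platform, picks the ranking function:

```python
# Toy model: the social graph is one service; recommenders are
# interchangeable ranking functions that the client chooses between.

def follower_feed(social_graph, user):
    """Raw, unranked posts from accounts the user follows."""
    follows = social_graph["follows"][user]
    return [p for p in social_graph["posts"] if p["author"] in follows]

def chronological(posts):
    """One possible recommender: newest first."""
    return sorted(posts, key=lambda p: p["time"], reverse=True)

def by_likes(posts):
    """Another recommender: most-liked first."""
    return sorted(posts, key=lambda p: p["likes"], reverse=True)

def timeline(social_graph, user, recommender=chronological):
    """The client supplies whichever recommender it subscribes to."""
    return recommender(follower_feed(social_graph, user))
```

Swapping `chronological` for `by_likes` (or a third-party moderation-aware ranker) changes the feed without touching the underlying social graph.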


I think censorship is no longer censorship when it happens purely on the client side, it’s just filtering. As long as the tweets all continue to exist out there somewhere, someone can write a client to display them.

This will, I predict, make a lot of people unhappy because “omg there are Nazi tweets on my blockchain” but personally I’m happy with it.


I think there is a difference between suppression of minority opinions and censorship, which completely bans them. It depends on the definition, of course, but in its most common use the word censorship is not subject to gradation.


>I'm surprised every time I see people here asking for the evil that is censorship.

I'm sadly not surprised. While HN does attract hobbyists like me interested in technology and how it affects society, it also attracts a lot of those who dream of building and controlling the next big data silo for personal profit. Permanently liberating an entire mode of communication into an uncensorable distributed system that can't be easily replaced due to the network effect (such as your example of email) precludes that path to profit.


> It makes sense business-wise and it's a win-win for humanity.

Does it make sense business-wise? What business will want to be on a system that has no moderation? It's just asking for your brand to be accidentally or intentionally associated with undesirable things.


Google is in the email business.

Tons of businesses send promotional emails, even though they know the protocol is also used for sending porn.


And the only reason businesses still use email is Google's development of advanced and almost effective content moderation.


The difference between Google's moderation and Twitter's is that, in the Google case, the user is in control of what is "spam" and what is not (and can still consult the "spam" folder if they want).


You mean spam filtering, which is different from moderation. Google won't censor "hateful emails".


>I'm surprised every time I see people here asking for the evil that is censorship.

You're posting this in a place (HN) that itself is not averse to deleting posts, shadow banning, manipulating the rank of articles, etc. Some have argued that HN is a good discussion environment precisely because of those things, though I don't necessarily agree. What I'm getting at is that you have reason to not expect sympathy for your position here.


On HN, everyone sees the same things - the same front page, the same comments, and so on. On sites like Twitter and Facebook, this is not the case. What's good for a site like HN isn't necessarily what's good for a site like Twitter.


It's stupid to be 100% against censorship. Some things should be censored. Like child abuse imagery, for example.

The questions are where do you draw the line, and who gets to decide where the line is drawn?


This seems more like an appeal to emotion to justify the creation of censorship infrastructure than an actual solution to a problem.

I haven't heard of unsolicited CP being a common spam issue. And those who exchange CP intentionally can and do use encryption which makes such filters pointless.


Then you are not familiar with the Fediverse's patterns of establishing various levels of tolerance on different networks of instances for loli content (blah blah it's fiction blah blah), because otherwise it federates onto people's timelines and they're revolted. Content warnings (CWs) do a lot to soften this.


The original argument seemed about illegal content. If you're just talking about unwanted porn, well, that's an argument for better local content filters, which is different from eradicating content at the source.


> Like child abuse imagery

You would call that a filter, like spam, and of course it's essential and can be deployed at multiple points (before sending, between servers, at the client). Both users and server instance owners can control what they allow. There is little disagreement that these filters should exist, so I expect all servers to implement them.

Censorship is about allowing e.g. governments to request takedowns globally. Most servers will reject such lists.
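As a toy illustration of "deployed at multiple points" (the hash values and function names are made up for the example), the same filter predicate can be hooked in at each stage of the pipeline:

```python
# One shared predicate, checked at three different hook points.
# In practice this would be a perceptual-hash lookup against a shared
# database; here it's a plain set of hypothetical hash strings.

BANNED_HASHES = {"deadbeef"}

def passes_filter(post):
    """True if the post's media is not on the banned list."""
    return post.get("media_hash") not in BANNED_HASHES

def before_sending(post):
    """Hook 1: the sending client refuses to publish."""
    return passes_filter(post)

def between_servers(post):
    """Hook 2: a relaying server refuses to forward."""
    return passes_filter(post)

def at_client(post):
    """Hook 3: the receiving client refuses to display."""
    return passes_filter(post)
```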


I'd like to see moderation work like adblocking. You subscribe to whatever moderation feed you like, and it blocks tweets that the moderators decide to block.

If you don't like what the moderators are doing, you can select a different moderation feed, or start your own.

This way we decouple the moderation from the platform.
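A minimal sketch of what such subscribable moderation feeds might look like (every class and field name here is hypothetical, in the spirit of adblock filter lists):

```python
from dataclasses import dataclass, field

@dataclass
class Tweet:
    tweet_id: str
    author: str
    text: str

@dataclass
class ModerationFeed:
    """A third-party list of tweets/authors its moderators block."""
    name: str
    blocked_tweets: set = field(default_factory=set)
    blocked_authors: set = field(default_factory=set)

class Client:
    """Filters the timeline through whatever feeds the user subscribes to."""

    def __init__(self):
        self.feeds = []

    def subscribe(self, feed):
        self.feeds.append(feed)

    def unsubscribe(self, feed):
        # Switching moderators is just changing subscriptions.
        self.feeds.remove(feed)

    def visible(self, tweet):
        return not any(
            tweet.tweet_id in f.blocked_tweets or tweet.author in f.blocked_authors
            for f in self.feeds
        )

    def timeline(self, tweets):
        return [t for t in tweets if self.visible(t)]
```

The platform only stores and transports tweets; blocking decisions live entirely in the feeds, which anyone can publish and anyone can drop.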


I like this idea. Another feature that could be helpful for users who are wary of groupthink bubbles is some kind of analytics on the most common words, phrases, and topics that a given moderator censors.


Works out quite well if you want to get out of the business of moderating.


>How is decentralized twitter supposed to solve misinformation?

How is this not a laughable expectation? You may as well expect them to "solve evil".


Moderation's easy in a decentralized model: run your own server, and refuse to federate with servers that are run by (or used by) people you consider to be assholes. Server admins can easily band together to contain (or isolate) "bad" servers as most of the Fediverse has done with Gab.
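In code, defederation amounts to little more than a domain blocklist checked on incoming posts; a hypothetical sketch (not any real Fediverse software's API):

```python
class Server:
    """A federated server that can refuse posts from blocked peer domains."""

    def __init__(self, domain):
        self.domain = domain
        self.blocked_domains = set()
        self.inbox = []

    def defederate(self, domain):
        """Admin decision: stop accepting anything from this domain."""
        self.blocked_domains.add(domain)

    def receive(self, post, from_domain):
        """Accept a federated post unless its origin is blocked."""
        if from_domain in self.blocked_domains:
            return False  # silently dropped
        self.inbox.append(post)
        return True
```

Shared blocklists are then just admins exchanging sets of domains to pass to `defederate`.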


I get the feeling that is the idea. They want a "neutral" platform where they have no responsibility.


> How is decentralized twitter supposed to solve misinformation?

There will be a space where different people and organizations can build filters that rate information for veracity and other traits. Over time these filters will develop a track record. People will be able to filter their feed through such filters.

Centralized orgs can already do this, but they make a central and opaque choice about it. This means that there is a lot of pressure from various interest groups to make sure that the hidden filter is biased towards their favored viewpoint, and the users cannot really see what the result of this fight is.

It's like how it probably wouldn't be a good idea to have just one news organization that served the whole US.


> It mostly makes moderation (or censorship) impossible.

Making total censorship impossible is mostly congruent with the goals of society in my opinion. Censorship has done immense damage, e.g. Lysenkoism in the USSR (to pick a particular example).

However, there is still the option for users to voluntarily opt in to certain kinds of filters. Most people would likely opt in to a filter that rejects child porn, for example. It's an open question whether a decentralized system with filters would lead to a better overall environment, but I think it's worth trying.


Same concern - I'm working on getting some info removed from Mastodon. It's terrifying how hard it is to get someone (volunteer mods) to actually remove something...

This sort of case makes me an advocate for centralized access to moderate, regulate, and also be held accountable... but then there's this magic backdoor to data access and removal...?



