> Who here thinks twitter is a platform for rational discourse?
That's the elephant in the room here. The site formerly known as Twitter is optimised to maximise engagement, and conflict typically generates much more engagement than co-operation. It'd be like trying to have a friendly discussion to work out your differences with your opponent in a boxing ring, surrounded by a large crowd that the venue has whipped up into baying for a fight. I sometimes wonder if it is even possible to build a sustainable internet platform which somehow rewards cordial, good-faith discourse and penalises the mean and intolerant (and by sustainable I mean immune to the tendency for these platforms to eventually pivot to maximising profits above all else).
I've noticed that certain Twitter behaviours are popping up on Bluesky. The early adopters are pushing back, but fundamentally, the mechanics of a platform are going to massively influence how arguments happen. It still allows replies and quote-reposts to divorce a comment from its context. It's still global and public by default, so users are always in performance mode. What it doesn't have yet, and Twitter does, is sorting of replies by engagement, which always had the effect of presenting responses to an argument with the most inflammatory takes first.
Imagine a Reddit where upvotes are weighted by your similarity to, or difference from, the poster or commenter you're voting on. People with different sentiments, identities, writing styles, etc. boost your post faster than people who are similar to you. Downvotes count for less, in the same vein. Discourse trends toward a mean of respectability and civil sentiment. A few years down the road, sell analysis of users' corpora as de facto background checks. (A rough sketch of the weighting idea follows below.)
Or
Something something LinkedIn.
I suppose both approaches have their own problems.
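To make the similarity-weighted voting idea concrete, here is a minimal toy sketch. Nothing here reflects how Reddit actually works; the feature vectors, weight constants, and function names are all invented for illustration, and "similarity" is reduced to a cosine over some hypothetical per-user feature vector (writing style, sentiment, etc.).

```python
# Toy sketch of similarity-weighted voting (hypothetical, not any real platform's algorithm).
from math import sqrt


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)


def vote_weight(voter_features, author_features, is_upvote):
    """Weight a vote by how *different* the voter is from the author.

    similarity is in [-1, 1]; difference = (1 - similarity) / 2 is in [0, 1].
    Upvotes from dissimilar users count more; downvotes count less overall.
    """
    difference = (1.0 - cosine_similarity(voter_features, author_features)) / 2.0
    if is_upvote:
        return 0.5 + difference          # upvote weight in [0.5, 1.5]
    return -0.25 * (0.5 + difference)    # downvotes scaled down across the board


# Example: a voter whose (made-up) style/sentiment features differ from the
# author's boosts the post more than a near-identical voter would.
author = [0.9, 0.1, 0.4]
similar_voter = [0.85, 0.15, 0.5]
different_voter = [0.1, 0.9, 0.2]

score = sum(vote_weight(v, author, True) for v in (similar_voter, different_voter))
print(round(score, 3))
```

The design choice being sketched is just the one described above: agreement from people unlike you moves the score more than agreement from your in-group, which is what would nudge discourse toward that "mean of respectability".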
It's possible to build the platform you describe, but not for it to gain significant traction. Joe User won't visit that site: boring! They'll visit the exciting drama site for all the sweet gossip.
These sites (Twitter, Facebook, etc.) aren't dividing humans by intent, per se. They are black mirrors to human nature. The algorithms ask, "Human, what entices your attention?" "Drama!" "Fights!" "Polarizing topics that people split 50/50 on." And so the algorithms deliver what human attention rewards: polarizing topics. And so we are now more polarized.
Because this is rooted in human nature, and you can't change human nature, the solution has to be legislated. You can't ban free speech. You can't limit how long people spend online (also a freedom issue), any more than you can ban gambling or alcohol or drug addiction. So it comes down to something like recommendation-algorithm ethics (hah! Can you imagine? But why is that not a thing? We have an intense AI ethics community, yet for the other AI, the recommendation systems that power these sites, it's crickets as far as rules and ethics go). Well, we all know why: money. But medical ethics, while being a field, is also backed by law, in that unethical medical practice leads to severe consequences; perhaps a "tech ethics" field, or "tethics" for short, would help improve such algorithms. Congress grilled Mark Zuckerberg over the suicides his tech stack is causing; if we're talking literal deaths here, maybe a bit more regulation is in order?