Seems like this is gonna get overruled, but who knows with this Supreme Court nowadays. Aside from that issue, I'd like to discuss why social media always seems so impotent in its response to misinformation. YouTube, Facebook, etc. all let people post misinformation; it gains traction through whatever means and is shared with millions of people. Then, after the damage is done, they swoop in and block it.
I think they could instead limit the possible exposure by building tiers/gates a post has to move through before it truly reaches a mass audience. First it would start with an audience of 10^1, mostly direct connections. If no keywords were triggered for misinformation, and none of the eyeballs on that post marked it as misinformation or offensive, then send it to 10^2 people outside the poster's immediate social circle. Repeat until it hits 10^4, 10^5, or whatever, and then it reaches a tier of truly mass audience. Perhaps here it requires a real review by a moderator if it has keyword matches or isn't from a trusted source. Then it is let out to reach everyone, because it's just a kitty cat video, or about the best way to stain a table, or all the community content we actually want.
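To make the tiers/gates idea concrete, here's a minimal sketch of the promotion rule described above. Everything here is invented for illustration: the tier sizes, the 2% flag-rate cutoff, the `MODERATOR_TIER` threshold, and the post fields are all assumptions, not any platform's actual logic.

```python
# Hypothetical sketch of tiered exposure: a post climbs audience tiers
# (10, 100, 1000, ...) only while it stays clean. All thresholds invented.

TIERS = [10, 100, 1_000, 10_000, 100_000]
FLAG_RATE_CUTOFF = 0.02     # >2% of viewers flagging halts promotion
MODERATOR_TIER = 10_000     # human review required beyond this audience size

def next_audience(post):
    """Return the next tier size, 'review' for human moderation,
    or None if the post is halted at its current tier."""
    if post["keyword_hits"]:                    # keyword tripwire: hold for review
        return "review"
    if post["views"] and post["flags"] / post["views"] > FLAG_RATE_CUTOFF:
        return None                             # stays within the small circle
    idx = TIERS.index(post["tier"])
    if idx + 1 >= len(TIERS):
        return post["tier"]                     # already at max reach
    nxt = TIERS[idx + 1]
    if nxt >= MODERATOR_TIER and not post["trusted_source"]:
        return "review"                         # gate before true mass audience
    return nxt

cat_video = {"tier": 100, "views": 100, "flags": 0,
             "keyword_hits": [], "trusted_source": False}
print(next_audience(cat_video))   # → 1000: clean post, promote to next tier
```

The point of the sketch is only that each widening step is conditional on how the previous, smaller audience reacted, so a bad post never gets the big jump.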
This approach is closer to an A/B test for a social post. If it is posted and found highly offensive within a small group of close connections, then it never ripples out of those echo chambers. Again, it's about the uniqueness of the post: grandma can click reshare or whatever, and when she looks at her history she sees it in her timeline, but it's never massively pushed onto others' feeds until it's gone through the gates above.
Ironically, the social platform killing it right now works somewhat like this in terms of showing you others' posts. My feed shows me a lot of big posters, but maybe 10% of the time I'm reached by people I've never viewed, with a post that has like 5-10 views. Then they measure engagement (how long did you watch, did you like, etc.) before they decide to push it to more people. (My summary of their algorithm, not theirs.) I'm advocating for something similar, but gating for offensive content/misinformation. Seems like it would work like a sieve where good/unoffensive content would rise to the top "in general" and the bad stuff would stay with a small number of eyeballs.
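The engagement sieve described above can be sketched in a few lines. This is my reading of the idea, not any platform's real algorithm; the 0.7/0.3 weighting, the 0.5 promotion bar, and the 10x widening step are all made-up numbers.

```python
# Rough sketch of an engagement sieve: seed a post to a handful of viewers,
# then widen the audience only if engagement clears a bar. Numbers invented.

def engagement_score(views):
    """views: list of (watch_fraction, liked) pairs from the seed audience."""
    if not views:
        return 0.0
    watch = sum(w for w, _ in views) / len(views)       # mean watch fraction
    likes = sum(1 for _, liked in views if liked) / len(views)
    return 0.7 * watch + 0.3 * likes                    # invented weighting

def next_reach(score, reach, cap=1_000_000):
    """Widen the audience 10x on strong engagement, hold otherwise."""
    return min(reach * 10, cap) if score >= 0.5 else reach

seed = [(0.9, True), (0.8, False), (1.0, True)]         # 3 seed viewers
print(next_reach(engagement_score(seed), reach=10))     # → 100
```

The same loop structure could just as easily gate on flag rate instead of (or alongside) watch time, which is exactly the swap being proposed.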
What damage, though? Let's consider Covid and the Capitol riot. In both cases, a$$-hat-ery by vocal, idiotic minorities, whether it be meeting up to 'storm' the Capitol only to get arrested after you realize, once inside, that 'oh sh*t, this doesn't do anything...', or dying from Covid because you thought the vaccine had chips in it. What damage is done in those cases?
Ok, we lost one police officer in DC with some very controversial linking to the riot itself (he had something like a heart attack). And thousands of people die from not being vaccinated, but the majority of us do not.
I'm just having trouble seeing, at least from these two 'disinformation' scenarios, how damage is really being done to the rest of us. I know the Covid one will be the far greater point of contention, with the argument being that not getting vaccinated, or convincing others not to, is dangerous to you/me. I get that will be more of an argument.
But I'm just not able to see the 'damage' that really warrants so EASILY going down the path of censorship.
What you have described is nothing short of a dystopian horror hellscape. My God, listen to yourself, man!
> If no keywords were triggered for misinformation, and none of the eyeballs on that post marked it as misinformation or offensive
Keywords? Who decides the keywords? What are you going to do when normal words get a coded meaning? Remember “milk” and the “ok” sign. 4chan is going to have a field day with this! What about the ((( echoes )))? Are you going to write a regex for every potential use of non-letter characters?
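To make that objection concrete, here's a minimal sketch of a naive keyword filter and how coded language slips straight past it. The blocklist pattern is hypothetical, just whatever phrase the filter bans today.

```python
# Naive keyword filter vs. coded language: leetspeak and innocuous "coded"
# words defeat a pattern list. The blocklist entry is hypothetical.
import re

BLOCKLIST = [r"\bvaccine\s+microchip\b"]

def flagged(text):
    return any(re.search(p, text, re.IGNORECASE) for p in BLOCKLIST)

print(flagged("the vaccine microchip is real"))   # True: exact phrase caught
print(flagged("the vaxx1ne m1crochip is real"))   # False: leetspeak evades it
print(flagged("they put (((it))) in the milk"))   # False: coded meaning, clean words
```

Every widened pattern catches more innocent posts, and every evasion forces another pattern, which is the arms race being objected to.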
Marked as misinformation? What, you're going to ask the Ministry of Truth for input? Do you think this will achieve anything except build an echo chamber and increase division? The same goes for offensive. What's going to happen when all posts by trans people are marked as offensive? Or does that not count because you're OK with it? Should we establish a Ministry of Morality, perhaps with a morality police like the Saudis', to tell us what's offensive and what's not?
Everything you wrote is... I don't even know what to call it. It's evil. It's evil, that's what it is. You have thought up an evil system whose only outcomes will be oppression, division, and resentment.