
Your position is incoherent. You say that the EU itself should hire people to point out misinformation, and then rail against government censorship.

The EU is asking Twitter to define their own moderation policies and enforce them.

So which is it? You want government to do this or private companies?




My position is not incoherent at all, or at least I don't think it is. Let me be more explicit: if the EU deems misinformation on X to be such an important problem, then it should go ahead and "fact-check" that misinformation itself through Community Notes. That is not government censorship, because other people can also get involved in the process. I don't see how these two things contradict each other at all. Granted, we're not talking about removing content, which realistically should only happen in very few scenarios: illegal content (CP, that sort of thing). I'm pretty sure that if actual misinformation were posted and debunked through a system like Community Notes, the outcome in the eyes of the public would be better than simply removing the material (which reinforces skepticism and negative attitudes towards the authorities/companies among the skeptics).

>The EU is asking Twitter to define their own moderation policies and enforce them.

The EU wants a lot of things that aren't feasible. This is one of them. Elon has stated plenty of times that the platform should not censor or moderate more speech than necessary. That position turns out to be "harmful" in the eyes of the EU, because, like the bureaucrats they are, they need everything under control or placed under scrutiny a priori (by labeling speech according to certain criteria). Imagine a new form of content that does not fall under any current regulation or moderation policy. By default it should not be problematic, because it's not "illegal". Hence my point: what the EU deems misinfo/illegal is a slippery slope, and they will never, ever be satisfied. Unless, of course, we include a whole lot of nothing-speech.

>So which is it? You want government to do this or private companies?

Neither, or both. It depends on whether you consider my answer actual "moderation" (i.e., censorship/removal of speech) or providing context in the form of fact-checking (Community Notes or whatever). If governments want to "fact-check", they should go ahead and do it. But if they simply demand that the private company (or they themselves, through some intermediaries) remove speech, that's plain censorship. [A little note: it's worse than that; you have a de facto fascist collusion between the state and private enterprises.]


I see your point, and I guess it's not incoherent after all.

For myself, I see a grey area of information which, while it may not be illegal, can be considered harmful to people.

If the EU wanted to censor everything they disagree with, they would declare it illegal. That would be overreach in my opinion.

I don't, however, see it as unreasonable to ask that companies providing a platform take steps to protect their users from some kinds of information and be clear about what steps they will take to do so.

Maybe there's a slippery slope there, but equally, having no moderation carries its own real harms.



