Requiring "trusted flaggers" should clash with freedom of the press, so I don't see this passing through parliaments.
Transparency of algorithms ranges from unenforceable to irrelevant.
Things like "no personalized recommendations" are dead in the water: things like AI chat don't personalize, they just use the chat history, which can be stored locally.
Mandatory data sharing breaks international trade agreements.
A lot of the other stuff is standard practice for every website.
There is stuff in the regulation that is simply laughably worded:
>Providers of online platforms shall not design, organise or operate their online interfaces in a way that deceives or manipulates the recipients of their service or in a way that otherwise materially distorts or impairs the ability of the recipients of their service to make free and informed decisions.
It's not that different from existing ad-hoc systems for dealing with CSAM, like the IWF in the UK. And it will be within the exemptions of Article 10 ECHR. The only country that might object is Germany.
But they allow a government to directly remove content.
This is potentially evil, no matter how trusted a state (thinks) it is.
The status of ‘trusted flagger’ under this Regulation shall be awarded, upon application by any entity, by the Digital Services Coordinator of the Member State in which the applicant is established, to an applicant that has demonstrated that it meets all of the following conditions:
(a) it has particular expertise and competence for the purposes of detecting, identifying and notifying illegal content;
(b) it is independent from any provider of online platforms;
(c) it carries out its activities for the purposes of submitting notices diligently, accurately and objectively.
You do know that the government can already remove whatever they want? Why is this evil at all compared to what we already have? All it does is give us more due process and clearer rules than before when it was up to private actors to do it.
For example, do you think it is more evil that Google can ban you for any reason without telling you why, or the government being able to ban you while stating exactly why and you having legal rights? I think the first is worse than the second, so moving from the first to the second is a positive development.
Expanding and institutionalising the current situation will only serve to entrench it even further and make it practically impossible to move away from. I would not call this a positive development. Not even in the slightest.
That is not at all what I'm proposing and I'm honestly baffled as to how you ever came to that conclusion.
Surely you can imagine the possibility of us coming up with systems and processes that are both non-anarchic as well as different from what we have today.
I took your comments to be against the government being involved in this process at all, and I cannot imagine a way that system would work. If you have an alternative system that is not sans government I would be interested in hearing it.
Considering you wanted the government to confine people to their homes a year ago for not receiving the covid vaccine (even if they had natural immunity), I'm not at all surprised that you want the government to be more involved. Some people are just wired to want to have a parental figure that tells everyone else what to do.
The user you are replying to has an authoritarian nature.
See this thread where he extolled the virtues of confining people who didn't get the covid vaccine to their homes, and insisted that natural immunity wasn't a thing:
If they tell Google that you have illegal content then Google will remove it, that is how it works today everywhere. What kind of world do you think we live in?
Google can challenge the content flagging as well. These trusted flaggers can only flag content for Google to review; they can't remove it themselves, as far as I understand it. So it is exactly the same as before, just that it is formalized.
Edit: Think the Twitter files, with how the American government flags content on Twitter and then Twitter bans it. USA already has that trusted flagging system, but under the hood so you don't see it. I don't see why moving that to the open would be any worse.
> Think the Twitter files, with how the American government flags content on Twitter and then Twitter bans it
Yes, this is pretty much what it is, and now it is a legal requirement, and the requests by the government have to be prioritized and processed 'without delay'.
And if the government abuses the requests, then the website can complain to ... the government.
I don't understand why you think this is normal. It is not. This "government unable to stop itself" is precisely the reason why press freedom was written into constitutions.
Much of the EU doesn't rank low in corruption. What this means is that every 4 years the new government will be re-staffing the "national censorship service" (the Digital Services Coordinator) with its cronies. The countries which need a free press the most will be affected the worst.
> Much of EU doesn't rank very low in corruption. What this means is, every 4 years the new government will be re-staffing the "national censorship service" with its cronies.
But Google could challenge that to the EU court, and the EU court isn't full of Bulgarian cronies. Or do you think that the EU court would take the side of Bulgarian cronies, really?
If Bulgaria wanted to censor the internet they would already have their own laws to do it. As you said, only extremely authoritarian countries censor the way you describe here. I doubt Bulgaria would get away with it; they would get kicked out of the EU if they started to create a CCP-style big brother state.
(I used Bulgaria as an example since it is ranked the most corrupt country in the EU.)
Google could just reject their requests and force them to go to EU court. The EU court is the only reason this law would have any more bite than a local law, without it you just have Bulgaria doing their own thing and they don't have enough weight to do that.
Expecting technology to fix - or even be immune from - preexisting corruption is just too idealistic. There's no silver bullet for changing how a collection of people behave in the way you want.
It is also worse in that it creates a framework for special interest groups to act with special status for "detecting" and removing content without due process.
Any group with an ideological bent, or an agenda can set themselves up as agents for monitoring "dangerous" content. They may even do that job for the stuff we want actually policed, but they'll also do things like remove all negative mentions of Xi Jinping, or any rumblings against the new Soma product...
A company does it on their own service. You go to another. A government sanctioned agent can do it to all services.
How can a business survive, for example, if it is banned by Google? What do you “choose” then?
This is just capitalist utopia; capitalism tends toward monopolies/duopolies by itself. It is precisely the government's job to create an order in which fair competition can continue. Having a more or less democratically elected body help with the tyranny of these few megacorps is absolutely essential.
A few days ago, in the front-page thread "Opted out, yet tracked: Are regulations enough to protect privacy?", the top comment contained this common dismissal: "Until that happens, just passing more and more laws is going to be meaningless."
Yet Big Tech makes significant effort to try to delay and thwart the passage of these laws.
How do we reconcile this with the common HN commenter dismissal, something like, "No one cares. Things will never change."
This is perhaps a nice fantasy for people who have bet their futures on "tech" companies, but how long does this fantasy last? Current reality indicates Big Tech cares. The lobbyists and lawyers hired to help Big Tech fend off governments can confirm it. Why should Big Tech care, if the language in proposed regulations is, to borrow the top comment's term, "laughable"?
Has any country besides Sweden expressed reservations about "free speech"?1
A country like Estonia might oppose regulating "tech" companies but it seems to be an outlier.
1. She thinks that courts should be the sole arbiters of what should be taken down. It is funny to contrast this with Kagan's comments in the US Supreme Court's recent hearing on Section 230 in the Gonzalez case. When petitioner's counsel mentioned regulation in Europe, Kagan was quick to point out that it was not the work of courts. According to Kagan, courts lack the expertise. Several justices seemed to think that requiring complaints about content to be reviewed by courts, e.g., on a motion to dismiss, was too burdensome on defendants, never mind the courts. No doubt Big Tech loves it when legislatures and courts go around in circles on how to grapple with the novel problems "tech" companies have created. The question is how long this delay can be relied on.
HN comments often take a fatalistic tack, more or less advocating "Just give up." Personally, I would bet against anyone "giving up" on regulating "tech" companies. It's a slow, ugly process, thanks in no small part to the work of Big Tech to make it a mess. We are just getting started. I would be more willing to take a bet that many "tech" companies will "give up" when faced with having to comply with the quantum of regulation that legitimate businesses do.
Giving up is a great strategy; then we can let the market self-regulate. Keep in mind that TikTok has to compete viciously with Facebook on who can design a more trusted platform :) In the end, we all win if we just let things play out.
> That said, the bill makes it clear that platforms aren’t liable for illegal user-generated content, unless they are aware of the content and fail to remove it.
Is there any clarity as to what "aware of" constitutes here? For example, telephone providers are aware that people are using their service for illicit things but knowing which account / phone call is illicit suddenly makes that all less clear.
Honestly, I'm not quite as bullish on the ability to regulate safety on the internet, considering a not insignificant amount of the privacy violations is promulgated by ad networks collecting information on behalf of intelligence agencies to work around those pesky constitutional provisions. The history of the internet is filled with "make you safer" legislation that achieves questionable results at best.
> Only if the flagged content is evidently manifestly illegal can such notices give rise to ‘actual knowledge’. According to the text of the Digital Services Act (section 63), “Information should be considered to be manifestly illegal content and notices or complaints should be considered manifestly unfounded where it is evident to a layperson, without any substantive analysis, that the content is illegal or, respectively, that the notices or complaints are unfounded.”
So glad we're leaving this up to a vague obviousness standard. Not like copyright and other kinds of illicit content is notoriously hard even for experts to decipher and get right.
American copyright law already makes these platforms delete anything that could hint at copyright infringement; I don't see how this could make things worse.
Because copyright infringement is only a small piece of what this will regulate. Think illicit forms of pornography, terrorism.
Moreover, I imagine this will likely apply to community moderation which makes things even trickier. Viewing it from the copyright lens feels extremely narrow.
I used to run a live video streaming website (a la Twitch.tv) for a specific niche of live content. But once the pirates discovered the site they started streaming live sports and other copyrighted content. I started getting DMCAs from MLB, NFL, EPL. All very threatening emails with PDFs of legalese containing screenshots of my (small, niche) website streaming their content.
I would always just immediately shut off and ban the streamers that were mentioned.
> Is there any clarity as to what "aware of" constitutes here?
Running a large-scale service is completely different, I think. My understanding is that most DMCA notices are automated and a good fraction of those are not actually valid. Similarly, these systems on the provider side already have automated DMCA takedown mechanisms. The bigger problem is what happens when the counterparty files a DMCA counter notice: now you are aware there's potentially infringing content but you're not allowed to take it down. Of course, I'm sure this law takes the DMCA into account.
The trickier part though I'm referring to is not DMCA but community moderation. Someone flags a picture of your child bathing naked as child porn. Is the provider now on notice as having been informed? How do they validate the circumstances of the photo to make sure it's yours? These aren't easy questions and looking at it through the DMCA lens is insufficient because this law goes way beyond that. The section 230 lawsuits in front of SCOTUS right now are not dissimilar to what DSA is trying to regulate and yet I don't see extra clarity here.
My theory is that we are seeing and will continue to see these uh, VLOPs, take an interest in end-to-end encryption because it allows them to not be "aware of" anything users are doing on their platform. "Sorry officer, we just transmit encrypted bits. No moderation squad needed."
The story from marketing will be that these companies care about users' privacy. But in this case, what the platforms are doing is trading safety of non-users for privacy of users. The ability to share bits in an unmoderatable ecosystem, where the potential audience is Very Large, will create a blue ocean for underground communities who produce, procure, and curate illegal content. This in turn will endanger yet more people who end up being a non-consenting subject in the illegal content.
I'm curious about the outlawing of shadow banning.
As a former content moderator, I found shadow banning to be remarkably effective for our most pernicious actors, who would otherwise quickly realize that their account is banned and create multiple new ones.
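Mechanically, a shadow ban is usually just a visibility filter: the banned user still sees their own posts, so nothing looks wrong to them, while everyone else's feed silently omits those posts. A minimal sketch of that idea (all names and data here are hypothetical, not any platform's real implementation):

```python
# Illustrative shadow-ban visibility filter (hypothetical, simplified).

shadow_banned = {"spammer42"}  # usernames currently shadow-banned


def visible_posts(viewer, posts):
    """Return the posts that `viewer` should see.

    A shadow-banned author still sees their own posts, which is what makes
    the ban hard to detect; everyone else's feed silently drops them.
    """
    return [
        p for p in posts
        if p["author"] not in shadow_banned or p["author"] == viewer
    ]


posts = [
    {"author": "alice", "text": "hello"},
    {"author": "spammer42", "text": "buy pills"},
]

print(len(visible_posts("spammer42", posts)))  # 2: the spammer sees everything
print(len(visible_posts("alice", posts)))      # 1: others never see the spam
```

The asymmetry between the two views is the whole trick, and also exactly what a transparency rule like the DSA's would prohibit.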
It's fine if you don't mind losing legit users and keep the more motivated spammers.
Phone numbers have been so abused that it's a leap of faith to give one to an additional service, hoping they won't go to town with it. Then you'll find a number of services that will attribute virtual numbers that can receive SMS, so it won't stop motivated spammers/scammers (tbf nothing will, but now you have them "verified").
All in all it feels like a losing proposition to me, but I don't know your situation.
It's not transparent so it's easily abused with the victim possibly not even knowing they've been punished, let alone having any recourse. We saw that with Twitter shadowbanning people in ambiguous ways for political reasons to manipulate the public opinion.
This might sound funny since I'm shadowbanned on here myself :P But I probably deserved it.
The internet is about to get a lot more balkanized and more heavily regulated. That doesn't make it safer, it just puts government back in the driver's seat for who gets to decide what is allowed where, sans those messy election things.
> Proponents of the legislation say the bill will help bring an end to the era of tech companies’ self-regulating. “I don’t want the companies to decide what is and what isn’t forbidden without any separation of power, without any accountability, without any reporting, without any possibility to contest,” Verdier says. “It’s very dangerous.”
No, the reason is quite clearly the EU legislation.
It's largely predictable how companies will react to legislation, and it's 100% clear afterwards, and it's very much the fault of the EU for sticking to the path of cookie popups.
No. When you visit your friend's house, do you get to make the rules and tell them what food you get for dinner, how you like your back rubbed, etc.? No, if you don't like it, don't visit them. Forcing ridiculous laws on them does not lead to good.
The difference in your analogy is you know what your friend is doing. If you went to your friend's house and they secretly spied on you, you'd surely think less of them when/if you found out.
The number of popular websites that now notify that they're tracking you across the Internet is more than 1 in 10 million. They don't have to show any banner if they're not tracking you.
Is your contention that if I click 'no' on the GDPR nag screen it won't sell/harvest my data? The few times that I've looked at the T&Cs more closely, they simply say that some cookies are 'essential' for the website and that I can't opt out from them. I took that to mean that clicking anything was accepting their T&Cs to some degree; hence, I refuse to click any GDPR nag screens ever.
If clicking 'no' is effective, it raises the question of why can't that just be done automatically for me by my browser, sparing me the obnoxious nag screens
> If clicking 'no' is effective, it raises the question of why can't that just be done automatically for me by my browser, sparing me the obnoxious nag screens
Because it's in the website owner's best interest to make the rejecting process be as cumbersome and annoying as possible. The whole thing is a show of bad faith.
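There is, for what it's worth, a mechanism heading in the browser-does-it-for-you direction: the Global Privacy Control proposal, under which the browser attaches a `Sec-GPC: 1` header to every request as a standing opt-out. Whether a given site honors it is up to the site (and regulators). A sketch of what that signal looks like on the wire (the helper function is my own, purely illustrative):

```python
# Sketch: a browser-level "always decline" signal via Global Privacy
# Control. The Sec-GPC header is part of the real GPC proposal; the
# helper function here is hypothetical.


def with_gpc(headers=None):
    """Return a copy of `headers` with the GPC opt-out signal attached."""
    headers = dict(headers or {})
    headers["Sec-GPC"] = "1"  # "do not sell or share my data"
    return headers


print(with_gpc({"User-Agent": "example-browser"}))
```

The catch, of course, is the same bad-faith problem: a header only works if sites are obliged to respect it, which is a legal question, not a technical one.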
Essential cookies refer to cookies required for the technical operation of the site. There is no requirement to notify a user of essential cookies. Here are some examples of the type of functionality covered by essential cookies:
- Persisting a shopping cart.
- Storing your login session.
- Identifying the node that should handle your requests.
Site operators are forbidden by the law from using essential cookies for tracking purposes.
If a site operator is classifying tracking cookies as essential, or using their essential cookies for tracking then they are very likely acting in violation of the law.
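The rule the comment describes boils down to a simple gate: essential cookies may always be set, everything else only with consent. A minimal sketch of that distinction (cookie names here are hypothetical examples, not a legal taxonomy):

```python
# Sketch of the essential-vs-consent distinction. The cookie names are
# hypothetical; classifying a real cookie as "essential" is a legal
# judgment, not a technical one.

ESSENTIAL = {"session_id", "cart_id", "lb_node"}  # technical operation only


def cookies_to_set(requested, consent_given):
    """Filter the cookies a response may set under the consent rules."""
    return {
        name: value
        for name, value in requested.items()
        if name in ESSENTIAL or consent_given
    }


requested = {"session_id": "abc", "ad_tracker": "xyz"}
print(cookies_to_set(requested, consent_given=False))  # only session_id
print(cookies_to_set(requested, consent_given=True))   # both cookies
```

The point of the law is that the first branch (`name in ESSENTIAL`) is narrow: if a tracker ends up in that set, the operator is misclassifying, not complying.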
Sure and GDPR allows users to make an informed decision as to whether they wish to use a particular website or app.
Without it the user cannot know whether an app or website will track them without first visiting it. The act of loading the site can expose the entirety of your browsing activity to the site operator. Whether you would have agreed to those terms or not isn't factored.
No. We have to stop being freaks about our fricking data. Meaningless data like gender or height should be treated like water, it goes everywhere, it gets on the floor, and it doesn't matter!
Was it better for you to click through popups and site redirects and dozens of JS trackers?
GDPR brought a lot of new UI to existing sites, but I don't get the feeling that we were on a good trend from the start, nor that the sites that care about their users got a lot worse because of it.
I mean, at that time we were getting out of Flash inserts and the whole industry didn't seem to ever stop to think about what they were doing. The very notion of user consent was foreign to many agencies before GDPR.
PS: even the sheer number of "subscribe to our newsletter" splash dialogs out there makes the clumsiest cookie popups look cute to me.
I completely agree. The GDPR had the best of intentions, but did not materially improve data privacy online. Meanwhile it has absolutely made the UX of the web worse.
I know what the responses will be: "it's a great law with poor enforcement." Perhaps that's true, but if so, what makes us think additional EU tech regulations will be any better enforced?
I personally saw drastic improvements. I remember marketing adding or rotating trackers every 3 months as they got pitches from random companies.
Those basically vanished the day we had to keep track of which company stored their data in which region, and where in the privacy policy list we had to slot them.
User data still goes to Google or Adobe, but getting rid of all the weird scammy players is a win in my book.
And it also helped justify investing in stronger internal data management and analysis instead.
Basically adding random third party libraries suddenly had a material cost, and I don't see any other initiative that could have realistically reached that result.
> I watched a panel on AI (machine learning) at a conference hosted by the European Commission.
> 9 people on the panel
> Everyone agreed that the USA was 100 miles ahead of EU in machine learning and China was 99 miles ahead
> In any case, everyone agreed that in the most important technology of the 21st century, the EU was not on the map.
> The last person on the panel was an entrepreneur.
> He noted that the EU had as many AI startups as Israel (a country 1/50th the size) and, btw, two thirds of those were in London that was heading out the door due to Brexit.
> So basically the EU had 1/3 the AI startups of Israel (this was a few years ago)
> So the panel discussion turned to "What should the EU do?"
> And the more or less unanimous conclusion (except for the entrepreneur) was "We are going to build on the success of GDPR and aim to be the REGULATORY LEADER of machine learning"
> I literally laughed out loud
> Being the "Regulatory Leader" is NOT A REAL THING.
> Imagine it is the early 20th century and imagine that cars were invented and that the USA and China were producing a lot of cars.
> The EU of today would say "Building cars looks hard, but we will be the leader in STOP SIGNs"
> This is defeatism, this is surrender, this is deciding to be a vassal state of the United States and China in the 21st century.
> The EU is already a Web 2 vassal to the US tech companies (none of its own, so it has to try to limit their power)
- The U.S. was one large, unified market. Europe was fragmented, linguistically and economically.
- The Cold War meant a lot of DOD money was going into computing and technology, with a lot of it concentrated in Silicon Valley (initially at Berkeley and Stanford, and the national labs in the area).
- Later on there was research being done into networking (to help ensure redundancy in the event of a nuclear war).
So you now have an area with a rich talent pool (and pipeline), with the appropriate manufacturing facilities, and transportation infrastructure for export. In addition, as the industry was maturing, financial regulations were loosened (remember, many banks used to only operate in one state), so access to capital was easier.
I'm here wondering if they split the companies into two buckets (or the article is missing details): companies with 45 million+ users and companies with fewer than 45 million users.
So a company that has 22 million users will have the same regulatory burden as one that has 10? That's insanity.
Censorship Bills are garbage for democracy. Anyone not reflexively alarmed by the dishonest newspeak phrasing, specifically surrounding "safety", needs to brush up on history. There's nothing more axiomatic in a democracy than the fact that a government cannot be left to censor election speech. Not to mention anything else.
"The DSA will effectively outlaw shadow banning (the practice of deprioritizing content without notice), curb cyberviolence against women, and ban targeted advertising for users under 18. There will also be a lot more public data around how recommendation algorithms, advertisements, content, and account management work on the platforms, shedding new light on how the biggest tech companies operate"
Although it comes very late, better than never. I think this bill is fantastic. It brings important decisions on how platforms work from tech companies to the public, which is where they ought to belong in the first place.
The EU regulations need a lot more enforcement and agencies really need to go after companies trying to exploit loopholes.
For example, consumers are entitled to return goods and services purchased online within 14 days, with some exceptions. Almost all app stores include some weasel wording about how that 14-day rule doesn't apply, or they employ some kind of intermediate currency to get around the rule (e.g., you have to buy Minecoins to buy content in Minecraft).
The result is that a lot of digital markets are a wild west where the consumer protection rules don't apply (e.g., if you buy a fake mod in Minecraft, you've just lost €5 and there is nothing you can do about it).
Another way companies get around the 14-day return policy is to not offer services starting today: you have to buy services 14 days ahead of time, so when the service starts you no longer have a right to reverse the contract.
IANAL but the above won't fly in court.
At least the last part: the 14-day return policy applies from the time you receive the goods/service, not from when you give a business money for that good/service.
E.g. you can return preordered goods, for which you sometimes wait a month.
Even Steam allows returning a game within 14 days (provided you've played at most 2 hours, which is fair).
Exactly, so I have no idea what the grandparent meant about app stores and "weasel wording". Digital goods are exempted from the rule (rightfully so, because you could consume it immediately).
A larger list of exceptions:
The 14-day cooling-off period does not apply to all purchases. Some of the exemptions are:
* plane and train tickets, as well as concert tickets, hotel bookings, car rental reservations and catering services for specific dates
* goods and drinks delivered to you by regular delivery – for example grocery shopping delivered by your local supermarket
* goods made to order or clearly personalised – such as a tailor-made suit
* sealed audio, video or computer software, such as DVDs, which you have opened (unsealed) upon receipt
* online digital content, if you have already started downloading or streaming it and you agreed that you would lose your right of withdrawal by starting the performance
* goods bought from a private individual rather than a company
* urgent repairs and maintenance contracts – if you call a plumber to repair a leaking shower, you can't cancel the work once you have agreed on the price of the service
> Another way companies get around the 14 day return policy is to just not offer services starting today, but you have to buy services 14 days ahead of time, so when the service starts you no longer have a right to reverse the contract.
The supplier might think that that circumvents the law; I suspect that the courts might think otherwise. Some loopholes only exist in potentia and evaporate under court scrutiny, but no one can be certain which they are until a case is brought.
This is just plain wrong. The 14-day return policy is meant for physical goods in the first place. It was not meant as a trial period, but as a protection because you can't actually see the product when buying online.
Also, the 14 days do not start with the purchase date, but with the delivery date.
If you buy a digital product online, it is the equivalent of a physical purchase in a physical store.
So nobody needs to "get around" the policy. They just need to let you know it does not apply.
And I do think that is fair.
If someone rips you off, that's a different story; there are laws for that, and those cases aren't meant to be covered by this policy.
There seems to be a fair bit of misunderstanding of this based on the various comments so far. Here's what the EU themselves say on one of their websites [1] which should clear things up (emphasis in original).
First, something that applies to nearly everything:
> Under EU rules, a trader must repair, replace, reduce the price or give you a refund if goods you bought turn out to be faulty or do not look or work as advertised.
Second, for things bought online or outside of a shop there is an additional right:
> If you bought a product or a service online or outside of a shop (by telephone, mail order, from a door-to-door salesperson), you also have the right to cancel and return your order within 14 days, for any reason and without a justification.
They later explain this more fully:
> In the EU you have the right to return purchases made online or through other types of distance selling, such as by phone, mail order or from a door-to-door salesperson, within 14 days for a full refund. You can do so for any reason – even if you simply changed your mind.
But they also say the 14-day cooling-off period does not apply to all purchases. They give this list of some of the exemptions:
▪ plane and train tickets, as well as concert tickets, hotel bookings, car rental reservations and catering services for specific dates
▪ goods and drinks delivered to you by regular delivery – for example delivery by a milkman
▪ goods made to order or clearly personalised – such as a tailor-made suit
▪ sealed audio, video or computer software, such as DVDs, which you have unsealed upon receipt
▪ online digital content, if you have already started downloading or streaming it and you agreed that you would lose your right of withdrawal by starting the performance
▪ goods bought from a private individual rather than a company/trader
▪ urgent repairs and maintenance contracts – if you call a plumber to repair a leaking shower, you can't cancel the work once you have agreed on the price of the service
and also say "Please note that this list is not exhaustive."
Also,
> The cooling off period expires 14 days after the day you received your goods. For service contracts, the cooling off period expires 14 days after the day you concluded the contract. If the cooling off period expires on a non-working day, your deadline is extended till the next working day.
Uh-oh. When I hear that, I'm assuming that it will be:
1. More tightly controlled
2. More strongly censored
3. Probably not actually safer for people like me.
> This article is from The Technocrat
Is it now?... that does not bode well.
> If you use Google, Instagram, Wikipedia, or YouTube
Oh, you mean _that_ Internet. 4 sites which get a huge part of the traffic. Well, the first, second and fourth of these are quite unsafe: They surveil your activities for commercial manipulation purposes and also let the US (and maybe other) governments get some of that information.
As for Wikipedia, its editorial/censorship/moderation policies are variegated and complex, and while I'm not well-read about that, it does seem that they have at least some sort of a mainstream-politics bias.
> The DSA will require these companies to assess risks on their platforms, like the likelihood of illegal content
Lots of things can be illegal, especially in world states with more restrictive laws. That doesn't sound very safe.
> The DSA will require these companies to assess risks on their platforms, like ... election manipulation,
Ah, now we're getting somewhere. So this is formalizing the drumming-up-hysteria-about-Russia shenanigans we've seen in recent years. Once there were witches and gremlins and leprechauns who caused mischief; now it's those evil Russian hackers, which were sent by evil Putin, since why not, right? Just recently we read in the Twitter files how the Twitter people were pressured by the US government to come up with supposed Russian meddling, and they were panicking since there wasn't any, so they had to cook something up.
> Perhaps most important, the DSA requires that companies significantly increase transparency
That's good, but about what?
> ... through reporting obligations for “terms of service”
Uh, that's not so interesting. Plus, they still get to have outrageous "terms of service". Those things shouldn't be enforceable anyway, it's not like you can seriously negotiate those terms.
> hate speech, misinformation, and violence.
And who decides which information is valid and which isn't? Also, what if governments engage in misinformation or violence, as they often do? I'm pretty sure it's going to be "information we don't like", which is sometimes misinformation, and sometimes not.
> You will be able to participate in content moderation decisions that companies make and formally contest them
Such platforms should probably just be recognized as semi-public so that commercial companies can't censor them without a court order.
> ... you're going to start noticing changes to content moderation, transparency, and safety features on those sites over the next six months.
> Who are you to decide what's misinformation anyway?
> That sounds like something a misinformation terrorist would say.
...
> First, we'll censor any use related to social taboos. Then we'll censor anything we desire. If anyone complains, we'll accuse them of wanting to engage in and promote social taboos.
I'm all with you on government regulations. They're not a solution for any problem, and there have to be other ways to solve problems, like giving everyone more opportunities to get a good education without being in debt for the rest of your life.
At the same time, as a person who does hold a damn Russian passport and sees propaganda working well for warmongering, I wish everyone took this issue more seriously.
> Ah, now we're getting somewhere. So this is formalizing the drumming-up-hysteria-about-Russia shenanigans we've seen in recent years.
Like it or not, Russia is a country with a lot of experience maintaining bot networks and an almost unlimited budget to spread whatever misinformation it thinks useful. Fortunately corruption takes its toll, but even with a majority of the budgets stolen they can still maintain hundreds of thousands of fake social accounts. And Twitter is actually full of them.
With modern diffusion models and LLMs (especially now that tech like ChatGPT exists) they can easily deploy such accounts in any region and make them use any language. Propaganda at scale has never been easier.
And it doesn't cost much to run either. A few mobile phone farms, a bunch of automation software, a few VPN services, and 500 students working part time can generate as much online activity as the population of a small country. Propaganda at scale has never been cheaper.
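A rough back-of-envelope sketch of that scale claim. Every number here is an illustrative assumption of mine, not a sourced figure:

```python
# Back-of-envelope check: can 500 part-time operators match a small
# country's posting volume? All figures below are assumptions.

operators = 500                 # part-time students, per the comment above
accounts_per_operator = 200     # assumed: accounts each operator runs via phone farms
posts_per_account_per_day = 5   # assumed: modest automated posting rate

farm_daily_posts = operators * accounts_per_operator * posts_per_account_per_day
print(farm_daily_posts)  # 500000 posts/day under these assumptions

# Assumed small country: 1M internet users, ~10% of whom post once a day.
country_daily_posts = int(1_000_000 * 0.10)
print(farm_daily_posts >= country_daily_posts)  # True under these assumptions
```

Even if each assumed figure is off by a factor of a few, the orders of magnitude suggest why this kind of operation is cheap relative to its reach.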
PS: Unfortunately, maintaining giant bot networks is much easier and cheaper than maintaining a functional military.
Or rather, Twitter agreed with your claim while under the previous ownership, even though internally they knew that wasn't the case.
Now, does the state of Russia operate significant bot networks elsewhere? Possibly. But I have so far only heard about their existence from sources fed by the US government, or allied with it, rather than independent ones.
> Russia is a country
don't conflate the country with the state please.
> Russia... has almost unlimited budget to spread whatever misinformation they think useful for them.
Surely, you jest. Russia is extremely weak at propagandizing. And its budgets for spreading its views are nothing compared to even just those of the US as a state; if you count private corporations and the influence of Europe, the imbalance is even more severe.
PS - None of this is to say that the Russian regime is admirable, peace-loving, international-law-abiding etc. (which it isn't.)
> And who decides which information is valid and which isn't? Also, what if governments engage in misinformation or violence, as they often do?
Yep. All the sociopaths who lied about nonexistent WMDs for 8 years have never been censored on ANY platform to this day. In fact, most of them were apparently 'amplified' by pre-Musk Twitter.
Anonymity for political speech has been known and debated for four hundred years in the West. Of course some actors come down on one side or the other. Twitter v0.1 was supposed to be a failsafe for that; now it's 2023.