This contains the text of the requirements. Search for "Grooming" and you'll find some particularly horrifying chunks regarding "scanning of conversations for content." Plus, not only known existing CSAM, but also identifying new material. Because don't you trust machine learning that can't tell the difference between a cat and a ferret to know what is and isn't CSAM? That's a very real path to "You have a photo of your child in the bathtub and the police smash down your door."
I like this bit, though:
> The processing of users’ personal data for the purposes of detecting, reporting and removing online child sexual abuse has a significant impact on users’ rights and can be justified only in view of the importance of preventing and combating online child sexual abuse.
I assume you meant this piece with "scanning of conversations":
>As mentioned, detecting ‘grooming’ would have a positive impact on the fundamental rights of potential victims especially by contributing to the prevention of abuse; if swift action is taken, it may even prevent a child from suffering harm. At the same time, the detection process is generally speaking the most intrusive one for users (compared to the detection of the dissemination of known and new child sexual abuse material), since it requires automatically scanning through texts in interpersonal communications. It is important to bear in mind in this regard that such scanning is often the only possible way to detect it and that the technology used does not ‘understand’ the content of the communications but rather looks for known, pre-identified patterns that indicate potential grooming. Detection technologies have also already acquired a high degree of accuracy, although human oversight and review remain necessary, and indicators of ‘grooming’ are becoming ever more reliable with time, as the algorithms learn.
That whole paragraph sounds a lot more terrifying and intrusive. At least they admit the flaws, but I haven't quite had the best experience with the EU's way of handling this stuff.
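The quoted claim that the technology "does not 'understand' the content" but "looks for known, pre-identified patterns" can be made concrete with a deliberately crude sketch. This is my own toy example, not the actual technology; the patterns and messages are invented, but it shows why pattern matching without context produces false alarms:

```python
import re

# Toy illustration (hypothetical patterns, not any real detector) of
# "looking for pre-identified patterns without understanding content".
PATTERNS = [re.compile(p, re.IGNORECASE) for p in [
    r"\bhow old are you\b",
    r"\bdon'?t tell your parents\b",
]]

def flags(message: str) -> bool:
    # A pattern matcher has no notion of who is talking or why.
    return any(p.search(message) for p in PATTERNS)

# Both perfectly innocent messages get flagged:
print(flags("Don't tell your parents about the surprise party!"))  # True
print(flags("Grandma asks: how old are you now, sweetie?"))        # True
```

The human reviewers mentioned in the proposal exist precisely to clean up after this kind of context-blindness.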
> “becoming ever more reliable with time, as the algorithms learn”
I mean considering Amazon has put in kajillions of dollars on their learning algorithms and yet now in 2022 those algorithms still aren’t even smart enough to figure out trivial stuff, like that I’m probably not gonna buy a second air fryer immediately after buying a first one, and stop advertising them to me. What hope do we have for this new algorithm?
It just showcases either the ones making the decision are naïve or they have an ulterior motive. They tried to justify it with the following footnote:
>For example, Microsoft reports that the accuracy of its grooming detection tool is 88%, meaning that out of 100 conversations flagged as possible criminal solicitation of children, 12 can be excluded upon review and will not be reported to law enforcement; see annex 8 of the Impact Assessment.
Of course, anyone could poke a dozen holes in this statement. "Possible", so there still needs to be a ton of review. Microsoft reporting on its own detection tool. Etc.
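The biggest hole is the base rate. "88% of flagged conversations" says nothing about how many conversations get flagged in the first place; at messaging scale, even a tiny false-positive rate swamps the true hits. A back-of-envelope sketch (every number below is an assumption for illustration, not from the proposal or from Microsoft):

```python
# Illustrative base-rate math: all inputs are assumed, not sourced.
daily_messages = 10_000_000_000   # assumed EU-wide daily message volume
grooming_rate = 1e-7              # assumed fraction of truly abusive chats
true_positive_rate = 0.90         # assumed recall of the detector
false_positive_rate = 1e-4        # assumed: flags 0.01% of innocent chats

true_hits = daily_messages * grooming_rate * true_positive_rate
false_hits = daily_messages * (1 - grooming_rate) * false_positive_rate

precision = true_hits / (true_hits + false_hits)
print(f"flagged per day: {true_hits + false_hits:,.0f}")
print(f"of which false alarms: {false_hits:,.0f}")
print(f"precision: {precision:.4%}")
```

Under these assumptions you get roughly a million flags a day, of which only a few hundred are real: that is the "ton of review", and every false flag is an innocent conversation read by a stranger.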
The ulterior motive is gaining total control over citizens in EU member states. Which is also why the EU digital ID [0] will soon be introduced and a Euro CBDC will follow in the coming years. It's quite likely the Euro CBDC wallet will be linked to (or be part of) the EU digital ID.
I expect at some point the EU digital ID will be required when accessing the internet, or perhaps when signing up for email, creating a Twitter account and such.
EU member states will become a carbon copy of China. People will worry what they can say out of fear that they will not be able to spend their hard-earned Euro CBDC.
> I expect at some point the EU digital ID will be required to use when accessing the internet or perhaps when signing up for email, creating a Twitter account and such.
I'm not going into the debate about whether there's an ulterior motive, but current eID solutions in various EU countries are considerably less draconian than what you're describing. The current solutions are going to tie into the new one, after all.
In Finland, as an example, we simply have a chip with a cert/key on ID cards. People without card readers or who don't want to use them are free to use their netbanking logins instead. Being able to use the same solution in other EU countries is of rather limited value to most, but it will ease up paperwork submission when travelling, moving to another member state, and so on.
Problem is that if an ID system is ever established and becomes ubiquitous, a quick change in legislation can force online services to leverage the ID. Protection of intellectual property, CSAM: a pretext is easily found. It happens faster than you can blink, and then the free internet in Europe would quickly die. The best protection against that is to have alternative ID systems, or simply anonymous usage where no transactions are involved.
And a lot of services would be happy to use a state ID, because they can tie their advertising to a real and unique person. To guard against that, the smart move is to reject the little convenience here for long-term benefit.
Pretty much whole of EU already has some form of government ID (not sure if there are any exceptions, but if they are they are not many).
People are used to them, so it's really just a matter of time, before it goes digital.
I don't disagree with your observations, I just think it's a foregone conclusion at this point.
That said I think the value lies in creating new non centralized (and less popular) solutions. They will probably never have mass appeal, but at least they will be there for people who want/need them.
Yes, and sadly the newest version even includes fingerprints, which is completely ludicrous; we have it because some countries needed to implement it for domestic self-gratification. There are regional differences, but here nobody uses their government ID for online services.
Democracy didn't really protect us from intelligence agencies sniffing communication data, so I don't see how it would protect anything here. That's not an intrinsic fault of democracy, since the EU still has deficits in this area.
The little fact that Microsoft then has the ability to read random interactions that have been flagged by its own black box should really induce doubt here.
Maybe they aren't optimizing for conversions but for running out ad budgets? Amazon often has anti-consumer practices that likely evolved because they improve the bottom line. You'd think making it easier to search reviews, or to filter reviews for a specific version of a product, would be simple fixes, but testing has likely shown those things decrease sales.
>like that I’m probably not gonna buy a second air fryer immediately after buying a first one,
This comes up all the time, but the likelihood that {person} will buy a second {item} of type {X} increases once they have already bought one, and it's one of the primary indicators.
However, it does seem to me that I have never done this; even when I am unsatisfied with an item, I soldier on for at least a year or two before saying "aw, screw it". Perhaps the machines would be more impressive if they could recognize which people will never buy a second expensive item immediately after buying the first.
Yeah that is a pretty naive algorithm though, especially if you can explain it in one sentence. After all that, that’s the best algorithm they got? If you buy one thing you might buy two. Pretty sure most of us could implement that in a line or two of code and save billions in research. And even if this whole research exercise was necessary to discover this fact, it’s still not a “smart” algorithm if there are a ton of false positives.
And it’s still a relatively trivial use case—-if it comes to potentially miring someone in legal issues, I hope the “algorithm” can do better than that.
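The "line or two of code" version really is that short. A hypothetical sketch of the naive rule described above (the data structure is an invented example, not Amazon's actual system):

```python
# Toy version of "if you bought one {item}, recommend more of it" --
# the naive behaviour the thread complains about, in a few lines.
def recommend(purchase_history: list[str]) -> list[str]:
    # Recommend (deduplicated) whatever the user already bought.
    return sorted(set(purchase_history))

print(recommend(["air fryer", "usb cable", "air fryer"]))
# -> ['air fryer', 'usb cable']: yes, it suggests a second air fryer.
```

Which is exactly the point: if billions in ML research converge on behaviour indistinguishable from this, the "the algorithms learn" reassurance in the proposal deserves skepticism.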
I've been told (I don't know if it's true) that showing you an ad for something you just bought reinforces your satisfaction, making you significantly less likely to return the product.
It's about the weaknesses of men who would gladly change a reduction of the chocolate ration into an increase, because it's their job to lie and alter the past, believing they are doing god's work.
Also, in 1984 surveillance is secret, not publicly stated in a formal document visible by everybody.
Better examples of 1984 in action: the DDR, the NSA.
"Big Brother" was a pretty big part of 1984, and I definitely have gotten "Big Brother" vibes from the EU these last few years, under the pretense of "we know better than you and we want to protect you".
Having a good will doesn't make it okay for the EU to become a helicopter parent, nor do I like the prospect of that good will evaporating and them having access to what is effectively more and more surveillance.
Yeah, and what Big Brother does to you if you don't follow the rules is kept secret, by secret police agents, because:
1 - Big Brother doesn't physically exist
2 - people would revolt if they knew what really happens to deviants, so it's imperative they are kept in the dark (“Until they become conscious they will never rebel, and until after they have rebelled they cannot become conscious.” )
“If you want to keep a secret, you must also hide it from yourself.”
So 1984 does not apply at all in this case, because the document is public and everyone can read it and eventually revolt against it
Smash down doors, get embarrassed a few times, pass legislation to ban "roleplaying as a minor in text conversations." Then there's no reason to be embarrassed again!
More practically, I would assume "age of the users of a device" is a well-derived bit of information available to anyone who asks for it with the proper letterhead. Or who just helps themselves to it.
My initial thought was to give adults an adult service, but that also means adults need to provide information (which a lot of adults wouldn't be okay with) so they can filter the kids out. Then you'd still have to run the proposal on every service kids and adults have access to. Then on top of that, you still need to filter out cases where it's just minors, but at the same time you can't filter all of them, because it might be a case of sexual violence between minors.
Legislation of a similar tone is already the law in many places. Some places ban pornography of adults with flat chests. Some places ban lewd drawings of characters who aren't sufficiently curvaceous. Many platforms hosting content such as literotica and erotic audio recordings ban content that describes or roleplays as minors.
I thought most of these proposals were around doing device-local analysis and reporting (I suppose specific clients might not, but if they can mandate this at the device level, and make it default on iOS and Android, they're going to get almost all users.)
No reason an XMPP client wouldn't be forced to include this as well. Reminder that iMessage and Signal are encrypted communications but are surely a target of this sort of lawmaking.
That's one of them, yes. I've not had the time to read the whole thing today, and probably won't have time, so I'm trying to encourage people to dig through it a bit themselves and get a feel for just how "But the Children!" it is, as a reason to violate every bit of privacy they think they can get away with.
> ...Thereby, the proposed Regulation limits the interference with the right to personal data protection of users and their right to confidentiality of communications, to what is strictly necessary for the purpose of ensuring the achievement of its objectives, that is, laying down harmonised rules for effectively preventing and combating online child sexual abuse in the internal market...
For all intents and purposes, today, the 11th of May, is the day the EU tried to remove the right to privacy in communications in the name of protecting children. Of the hundreds of measures and police actions they could take, they seem to think this is the most important.
> That's a very real path to "You have a photo of your child in the bathtub and the police smash down your door."
There's also the simple fact that the reporting of CSAM involves someone seeing CSAM. You see it and then you report it so it gets removed. Does the copy in your browser cache incriminate you? I can also imagine someone deciding they need to save it to forward to the police, whether or not that's useful.
Just the brazen "Yeah, privacy and stuff are important, But the Children!" phrasing of it. They're not even trying to pretend it's anything but an excuse anymore.
But the OP's sentiment still stands. Reputational damage, stress, potential job loss, etc., just because you wanted to take a picture of your daughter or son playing in the bathtub with a rubber duck to send to the grandparents.
I mean, my parents still have those pictures in photo albums, and they have been shown multiple times to family friends or relatives - nobody has obvs objected - but there is no third party with zero context watching over their shoulders deciding whether this is child porn or not. (Plus I think I look adorable with my duck shampoo, and I recently found it in the basement when doing cleanup <3). And I very much like those pictures, as they are interesting moments of my life that I don't have memories of but are ME.
But to extend this - maybe my neighbours think that is super wrong, maybe my employer does. In its current form the regulation allows for a lot of private data spillage to other people.
Even if it's just "yes, wrong flag by the algo" -> someone gets to see me or my children naked - wtf, I'd argue that's abuse.
I think the point is the risk posed by automation within inadequate workflows.
Which, incidentally, is already a reality in areas you yourself mentioned. And similarly, there have been real cases of «reputational damage, stress, potential job loss» in other areas that have caused trouble for citizens after inadequate workflows, sometimes owing to imperfect automation.
> Nobody can smash your door here without proper authorization from a court.
This is an urban legend. My country, Germany, often prides itself on having protections here, but that is a lie. In reality the courts are so overwhelmed that they basically sign any request the executive brings in. We had that for a Twitter comment that hit the news a few months ago, but we also have it for people buying flowerpots online because they were suspected of growing weed. No joke. Surveillance is again very strong here.
It is true, though, that they mostly at least knock, or they wait until you are not at home. But the barriers are very low for a supposedly enlightened society. Recourse against illegal searches is very limited as well, and some searches end with quite a bit of property damage.
Some geniuses thought it would be a prudent idea to evaluate the performance of the police by the cases they close. So they really have to search for crime in every niche. State incompetence of the highest level in the highest positions, and people have very little protection against that - and if at some point there isn't enough crime...
They tend not to even say sorry if their search was wrong. Neither the executive nor any other government entity. The police are the least guilty, although the system relies on them only requesting reasonable searches.
Anyway: this has nothing to do with what I wrote, which is not an urban legend at all!
Doors have rarely (never AFAIK in my country) been smashed by the police, not even in those cases where there were actual terrorists inside the building.
Because our police forces are not trained to be war soldiers and are not armed to the teeth.
Now that covid is waning, we're back to 'the kids' as the way to grant governments absurd amounts of power over individuals. None of these excuses is put forth to further the argument; they exist to create a trap for detractors, so they can be labelled as evil people: killing grandma, hating kids, supporting terror, etc.
One can believe in covid restrictions, but one would be naive to think there wasn't a huge cost imposed on individuals, and it's not immediately obvious that more restrictions meant less disease. Dismissing governments' ability to use fear to enact powers they previously didn't hold, just because one particular fear is held more dearly, is how all of these things pass. Child abuse and covid are both real, and both are and were used to give governments power that most free societies would reject outright and without further question if neither existed.
COVID was unknown. It's perfectly reasonable to shut things down when there's an unknown disease going around, even if it turns out not quite apocalyptic.
One of the special powers it grants him is no-bid contracts, among which are hundreds of millions of dollars for media organizations and for public opinion polling of the population. He also postponed the obligatory investigation that comes with invoking the emergency measures until after the elections in October.
I can see where you're coming from with plumbing and literacy. Government monopolized water utilities and education.
But the greatest increases in life expectancy were from the late 1800s to the mid 1900s. The improvements have slowed since then, appearing to plateau in the last decade or so.
And it just so happens that in that time period, the government had way less control over the healthcare system in the US.
Literacy is actually decreasing in recent years, as education spending by the government increases. Higher education has never been more expensive, thanks to guaranteed student loans provided by the federal government.
If you're more talking about developing countries, then the correlation is far stronger with level of economic development than with government involvement. In fact government involvement is often inversely correlated, slowing development.
Economic growth caused these things to go up; the governments (which also existed in past periods of human existence) caused growth to be much slower than it could have been.
Don’t confuse government causing an increase in government spending on something with causing an increase in the actual supply of that thing. When governments restrict the supply of healthcare and housing (which they do, through professional licensing and zoning regulations), they can spend more on them without increasing supply.
What about the case where many of these issues almost always are nails, regardless of how convoluted or complicated they may look to others?
Andrew Cuomo falsified and hid information regarding his mismanagement of elder care facilities. Hong Kong has been placed under de facto martial law by the CCP with the goal of crushing dissent. Various governments and government organizations have spied on their citizens through COVID tracking apps or secret purchases from databrokers. Barrack Obama recently delivered a speech at Stanford calling for an end to free speech on the net under the guise of "regulating social media" and "stopping the spread of disinformation".
Why is sacrificing rights to governments still considered a necessary solution to problems that governments create themselves? Have you learned nothing from history or those who lived through it long enough to write on the subject like Orwell and Arendt?
Hong Kong fighting against CCP takeover has nothing to do with covid. You can't spell Barack. And you misinterpret what he said. I've likely read Orwell and Huxley more than most. Again, you only seem to see bad faith.
Wait, When did Orwell live in Nazi Germany or the USSR? And if you're going to compare current day to Arendt's experience in the Holocaust, then well, we just disagree strongly.
The irony is that "the kids" is always the primary argument put forward to pass legislation that has little to do with "the kids", while the one time we had to worry about "the kids" and their mental well-being, adults just decided to sacrifice them to avoid getting a bad flu.
Let's be clear, this has nothing to do with protecting kids and everything to do with backdoors. The idea here is that anyone who opposes this can easily be accused of being a child molester.
I am very much opposed to the Commission's proposal - in fact I even went to a demonstration against it today. It's a horrible privacy invasion and very bad in many different ways, but...
But I think your take is not true. I can imagine that it might just be a really misinformed proposal to actually go against child abuse. I hope.
I think the poster above you is right, though. If you demonstrate against it, it is easy for someone to just say "Hey everyone, this person is for kids getting raped."
I don't quite understand, to what end? This is clearly a good step towards authoritarianism, but then what? People "disappearing" and more population control?
The world sometimes seems so ridiculous I can't believe it.
In the end it may just be about (corrupted) power. Those who are in power can use the data they gather through surveillance to stay in power, even without "disappearing". It's sufficient to be able to identify whistleblowers and have that be known in order for fewer people to blow the whistle.
The headline is horribly biased (could we change it to "think of the children!"?), but it seems to me that if you want to protect children from pedos, you would need to read both parties' email - so I don't understand how breaking privacy threatens children.
Mandating that takedowns go through this agency means states won't be able to hire private companies that specialise in takedowns (and are faster at them). Takedowns will take way longer, which hurts the children.
>We found that the specialist contractors who take down phishing websites for banks would typically take six hours to remove an offending website, while the Internet Watch Foundation – which has a legal monopoly on taking down child-abuse material in the UK – would often take six weeks.
[...]
>So it’s really stupid for the European Commission to mandate centralised takedown by a police agency for the whole of Europe. This will be make everything really hard to fix once they find out that it doesn’t work, and it becomes obvious that child abuse websites stay up longer, causing real harm.
I would have thought that in most (all?) EU countries, there are already laws in place allowing law enforcement to read the private correspondence of suspects. From my reading, this law would also "allow the widespread scanning of people’s private communications" (from an article linked in the post: https://edri.org/our-work/european-commissions-online-csam-p...), so it's not necessarily limited to suspects.
I'd love to know what percentage of those CSAM numbers is inflated by 14-17 year olds sharing their own pictures (and their SOs leaking them to someone else w/o permission), versus what most people imagine when the government yells "CP!!!"
> Approximately 200,000 people in 38 states are currently on the sex offender registry for crimes they committed as children. Some were put on the registry when they were as young as eight years old.
This might not be universally true for much longer. The Netherlands proposed changing the law so teens are okay sexting one another as long as there is mutual consent and it's not done for money [0]. However, it has been stalled for a few years now. (I initially thought it was accepted, but it's now forecast for 2024.) I also suspect that, despite it being illegal, very little action is undertaken while the above requirements are met.
> Mutual, consenting sexting between two teenagers can't possibly be illegal, can it?
If those teenagers are minors and are sending sexual pictures of themselves, they are sending sexual pictures of minors. Without an exception in the law to specifically allow this, that can be and is illegal by the laws that criminalise possession or distribution of sexual pictures of minors. There should be an exception in the law for it, but until there is, there isn't.
> Mutual, consenting sexting between two teenagers can't possibly be illegal, can it?
It absolutely is.
It’s still consumption of illicit material depicting someone under the age of 18, which is a law being broken, even if it's by someone younger than 18.
Ask me how I know.
I'll give you a hint: I have an absolutely awful sounding thing on my criminal record from when I was 15.
This is the true face of the EU. Anyone who opposes it will be branded as a child molester. Even if the parliament rejects this proposal, the next power hungry authoritarian can simply try again, until eventually one parliament will pass it.
You can be either in support of total surveillance, or in support of abolishing the EU. There is no middle ground.
> Even if the parliament rejects this proposal, the next power hungry authoritarian can simply try again, until eventually one parliament will pass it.
Indeed, in the words of the previous President of the European Commission:
“We decide on something, leave it lying around and wait and see what happens. If no one kicks up a fuss, because most people don't understand what has been decided, we continue step by step until there is no turning back.” https://en.wikiquote.org/wiki/Jean-Claude_Juncker#1999
The current President of the European Commission has been a long time advocate for Internet control:
“However, in the digital community, her posturing for the EU’s top job has caused concern. In 2000, when Von Der Leyen was Families minister, she advocated for the mandatory blocking of child pornography online via a list of offending websites managed by police authorities. Germany’s Pirate Party claimed that the law would lead to censorship of the internet.”
“The outcry that resulted was dubbed the ‘Zensursula’ scandal, blending the German word for censorship (“Zensur”) and her name (“Ursula”). The move was eventually repealed after it being challenged broadly, including a petition that had garnered tens of thousands of signatures.” https://www.euractiv.com/section/digital/news/digital-brief-...
The EU has nothing to do with it. Just look at the UK: outside the EU and still doing the same thing. It's a global trend. It's more or less irrelevant what kind of country you live in; people are pushing to end E2E encryption.
If you replace ‘EU’ with ‘US’ in that comment, its internal logic and validity does not change whatsoever.
So: “You can be either in support of total surveillance, or in support of abolishing the US. There is no middle ground.”
I would also guess that you could substitute the EU with any single EU country with the same result. Abolish Germany, abolish France, abolish Italy, abolish all parliaments, there's no middle ground.
Oh please, that's like saying "you either become rich or kill yourself". Ridiculous.
This kind of shit pops up everywhere, not only in the EU. I have no idea what tf is driving it; probably the same thing that turned many countries into dictatorships.
Human nature, really, but how these assholes always get into power is a mystery to me.
I am still worried about actual deeds, especially those carried out by agencies working secretively under practically no control by the people, agencies that sometimes also spy on the very people they are sworn to protect.
Anyway, the EU is not a State, so the EU can't technically build a surveillance State.
Should we label everything "the true face of" (meaning they are evil) when the representatives of 450 million people lay out some document we don't like?
> And it becomes obvious that child abuse websites stay up longer
Versus
> That is to enable the new agency to undermine end-to-end encryption by mandating client-side scanning
This article needs to be clear about what these agencies are after: E2E encrypted messaging apps, or 'websites'. Because a messaging app is not a 'website'.
It's even worse. In Germany, the police and the agencies responsible for prosecuting pedophiles active on those websites supposedly do not take the websites down, even after they have gained control of them, at least in some cases.
I remember another article, which I can't find right now, with an interview, where a spokesperson explicitly said that taking down the sites is not their responsibility.
The EU is honestly just so awful when it comes to anything relating to technology. The number of insane proposals I've seen from them over the years... It's like they took the worst political takes on tech from each EU country and smushed them together into a big ball of awfulness.
When it comes to privacy invasion the EU has been a champion in curbing massively intrusive corporate practices for years now.
Some of it is awful yes, but other high visibility projects have been amazing.
I'm glad to be European when I look at the insanity that is the American tracking and personal data brokers industry.
Yes, at this point in the internet's history, given that most people have decided to depend on big platforms instead of properly developing the distributed alternatives that have been available for many years, it's hard to blame anybody except ourselves.
I don't agree. "Ourselves"? Did I have a choice in my entire social network choosing to use Facebook and Twitter? Did I have a choice in LinkedIn being my best option for getting employment? No.
I talk about this stuff all the time. I blog [1] about it and complain about it to my family and friends until it gets annoying. And it _does not matter_ how much individual effort I put into this because people have no incentive to change.
Yes, I think it's hard for normal people to use those networks, because at this point they are unable to replicate the network effects of centralized media - there is no simple way of starting from one-to-one communication and building from there to many-to-many.
But I think we people who are into tech should have coordinated more among ourselves, so that at least we used those alternatives and helped them grow until they were mature enough for regular people.
Yes, because LinkedIn is not the best way to get a job. Jobs are the one area where you have better choices. Indeed is much bigger anyhow.
For social networks, did you have a choice? Yes, but your choices are: join the network others are on, get others onto networks you want, or make friends on networks no one you know is on. It has always been like this: if the group is going to a movie, you can tag along, you can try to influence which movie, or you can throw a party and get the others to drop the movie idea.
In your case I would pick new friends and become the leader of that pack. Family I would go visit.
> Yes, because linkedin is not the best way to get a job.
In my admittedly short experience, LinkedIn and in-person networking are tied on this front.
> your choices are join the network others are on, get others on networks you want or make friends on networks no one you know is on
I think it's bad that we're expected to choose between existing communities and our privacy, our consent, and our dignity. The correct way to address this is regulation.
Well, there's nothing properly developed; that's why we are still here. The basic building blocks are all there: cryptography for privacy and identity, and p2p networks for data transfer, which have worked in the piracy world for many years.
What is missing is putting it all together and being able to replicate the network effects you get from centralized media - for instance, if you are able to reach one person, being able to reach all of their friends in a simple way as well (supposing they want to be reached).
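The network-effect property described above - reach one person, and through them reach their friends, and their friends' friends - is just transitive closure over a social graph. A toy sketch (hypothetical data, not any real protocol) of how small that core mechanism is:

```python
from collections import deque

# Invented example friend graph; in a real p2p network each node would
# only learn its neighbours' contacts with their consent.
friends = {
    "alice": ["bob"],
    "bob": ["alice", "carol"],
    "carol": ["bob", "dave"],
    "dave": ["carol"],
}

def reachable(start: str) -> set[str]:
    # Breadth-first traversal: everyone discoverable from one contact.
    seen, queue = {start}, deque([start])
    while queue:
        for f in friends.get(queue.popleft(), []):
            if f not in seen:
                seen.add(f)
                queue.append(f)
    return seen - {start}

print(sorted(reachable("alice")))  # -> ['bob', 'carol', 'dave']
```

The graph traversal is the easy part; the hard, still-missing parts are identity, consent, and discovery working smoothly enough that regular people never see any of this.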
So Apple's snooping initiative produced exactly the result that was predicted? The article doesn't quite expand on this, but it sounds like the lawmakers and the agency are jolly happy to push their own variant of client-side scanning on everyone.
The whole story with client-side scanning resembles Snowden's description of NSA's workday: people are so used to the idea of surveillance day in and day out that it doesn't occur to them for a second that it might not be appropriate.
I get the impression that a lot of police forces are "offense"-minded, and they are pushing for these kinds of measures. They want ways to break into systems and intercept communications.
On the other hand civilians are more "defense" minded, because they're not allowed to attack in the first place. So their preferred M.O. is to protect systems and encrypt communications.
A lot of governments apparently don't have defense-minded departments or units that are as large or as influential, or we wouldn't keep seeing this topic come back.
>Finally, the proposed Regulation contains safeguards to ensure that technologies used for the purposes of detection, reporting and removal of online child sexual abuse to comply with a detection order are the least privacy-intrusive and are in accordance with the state of the art in the industry; they perform any necessary review on an anonymous basis and only take steps to identify any user in case potential online child sexual abuse is detected. It guarantees the fundamental right to an effective remedy in all phases of the relevant activities, from detection to removal, and it limits the preservation of removed material and related data to what is strictly necessary for certain specified purposes. Thereby, the proposed Regulation limits the interference with the right to personal data protection of users and their right to confidentiality of communications, to what is strictly necessary for the purpose of ensuring the achievement of its objectives, that is, laying down harmonised rules for effectively preventing and combating online child sexual abuse in the internal market. (from the proposal)
It doesn't seem to me that the document goes into much detail on the effects this will have on privacy rights, both directly and indirectly, or on how it should be implemented, despite briefly bringing up encryption concerns from the opposition. I can't see this anonymized data collection working as claimed, given the sensitive nature of the subject at hand (private conversations). I also wonder about the enforcement powers it would allow (imposition of "remedies", fines, periodic penalty payments, and the "power to adopt interim measures to avoid the risk of serious harm") and what the demand for rapid content deletion means for small-to-medium online platforms and personal sites, as has been hinted in the past. I really do not trust companies to take a privacy-minded approach, nor do I consider my government capable of doing so, since it does not benefit it in practice. (The document does mention strong support from "law enforcement authorities".)
Totally off topic ... I am to a certain degree curious what video hides behind this link. Maybe I'd even want to watch it.
But at the same time I am too turned off by the fact that I'd actually need to visit the link just to learn the title of this video to actually do the click.
Rather than playing all these games later trying to catch and track it, I can't help but wonder if it could be better halted by training kids early on when and how to get help, to stop it before it's produced.
Otherwise it seems it will just be a technological cat-and-mouse game that destroys privacy rights and empowers bad actors to plant material maliciously (swatting 2.0).
At least for the Netherlands, this is what used to happen (and I hope it still does). Schools educated students on various topics regarding power dynamics in relationships and the problems that can spawn from them.
Of course, some teens are going to get tempted and do stupid things either way. Imagine the least innocent thing you knew about your peers back in those days; many will likely have been tempted to go off the deep end in some way. It's difficult to give children safety and privacy when they actively seek to dare themselves and one another, don't heed warnings, etc.
I suspect it worked in the sense that some teens took precautions. I don't believe my classmates had anything bad happen to them, but a fair few of them definitely put themselves into situations that could have gone badly if the other party had been a little more malicious (power imbalance, blackmail, nudes spread, etc.). I also suspect that what I heard was only the tip of the iceberg of their actual experimentation.
Like I said, some teens are still going to do stupid stuff despite being warned. It's difficult to give them autonomy, security, and privacy all at once when they won't heed warnings. I like to believe that not invading their privacy and simply teaching them is enough, but others might feel that the EU's urge to be a universal helicopter parent is justified when the consequences are too high.
"Protecting kids" is a "way of the devil": a pretext used by weak bureaucrats to create tools for totalitarian control, as government can't and shouldn't be trusted with anything proactive or en masse!
Their true motives are far from protecting kids. Otherwise they would install security cameras in every corner of every church on the continent and, more importantly, put bodycams on pastors 24/7.
> Because that's how you protect people privacy, right???
I'm not promoting such practices. In fact, I was just trying to apply the legislators' logic: if they are truthful about their "protecting children" slogan, let them start their policing where the predators are most likely to be found instead of targeting the entire population.
> their policing where the predators are most likely to be found
Well, predators are more likely to be found in schools, which are far more numerous than (actively used) churches (and temples and synagogues and mosques...).
Does anyone know why the EU is doing this? Everyone knows that the children thing is a pathetic excuse, but why do they want surveillance? I highly doubt that messaging is a big risk to EU stability. So why make such an effort?
It seems that the EU is pushing for a China-style surveillance economy. I'm not sure who is really behind it, and it's starting to feel quite shady. No one really pays attention to the EU, because it's supposed to be benign.
It's the same as the abortion issue: there are a lot of people who honestly, sincerely, earnestly believe in it. Then there are those who share the belief but see it as part of something bigger.
That last group is the fucking fascist small-dick-energy idiots, who believe in an order based on a hierarchy of people, classes, races, countries, etc.
Security and freedom are "platonic" ideals. Neither exists in the abstract, as a real-world thing, and you can't find one without the other in the wild (in a Disney-like world, maybe, but not in the real world, in the presence of others, that is, people who want to deprive you of either or both and can benefit from doing so).
Trivially speaking, if some thugs can just come and beat you with no police or legal recourse available to you, you have neither privacy nor security. Both are at their mercy.
You could of course defend yourself, but then you're still getting your freedom through security; it's just that in this case you're obligated to provide that security on your own.
So we trade some freedom (giving the state the ability to enforce laws and have police) in exchange for security. And vice versa.
But in any case, my point above was different: what TFA describes is not a tradeoff between privacy and security, it is giving up privacy for no real benefit. If anything, losing encryption costs us both privacy AND security.
When people say "privacy and freedom are more important than security" it's in regards to privacy and freedom being curtailed by the government in order for the government to provide security. Don't be absurd.
Privacy is a form of security, though (security for your own thoughts and actions in your own home, and in, on, and near your person). At the very least it is a domain that strongly overlaps with security.
And good security is what gives you the safety to be free.
If you want to sacrifice freedom for security, you might end up putting the cart before the horse.
And of course sacrificing privacy for security is at best balancing 2 different kinds of security. You're not necessarily gaining security.
In this case it means there's no guarantee that children will actually be safer, on net, if people can scan private communications.
As usual, the solution lies in peer-to-peer apps where there is no centralized provider that will have any responsibility or oversight. Wait... isn't that what infringers already use?
As a minor who has been messaged a few times by people wanting nudes: some of them ask for Instagram or Snapchat usernames (are they trying to get caught??) and some ask for Wickr (which does seem to be E2EE, so less stupid of them). I haven't seen any others, but this hasn't happened many times.
I'm wondering about deep-fake CP. Under the above narrative about protecting the kids, no kids would be harmed. Isn't this just going to generate insane amounts of deep-fake CP once abusers learn you can't get prosecuted for it?
My assumption is that you can't but I might be wrong - haven't read the whole document. So correct me if I'm wrong.
I'm probably wrong here but don't you need a ton of real underlying content to make a deepfake? Like the reason you only see convincing ones of world leaders and celebrities is that they have their face out there a ton.
By that logic you would need a ton of actual CP to create CP deepfakes.
Such laws are the rich projecting their own thoughts onto the populace. They believe that if the populace is allowed to talk to each other in secrecy, they'd inevitably conspire to take power from the landlords, because those landlords are already conspiring to do exactly that.
"Think of the kids" is just another excuse to break privacy now that terrorism and COVID aren't interesting anymore. I wonder what's going to be next? Russian propaganda and fake news?
This is not about efficiency. This is about trust.
We do not trust these enormous organizations we call governments with unlimited power, because they then become a threat to society itself. The 20th century is evidence of the threat of centralized power, particularly over the control of information and communication.
So when a government claims the power to read all conversations in order to prevent X from happening, X is irrelevant. I don't even care to hear their justification. They cannot be given this power. It's an existential threat to a free society.
We will soon have robot dogs patrolling our streets and barking at us while we scream in anguish out of our windows for food if this becomes law. Any rebellion or resistance will become impossible forever once we can't communicate in private. You do realise this?