Hacker News

https://ec.europa.eu/home-affairs/system/files/2022-05/Propo...

This contains the text of the requirements. Search for "Grooming" and you'll find some particularly horrifying chunks regarding "scanning of conversations for content." Plus, not only known existing CSAM, but also identifying new material. Because don't you trust machine learning that can't tell the difference between a cat and a ferret to know what is and isn't CSAM? That's a very real path to "You have a photo of your child in the bathtub and the police smash down your door."

I like this bit, though:

> The processing of users’ personal data for the purposes of detecting, reporting and removing online child sexual abuse has a significant impact on users’ rights and can be justified only in view of the importance of preventing and combating online child sexual abuse.




I assume you meant this piece with "scanning of conversations":

>As mentioned, detecting ‘grooming’ would have a positive impact on the fundamental rights of potential victims especially by contributing to the prevention of abuse; if swift action is taken, it may even prevent a child from suffering harm. At the same time, the detection process is generally speaking the most intrusive one for users (compared to the detection of the dissemination of known and new child sexual abuse material), since it requires automatically scanning through texts in interpersonal communications. It is important to bear in mind in this regard that such scanning is often the only possible way to detect it and that the technology used does not ‘understand’ the content of the communications but rather looks for known, pre-identified patterns that indicate potential grooming. Detection technologies have also already acquired a high degree of accuracy, although human oversight and review remain necessary, and indicators of ‘grooming’ are becoming ever more reliable with time, as the algorithms learn.

That whole paragraph sounds a lot more terrifying and intrusive. At least they admit the flaws, but I haven't quite had the best experience with the EU's way of handling this stuff.


> “becoming ever more reliable with time, as the algorithms learn“

I mean, considering Amazon has put kajillions of dollars into its learning algorithms, and yet here in 2022 those algorithms still aren't smart enough to figure out trivial stuff, like that I'm probably not gonna buy a second air fryer immediately after buying a first one, and to stop advertising them to me. What hope do we have for this new algorithm?


It just shows that the ones making the decision are either naïve or have an ulterior motive. They tried to justify it with the following footnote:

>For example, Microsoft reports that the accuracy of its grooming detection tool is 88%, meaning that out of 100 conversations flagged as possible criminal solicitation of children, 12 can be excluded upon review and will not be reported to law enforcement; see annex 8 of the Impact Assessment.

Of course, anyone could poke a dozen holes in this statement. "Possible", so there still needs to be a ton of review. Microsoft reporting on its own detection tool. Etc.
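To make one of those holes concrete: a precision figure like that depends heavily on how rare grooming is among the conversations being scanned. A rough back-of-the-envelope sketch (every number below except the quoted 0.88 is an invented assumption) shows the false-positive rate the tool would need at population scale:

```python
# Base-rate arithmetic around the quoted "88%" figure.
# Only the 0.88 precision comes from the footnote; the rest is assumed.

precision = 0.88      # quoted: 88 of 100 flagged conversations confirmed
sensitivity = 0.90    # assumed: tool catches 90% of real grooming
prevalence = 1e-6     # assumed: 1 in a million conversations is grooming

# Precision = TP / (TP + FP). Solve for the false-positive *rate*
# on innocent conversations needed to hit that precision:
tp_per_msg = prevalence * sensitivity
fp_per_msg = tp_per_msg * (1 - precision) / precision
fpr = fp_per_msg / (1 - prevalence)

print(f"implied false-positive rate: {fpr:.2e}")
# At an assumed 1e9 scanned conversations per day:
print(f"innocent conversations flagged daily: {fp_per_msg * 1e9:.0f}")
```

Under these assumptions the tool would need a false-positive rate of roughly one in eight million innocent conversations, far beyond anything published text classifiers achieve, which suggests the 88% was measured on prefiltered, high-prevalence test data rather than on live traffic.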


The ulterior motive is gaining total control over citizens in EU member states. Which is also why the EU digital ID [0] will soon be introduced, with a Euro CBDC to follow in the coming years. It's quite likely the Euro CBDC wallet will be linked to (or part of) the EU digital ID.

I expect at some point the EU digital ID will be required when accessing the internet, or perhaps when signing up for email, creating a Twitter account and such.

EU member states will become a carbon copy of China. People will worry about what they can say, out of fear that they will not be able to spend their hard-earned Euro CBDC.

---

[0]: https://www.politico.eu/article/eu-europe-digital-id/


> I expect at some point the EU digital ID will be required when accessing the internet, or perhaps when signing up for email, creating a Twitter account and such.

I'm not going into the debate about whether there's an ulterior motive, but current eID solutions in various EU countries are considerably less draconian than what you're describing. The current solutions are going to tie into the new one, after all.

In Finland, as an example, we simply have a chip with a cert/key on ID cards. People without card readers or who don't want to use them are free to use their netbanking logins instead. Being able to use the same solution in other EU countries is of rather limited value to most, but it will ease up paperwork submission when travelling, moving to another member state, and so on.


The problem is that once an ID system is established and becomes ubiquitous, a quick change in legislation can force online services to leverage that ID. Protection of intellectual property, CSAM: a pretext is easily found. It happens faster than you can blink, and then the free internet in Europe would quickly die. The best protection against that is to have alternative ID systems, or simply anonymous usage when no transactions are involved.

And a lot of services would be happy to use a state ID, because they could tie their advertising to a real and unique person. To guard against that, the smart move is to reject a little convenience now for long-term benefit.


Pretty much the whole of the EU already has some form of government ID (not sure if there are any exceptions, but if there are, they are not many).

People are used to them, so it's really just a matter of time before it goes digital.

I don't disagree with your observations; I just think it's a foregone conclusion at this point.

That said, I think the value lies in creating new decentralized (and less popular) solutions. They will probably never have mass appeal, but at least they will be there for the people who want or need them.


Yes, and sadly the newest version even includes fingerprints, which is completely ludicrous; we have it because some countries wanted to implement it for domestic self-gratification. There are regional differences, but here nobody uses their government ID for online services.


As long as we have democracy, we can vote for any party we want, including ones that would take our country out of the EU.

So unless you can show me how a move away from democracy could happen, we won't turn into China anytime soon.


Democracy didn't really protect us from intelligence agencies sniffing communication data, so I don't see how it would protect anything here. That's not an intrinsic fault of democracy; rather, the EU still has democratic deficits in this area.


> It just showcases either the ones making the decision are naïve or they have an ulterior motive

It's the same AI Kool-Aid Silicon Valley has been peddling for years, using it to justify all kinds of crap. So you shouldn't be surprised.


The little fact that Microsoft then gets to read random interactions flagged by its own black box should really induce doubt here.


Maybe they aren't optimizing for conversions but instead for running out ad budgets? Amazon often has anti-consumer practices that likely evolved that way because they improve the bottom line. You'd think that how hard they make it to search reviews, or to filter reviews for a specific version of a product, would be simple fixes, but testing has likely shown those things decrease sales.


>like that I’m probably not gonna buy a second air fryer immediately after buying a first one,

This comes up all the time, but one of the primary indicators that {person} will buy a second {item} of type {X} is that they have already bought one; the likelihood actually increases after the first purchase.

However, it does seem to me that I have never done this. Even when I am unsatisfied with an item, I soldier on for at least a year or two before saying "aw, screw it". Perhaps the machines would be more impressive if they could recognize which people will, and will not, buy a new expensive item again immediately after buying one.


Yeah, that is a pretty naive algorithm, though, especially if you can explain it in one sentence. After all that, that's the best algorithm they've got? "If you buy one thing you might buy two." Pretty sure most of us could implement that in a line or two of code and save billions in research. And even if this whole research exercise was necessary to discover that fact, it's still not a "smart" algorithm if it produces a ton of false positives.

And it's still a relatively trivial use case; if it comes to potentially miring someone in legal issues, I hope the "algorithm" can do better than that.
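For what it's worth, the "line or two of code" version of that rule, plus the fix the thread is asking for, might look something like this (the product categories and accessory mapping are invented purely for illustration):

```python
# Toy recommender: the naive co-purchase rule and a one-per-household
# fix. All product data here is made up.

DURABLES = {"air fryer", "washing machine", "laptop"}  # assumed buy-once items
ACCESSORIES = {"air fryer": "air fryer liners"}        # invented mapping

def naive_recommend(last_purchase: str) -> str:
    # "If you buy one thing you might buy two":
    return last_purchase

def less_naive_recommend(last_purchase: str) -> str:
    # Don't re-advertise a durable good; suggest an accessory instead.
    if last_purchase in DURABLES:
        return ACCESSORIES.get(last_purchase, last_purchase)
    return last_purchase

print(naive_recommend("air fryer"))       # air fryer
print(less_naive_recommend("air fryer"))  # air fryer liners
```

The point being: even the obvious fix is a handful of lines, so "the algorithms learn" is doing a lot of heavy lifting in the EU's argument.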


I've been told (I don't know if it's true) that showing you an ad for something you just bought reinforces your satisfaction, making you significantly less likely to return the product.

I really wonder if it's true.


This is so 1984. Even if this remotely worked, how would they tell between actual minors and adults roleplaying?


> This is so 1984

1984 has never been about this.

It's about the weaknesses of men who would gladly rewrite a reduction of the chocolate ration into an increase, because it's their job to lie and alter the past, believing they are doing God's work.

Also, in 1984 surveillance is secret, not publicly stated in a formal document visible by everybody.

Better examples of 1984 in action: the DDR, the NSA.


"Big Brother" was a pretty big part of 1984, and I definitely have gotten "Big Brother" vibes from the EU these last few years, under the pretense of "we know better than you and we want to protect you".

Having good will doesn't make it okay for the EU to become a helicopter parent, nor do I like the prospect of that good will evaporating while they retain access to what is effectively more and more surveillance.


> "Big Brother"

Yeah, and what Big Brother does to you if you don't follow the rules is kept secret, by secret police agents, because:

1 - Big Brother doesn't physically exist

2 - people would revolt if they knew what really happens to deviants, so it's imperative they are kept in the dark (“Until they become conscious they will never rebel, and until after they have rebelled they cannot become conscious.” )

“If you want to keep a secret, you must also hide it from yourself.”

So 1984 does not apply at all in this case, because the document is public and everyone can read it and eventually revolt against it.


The point is that the NSA is much more Big Brother, and it has been around for a decade.


I think of "1984" as a wider concept than just the conditions laid out in the book. I suspect most people do these days.


Smash down doors, get embarrassed a few times, pass legislation to ban "roleplaying as a minor in text conversations." Then there's no reason to be embarrassed again!

More practically, I would assume "age of the users of a device" is a well-derived bit of information available to anyone who asks for it with the proper letterhead. Or who just helps themselves to it.


My initial thought was to give adults an adult service, but that also means adults need to provide information (which a lot of adults wouldn't be okay with) so the kids can be filtered out. Then you'd still have to apply the proposal to every service both kids and adults have access to. And on top of that, you still need to filter out cases where it's just minors, but at the same time you can't filter out all of them, because it might be a case of sexual violence between minors.

It's a headache no matter how I look at it.


Legislation of a similar tone is already the law in many places. Some places ban pornography of adults with flat chests. Some places ban lewd drawings of characters who aren't sufficiently curvaceous. Many platforms hosting content such as literotica and erotic audio recordings ban content that describes or roleplays as minors.


95% of men aged 20-30 will be flagged for telling their equally aged friends to "suck their dick".


A lot of people are going to start using XMPP/OMEMO.


I thought most of these proposals were around doing device-local analysis and reporting (I suppose specific clients might not, but if they can mandate this at the device level, and make it default on iOS and Android, they're going to get almost all users.)


Device-local, yes, but I thought it was only for apps under Apple's direct purview, i.e., iMessage.


No reason an XMPP client wouldn't be forced to include this as well. Reminder that iMessage and Signal are encrypted communications but are surely a target of this sort of lawmaking.


With XMPP though you can use a smaller FOSS client on a desktop PC. iMessage and Signal force you to use a recent closed source client on a phone.


If the person you are chatting with doesn't follow suit, you'll still be analyzed, just on their end instead.


Try Molly for Signal on Android.


I probably won't, I don't have a signal account


That's one of them, yes. I've not had the time to read the whole thing today, and probably won't have time, so I'm trying to encourage people to dig through it a bit themselves and get a feel for just how "But the Children!" it is, as a reason to violate every bit of privacy they think they can get away with.


You have it here on page 7:

"...Thereby, the proposed Regulation limits the interference with the right to personal data protection of users and their right to confidentiality of communications, to what is strictly necessary for the purpose of ensuring the achievement of its objectives, that is, laying down harmonised rules for effectively preventing and combating online child sexual abuse in the internal market..."

For all intents and purposes, today, the 11th of May, is the day the EU tried to remove the right to privacy in communications in the name of protecting children. Of the hundreds of measures and police actions they could take, they seem to think this one is the most important.


> That's a very real path to "You have a photo of your child in the bathtub and the police smash down your door."

There's also the simple fact that the reporting of CSAM involves someone seeing CSAM. You see it and then you report it so it gets removed. Does the copy in your browser cache incriminate you? I can also imagine someone deciding they need to save it to forward to the police, whether or not that's useful.


Why do you like that?

They can focus on ONE justification that is "acceptable". But one justification is all they need; they don't need more than one for total access.


Just the brazen "Yeah, privacy and stuff are important, But the Children!" phrasing of it. They're not even trying to pretend it's anything but an excuse anymore.


This is no different from the outrage with Apple and client-side scanning of your device.

Yes, we should absolutely do everything we can to protect the children. But losing our privacy is not and can never be an acceptable course of action.

There is a frighteningly short distance from giving up privacy due to CSAM, to a full-blown surveillance state.


Ironically, this law will put children's futures at enormous risk.


> You have a photo of your child in the bathtub and the police smash down your door.

Reminder: Europe is not US.

Nobody can smash your door here without proper authorization from a court.

And even then, they would knock.


But the OP's sentiment still stands: reputational damage, stress, potential job loss, etc., just because you wanted to take a picture of your daughter or son playing with a rubber duck in the bathtub to send to their grandparents.

I mean, my parents still have those pictures in photo albums, and they have been shown multiple times to family friends or relatives; nobody has, obviously, objected. But there was no third party with zero context watching over their shoulders deciding whether this is child porn or not. (Plus I think I look adorable with my duck shampoo, which I recently found in the basement when doing a cleanup <3.) And I very much like those pictures, as they are interesting moments of my life that I don't have memories of, but are ME.

But to extend this: maybe my neighbours think that is super wrong, maybe my employer does. In its current form the regulation allows for a lot of private data spillage to other people.

Even if it's just a wrong flag by the algo, someone gets to see me or my children naked. WTF, I'd argue that's abuse.


> But the OP's sentiment still stands. Reputational damage, stress, potential job loss

This has never happened though!

Can you point me to a case in EU where a parent took a photo of their children and got in trouble?

So it's kinda like saying: but OP is right, we should all live underground, because aliens could fire lasers on us!


> Can you point me to a case ... took

I think the idea is about the risk posed by automation embedded in inadequate workflows.

Which, incidentally, is already a reality in areas you yourself mentioned. And similarly, there have been real cases of «reputational damage, stress, potential job loss» in other areas, where inadequate workflows, sometimes compounded by imperfect automation, have caused trouble for citizens.


> Nobody can smash your door here without proper authorization from a court.

This is an urban legend. My country, Germany, often prides itself on having protections here, but that is a lie. In reality the courts are overwhelmed, so they basically sign any request the executive brings in. We had that for a Twitter comment that hit the news a few months ago, but we also had it for people buying flowerpots online because they were suspected of growing weed. No joke. Surveillance is again very strong here.

It is true, though, that they mostly at least knock, or they wait until you are not at home. But the barriers are very low for a supposedly enlightened society. And recourse against illegal searches is very limited as well; some searches end with quite a bit of property damage.

Some geniuses thought it would be a prudent idea to evaluate the performance of the police by the cases they close, so they really have to go looking for crime in every niche. State incompetence of the highest order in the highest positions, and people have very little protection against it. And if at some point there isn't enough crime...

They tend not to even say sorry if their search was wrong; neither the executive nor any other government entity does. The police are the least guilty, although the system relies on them only requesting reasonable searches.


> Surveillance is again very strong here.

Now imagine elsewhere.

Anyway: this has nothing to do with what I wrote, which is not an urban legend at all!

Doors have rarely (never AFAIK in my country) been smashed by the police, not even in those cases where there were actual terrorists inside the building.

Because our police forces are not trained to be war soldiers and are not armed to the teeth.


> elsewhere

Have you noticed the tendency scattered around the world to freeze citizens' financial assets before trials?


Curious. I'd much rather these resources were spent on combating offline child sexual abuse.



