How do you prevent photos of your kids from ending up in such a database? Perhaps you mailed grandma a photo of your nude two-year-old at bath time during a Covid lockdown, you know, normal parenting stuff. Grandma posted it on Facebook (accidentally, naively, doesn't matter) or someone gained access to it, and it ended up on a seedy image board that caters to that niche. A year later it's part of the big black-box database of hashes, and then, ping: a flag lights up next to your name on Apple's dashboard and local law enforcement is notified.
I don't know how most people feel about this, but even a false positive would seem hazardous. Does that put you on some permanent watch list in the lowest tier? How can you even know? And besides, it's all automated.
We could of course massively shift society towards a no-photo/video policy for our kids (photos perhaps only kept on a non-internet-connected camera and hard drive), and tell grandma to just deal with it (come back after the lockdown, granny, if you survive). Some people do.
And don't think that normal family photos won't get classified as CEI. What is titillating to one person is another's harmless family photo.
This implies that all the concerns about possible future uses of this technology are unreasonable slippery-slope concerns. But we're on something like our fourth or fifth trip down this slope, and we've slipped every previous time, so it's not unreasonable to be concerned.
Previous times down this slope:
* UK internet filters for child porn -> opt-out filters for regular porn (ISPs now have a list of porn viewers) + mandatory filters for copyright infringement
* Google Drive filters for illegal content -> Google Drive filters for copyrighted content
* iCloud data is totally protected, so it's OK to require an Apple account -> iCloud in China run out of government-controlled data centers without encryption
* Protection against malware is important, so Windows Defender is mandatory unless you have a third-party program -> Windows Defender deletes DeCSS
* Need to protect users against malware, so mobile devices are set up as walled gardens -> Providers use these walled gardens to prevent business models that are bad for them
The first slippery slope for this was when people made tools to do deep packet inspection and find copyrighted content during the Napster era.
That was the first sin of the internet era.
Discussing slippery slopes does nothing.
Edit: It is frustrating to see where we are going. However, conversations on HN tend to focus on the false positives and not much on the actual villains who are doing unspeakable things.
Perhaps people need to hear stories from case workers or people actually dealing with the other side of the coin to better make a call on where the line should be drawn.
I don't think anyone here is trying to detract from the horrors and the crimes.
My problem is that these lists have already been used for retaliation against valid criticism. Scope creep is real, and in the case of this particular list, adding an item is an explicit, global accusation that the creator and/or distributor is a child molester.
My statement was meant to clarify incorrect statements about the issue. Someone was worried that incorrect DoBs entered by jilted lovers would get people flagged.
I just outlined what the actual process is. I feel that discussing the actual problem leads to better solutions and discussions.
Since this topic attracts strong viewpoints, I was as brief as possible to reduce any potential target area, and even left a line supporting the slippery slope argument.
If this was not conveyed, please let me know.
Matter of fact, your response pointing out the false-positive issues is a win in my book! It's better than what the parent discussion was about.
But what I am truly perplexed by is when you talk about "first they came..." and "we're not biting".
Who is "we", and why wouldn't YOU agree with a position supporting a slippery-slope argument?
You seem to disagree with the actions being telegraphed by Apple.
This isn't a question about condoning child abuse. It's a question of doing probabilistic detection of someone possessing "objectionable content". Not sharing, not storing: possessing. This system, once deployed, will be used for other purposes. Just look at the history of every other technology supposedly built to combat CP. They have all expanded in scope.
Trying to frame the question along the usual slippery-slope lines implicitly sets up anyone criticizing the mechanism as a supporter of fundamentally objectionable content.
Sure, and I have no objection to what you are saying.
This thread, however, was where I was making a separate point that helps this discussion by removing confusion and assumptions about how Apple's proposal works.
Sorry about the really long delay in answering; the week got the better of me.
Your original post posed a reasonable question, but I felt the details were somewhat muddled. The reason I reacted and answered was that I have seen this style of questioning elsewhere before. The way you finished off was actually a little alarming: it'd be really easy to drop in with a follow-up that would make the other person look like they were trying to defend the indefensible.
With my original reply I attempted to defuse that potential. The issue is incendiary enough without people willingly misunderstanding each other.
There’s a repository built from seized child porn.
Those pictures and videos have hashes. Apple wants to match against those hashes.
That’s it.
That’s it for now.
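For anyone trying to picture what "match against those hashes" means mechanically, here's a minimal sketch in Python. To be clear, this is an illustration, not Apple's implementation: the hash set, file paths, and distance threshold below are hypothetical placeholders, and Apple's actual system reportedly uses a perceptual hash (NeuralHash) combined with an on-device private set intersection protocol, not a plain digest lookup.

```python
# Minimal sketch of hash-set matching against a hypothetical database of
# known-image digests. NOT Apple's implementation: Apple reportedly uses a
# perceptual hash ("NeuralHash") plus private set intersection, not SHA-256.

import hashlib
from pathlib import Path

# Hypothetical hex digests of known illegal images (placeholder value).
KNOWN_HASHES: set[str] = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def file_digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's raw bytes."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_flagged(path: Path) -> bool:
    """True if the file's digest appears in the known-hash set (exact match)."""
    return file_digest(path) in KNOWN_HASHES

# A perceptual hash relaxes "identical bytes" to "visually similar": two
# images match when their hash values differ in at most a few bits.
MATCH_THRESHOLD = 4  # hypothetical bit-distance cutoff

def perceptual_match(h1: int, h2: int) -> bool:
    """True if two perceptual-hash values are within the distance cutoff."""
    return bin(h1 ^ h2).count("1") <= MATCH_THRESHOLD
```

The distinction matters for the false-positive debate upthread: an exact digest like SHA-256 only flags byte-identical copies, so a resized or re-encoded image escapes it. A perceptual hash closes that gap by matching visually similar images, and that fuzziness is exactly where an innocent photo can collide with the database.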