
No, they had backlash against using AI on devices they don’t own to report said devices to police for having illegal files on them. There was no technical measure ensuring that devices were searched only for CSAM; the system can be used to search for any type of image chosen by Apple or the state. (Also, with the advent of GenAI, CSAM has been redefined to include generated imagery that does not contain any of {children, sex, abuse}.)

That’s a very very different issue.

I support big tech using AI models running on their own servers to detect CSAM on their own servers.

I do not support big tech searching devices they do not own in violation of the wishes of the owners of those devices, simply because the police would prefer it that way.

It is especially telling that iCloud Photos is not end-to-end encrypted (and uploads plaintext file content hashes even when the optional e2ee is enabled), so Apple can and does already scan 99.99%+ of the photos on everyone’s iPhones server-side.
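
To make "scan server-side" concrete, here is a rough sketch of what hash-matching on plaintext uploads can look like. The hash list and function names are hypothetical, and real deployments use perceptual hashes (PhotoDNA-style) that survive resizing and re-encoding; the exact SHA-256 match below is only to keep the sketch short:

    import hashlib
    from pathlib import Path

    # Hypothetical known-hash list (placeholder value, for illustration only).
    KNOWN_HASHES = {hashlib.sha256(b"example known image bytes").hexdigest()}

    def scan_uploaded_photo(path: Path) -> bool:
        """Return True if the uploaded plaintext photo matches the hash list."""
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        return digest in KNOWN_HASHES

    # Because the uploads are not e2ee, the provider holds the photo bytes
    # and can run a check like this on every photo it stores.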




> Also, with the advent of GenAI, CSAM has been redefined to include generated imagery that does not contain any of {children, sex, abuse}

It hasn’t been redefined. The legal definition of it in the UK, Canada, Australia, and New Zealand has included computer-generated imagery since at least the 1990s. The US Congress did the same thing in 1996, but the US Supreme Court ruled in the 2002 case of Ashcroft v. Free Speech Coalition that it violated the First Amendment. [0] This predates GenAI because even in the 1990s people saw where CGI was going and could foresee that this kind of thing would one day be possible.

Added to that: a lot of people misunderstand what that 2002 case held. SCOTUS case law establishes two distinct exceptions to the First Amendment – child pornography and obscenity. The first is easier to prosecute and more commonly prosecuted; the 2002 case held that "virtual child pornography" (made without the use of any actual children) does not fall into the scope of the child pornography exception – but it still falls into the scope of the obscenity exception. There is in fact a distinct federal crime for obscenity involving children as opposed to adults, 18 USC 1466A ("Obscene visual representations of the sexual abuse of children") [1] enacted in 2003 in response to this decision. Child obscenity is less commonly prosecuted, but in 2021 a Texas man was sentenced to 40 years in prison over it [2] – that wasn't for GenAI, that was for drawings and text, but if drawings fall into the legal category, obviously GenAI images will too. So actually it turns out that even in the US, GenAI materials can legally count as CSAM, if we define CSAM to include both child pornography and child obscenity – and this has been true since at least 2003, long before the GenAI era.

[0] https://en.wikipedia.org/wiki/Ashcroft_v._Free_Speech_Coalit...

[1] https://www.law.cornell.edu/uscode/text/18/1466A

[2] https://www.justice.gov/opa/pr/texas-man-sentenced-40-years-...


Thanks for the information. However I am unconvinced that SCOTUS got this right. I don’t think there should be a free speech exception for obscenity. If no other crime (like against a real child) is committed in creating the content, what makes it different from any other speech?


> However I am unconvinced that SCOTUS got this right. I don’t think there should be a free speech exception for obscenity

If you look at the question from an originalist viewpoint: did the legislators who drafted the First Amendment, and voted to propose and ratify it, understand it as an exceptionless absolute or as subject to reasonable exceptions? I think if you look at the writings of those legislators, the debates and speeches made in the process of its proposal and ratification, etc, it is clear that they saw it as subject to reasonable exceptions – and I think it is also clear that they saw obscenity as one of those reasonable exceptions, even though they no doubt would have disagreed about its precise scope. So, from an originalist viewpoint, having some kind of obscenity exception seems very constitutionally justifiable, although we can still debate how to draw it.

In fact, I think from an originalist viewpoint the obscenity exception is on firmer ground than the child pornography exception, since the former is arguably as old as the First Amendment itself, while the latter only goes back to the 1982 case of New York v. Ferber. Indeed, the child pornography exception, as a distinct exception, only exists because SCOTUS jurisprudence had narrowed the obscenity exception to the point that it was getting in the way of prosecuting child pornography as obscene – and rather than taking that as evidence that maybe they'd narrowed it a bit too far, SCOTUS decided to erect a separate exception instead. But, conceivably, SCOTUS in 1982 could have decided to draw the obscenity exception a bit more broadly, and a distinct child pornography exception would never have existed.

If one prefers living constitutionalism, the question is – has American society "evolved" to the point that the First Amendment's historical obscenity exception ought to be jettisoned entirely, as opposed to merely being read narrowly? Does the contemporary United States have a moral consensus that individuals should have the constitutional right to produce graphic depictions of child sexual abuse, for no purpose other than their own sexual arousal, provided that no identifiable children are harmed in its production? I take it that is your personal moral view, but I doubt the majority of American citizens presently agree – which suggests that completely removing the obscenity exception, even in the case of virtual CSAM material, cannot currently be justified on living constitutionalist grounds either.


My understanding was that the concern was false-positive (FP) risk. The hashes were computed on device, but the device would self-report to LEO if it detected a match.

People crafted images that registered as false positives for real images in the database. So apps like WhatsApp that auto-save images to the photo album could cause someone a big headache if a contact shared a perfectly legal image that happened to be an FP.
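
To make the false-positive mechanics concrete, here is a toy sketch (my own simplification, not NeuralHash): a perceptual hash reduces an image to a short bit string, and a "match" is any hash within a small Hamming distance of a database entry. The hash construction and threshold below are illustrative assumptions:

    # Toy perceptual hash over 8x8 grayscale grids (lists of ints 0-255).
    # Real systems are far more sophisticated, but the matching logic --
    # lossy hash plus distance threshold -- has the same shape.

    def average_hash(pixels):
        """64-bit hash: bit i is 1 if pixel i is above the image's mean."""
        flat = [p for row in pixels for p in row]
        mean = sum(flat) / len(flat)
        bits = 0
        for i, p in enumerate(flat):
            if p > mean:
                bits |= 1 << i
        return bits

    def hamming(a, b):
        return bin(a ^ b).count("1")

    def is_match(image_hash, database, threshold=4):
        """Flag the image if its hash is close enough to any database entry."""
        return any(hamming(image_hash, h) <= threshold for h in database)

    # An attacker only needs to reproduce the bit pattern, not the picture:
    # any grid with the same above/below-mean layout hashes identically, so a
    # visually unrelated image can land within the threshold and be flagged.

That is the scenario above: a contact sends a crafted but legal image, the messaging app auto-saves it to the photo library, and the device registers a match.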


Weird take. The point of on-device scanning is to enable E2EE while still mitigating CSAM.
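
For reference, that design has roughly this shape: match on device, upload only ciphertext, escalate only past a threshold. The sketch below is a heavy simplification of Apple's published protocol (which used private set intersection and threshold secret sharing so the server cannot see individual match bits); the names and the threshold value are illustrative, not the real parameters:

    from dataclasses import dataclass

    MATCH_THRESHOLD = 30  # illustrative value, not the real parameter

    @dataclass
    class Voucher:
        encrypted_photo: bytes  # ciphertext the server cannot decrypt
        matched: bool           # hidden cryptographically in the real design

    def client_upload(photo, on_device_db, phash, encrypt):
        """Device hashes the photo locally, then uploads only ciphertext."""
        return Voucher(encrypted_photo=encrypt(photo),
                       matched=phash(photo) in on_device_db)

    def server_review(vouchers):
        """Count matches for an account; escalate to human review only once
        the threshold is crossed. The server never sees photo plaintext."""
        return sum(v.matched for v in vouchers) >= MATCH_THRESHOLD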


No, the point of on-device scanning is to enable authoritarian government overreach via a backdoor while still being able to add “end to end encryption” to a list of product features for marketing purposes.

If Apple isn’t free to publish e2ee software for mass privacy without the government demanding they backdoor it for cops on threat of retaliation, then we don’t have first amendment rights in the USA.


I don't think the first amendment obligates companies to let you share kiddie porn via their services.


You misunderstand me. The issue is that Apple would, in theory, be retaliated against by the state if they were to publish non-backdoored e2ee software.

Apple does indeed in theory have a right to release whatever iOS features they like. In practice, they do not.

Everyone kind of tacitly acknowledged this, when it was generally agreed upon that Apple was doing the on-device scanning thing "so they can deploy e2ee". The quiet part is that if they didn't do the on-device scanning and released e2ee software without this backdoor (which would then thwart wiretaps), the FBI et al would make problems for them.


Why would Apple want to add E2EE to iCloud Photos without CSAM detection?


The same reason they made iMessage e2ee, which happened many years before CSAM detection was even a thing.

User privacy. Almost nobody trades in CSAM, but everyone deserves privacy.

Honestly, this isn’t about CSAM at all. It’s about government surveillance. If strong crypto e2ee is the hundreds-of-millions-of-citizens device default, and there is no backdoor, the feds will be upset with Apple.

This is why iCloud Backup (which is a backdoor in iMessage e2ee) is not e2ee by default and why Apple (and the state by extension) can read all of the iMessages.


I didn't ask why they would want E2EE. I asked why they would want E2EE without CSAM detection when they literally developed a method to have both. It's entirely reasonable to want privacy for your users AND not want CSAM on your servers.

> Honestly, this isn't about CSAM at all.

It literally is the only thing the technology is any good for.


> they don’t own to report said devices to police for having illegal files on them

They do this today. https://www.apple.com/child-safety/pdf/Expanded_Protections_...

Every photo provider is required to report CSAM violations.




