Based on my reading, the first system is interesting in that the threshold is also cryptographically enforced. Each possible hit contributes part of a key, and until a complete key can be assembled, none of the images can be reviewed.
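
To make that threshold idea concrete, here is a minimal sketch of t-of-n threshold secret sharing (Shamir-style) in Swift. To be clear, this is not Apple's actual construction, and the prime field, the 5-of-10 parameters, and every name below are illustrative assumptions; it only shows why fewer than the threshold number of hits reveals nothing about the key:

  // A minimal sketch of t-of-n threshold secret sharing (Shamir-style), NOT
  // Apple's actual construction. The prime, threshold, and share count are
  // illustrative; the point is only that fewer than `threshold` shares
  // reveal nothing about the secret.

  let p: Int64 = 2_147_483_647  // small prime field so Int64 math never overflows

  func mod(_ a: Int64) -> Int64 { ((a % p) + p) % p }

  // Modular exponentiation; inverses via Fermat's little theorem (a^(p-2) mod p).
  func powmod(_ base: Int64, _ exp: Int64) -> Int64 {
      var result: Int64 = 1, b = mod(base), e = exp
      while e > 0 {
          if e & 1 == 1 { result = mod(result * b) }
          b = mod(b * b)
          e >>= 1
      }
      return result
  }

  func inv(_ a: Int64) -> Int64 { powmod(a, p - 2) }

  // Split `secret` into n shares; any `threshold` of them reconstruct it.
  func split(secret: Int64, n: Int, threshold: Int) -> [(x: Int64, y: Int64)] {
      // Random polynomial of degree threshold-1 whose constant term is the secret.
      var coeffs = [mod(secret)]
      for _ in 1..<threshold { coeffs.append(Int64.random(in: 0..<p)) }
      return (1...n).map { i -> (x: Int64, y: Int64) in
          let x = Int64(i)
          var y: Int64 = 0, xPow: Int64 = 1
          for c in coeffs {
              y = mod(y + mod(c * xPow))
              xPow = mod(xPow * x)
          }
          return (x: x, y: y)
      }
  }

  // Lagrange interpolation at x = 0 recovers the constant term (the secret).
  func reconstruct(_ shares: [(x: Int64, y: Int64)]) -> Int64 {
      var secret: Int64 = 0
      for (i, si) in shares.enumerated() {
          var num: Int64 = 1, den: Int64 = 1
          for (j, sj) in shares.enumerated() where i != j {
              num = mod(num * mod(-sj.x))
              den = mod(den * mod(si.x - sj.x))
          }
          secret = mod(secret + mod(si.y * mod(num * inv(den))))
      }
      return secret
  }

  // Example: a hypothetical 5-of-10 scheme. Any 4 shares are useless; any 5 recover the key.
  let key: Int64 = 123_456_789
  let shares = split(secret: key, n: 10, threshold: 5)
  print(reconstruct(Array(shares.prefix(5))) == key)   // true
  print(reconstruct(Array(shares.prefix(4))) == key)   // false (with overwhelming probability)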

It should also be noted that the second system sends no images to Apple, nor are any images reviewed by Apple employees in any way. It's just using the same on-device ML that finds dog pics, for example.
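
For a rough idea of what "the same on-device ML that finds dog pics" means, here is a tiny sketch using Apple's public Vision framework (VNClassifyImageRequest). This is only an illustration of on-device image classification in general, not the actual model or code path the Messages feature uses; the function name is made up, and nothing in it sends image data off the device:

  import Vision
  import CoreGraphics

  // Classify an image entirely on-device with Vision's built-in classifier.
  // Illustrative only: the Messages feature presumably ships its own model,
  // but the principle (classification happens locally) is the same.
  func classifyOnDevice(_ image: CGImage) throws -> [(label: String, confidence: Float)] {
      let request = VNClassifyImageRequest()
      let handler = VNImageRequestHandler(cgImage: image, options: [:])
      try handler.perform([request])                    // runs locally, no network
      let observations = request.results as? [VNClassificationObservation] ?? []
      return observations.map { (label: $0.identifier, confidence: $0.confidence) }  // e.g. ("dog", 0.97)
  }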

I think Apple PR screwed up here by announcing all these features at once on one page and not having the foresight to see that they would be conflated. It also didn't help that a security researcher leaked it the night before without many of the details (iCloud-only, for example).

Context should also be included. CSAM scanning has been happening for years on most (all?) photo hosts, and the need for it is probably one of the blockers to full E2E encryption for iCloud Photos (I so hope that's coming now!).

Finally, nothing has really changed. Either iOS users trust that Apple will use the system only as they have said, or they don't. Apple controls the OS, and all the 'what if' scenarios existed before and after this new feature. As described, it is the most privacy-preserving implementation of CSAM detection to date.




> It should also be noted that the second system sends no images to Apple, nor are any images reviewed by Apple employees in any way.

You are directly contradicting Apple's public statement:

> Apple manually reviews all reports

( https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni... )

Moreover, it was found in US v. Ackerman that NCMEC's inspection of a user's private email was an unlawful violation of their Fourth Amendment rights, specifically because AOL passed along the email based on a hit against the NCMEC database without inspecting it. Had AOL inspected it, then the repetition of the inspection by NCMEC would simply have been a repetition of the search performed by AOL, which was permitted under their contract with the customer.

Not only does Apple state that they will "review" the images, they must do so in order to deprive the user of their Fourth Amendment protection against unlawful search by the government.

> and is probably one of the blockers to full E2E encryption for iCloud photos

Apple is free to provide encryption and they are choosing to not do so.


I was referring to the second system mentioned by the parent, NOT the CSAM system. Conflating these systems has been a huge issue with the random articles coming out. They are distinct.

From the GP:

> The second, independent system is a machine learning model which runs on-device to detect sexually explicit photos being received in Messages.app. This system is designed to be run on children's phones. The system will hide these explicit images by default, and can be configured to alert parents - that feature is designed for under-13s. This system doesn't use the CSAM hash list.

> Apple is free to provide encryption and they are choosing to not do so.

Sort of. They cited the FBI a few years ago when they put the brakes on adding E2EE to everything. It would also create a huge target for legislation: 'Apple is helping CP people - think of the children!' The new method of CSAM detection will let Apple add E2EE and fight off calls for legislation.


I fully agree that some people are conflating the iMessage nudity detection with the CSAM scanning, to the detriment of the public dialog. Though I think there are also problems with the iMessage feature, including the fact that minors are not completely devoid of civil rights relative to their parents, that the same mechanism could easily be retargeted against adults (with some dictatorships taking on the role of the parent account), and that doing so normalizes pervasive electronic surveillance for our children... I think it is a substantially different matter and one of less concern.

I don't think I agree on the "sort of"--

It is unambiguously the effective law in the US, established in court, that if a tech company's scanning is a product of government coercion, then the scanning cannot be performed without a warrant.

In Ackerman Google went out of their way to testify that they were not being coerced by the government, but were scanning out of their own commercial interest (to avoid a public loss of reputation for hosting child porn on their systems).

If Apple wants to defend their failure to act in their users' best interest, they need to do so directly by calling out the government's coercion. Otherwise, they ought to admit (as Google did) that no meaningful government coercion exists and that their actions at their users' expense are driven by their commercial interests.

There are two central possibilities: that the searching is coerced, in which case it is an unlawful violation of the users' Fourth Amendment rights, or that it is not coerced, in which case it is an activity that Apple is freely engaging in for the benefit of their commercial interests.

In either case-- facilitating an unlawful invasion of users' constitutional rights on behalf of the government, or voluntarily invading the privacy of their users for commercial benefit-- Apple is ethically in the wrong.

I don't doubt that this dichotomy, if widely recognized, would put Apple in a difficult position-- but that's a cost of doing business.


>I think Apple PR screwed up here by announcing all these features at once on one page and not having the foresight to see that they would be conflated. It also didn't help that a security researcher leaked it the night before without many of the details (iCloud-only, for example).

This sums up my entire feelings on the matter at this point.

It's become a bit of hysteria due to this.



