
What I came to realize on a broader scale (1) is that none of these automated systems provide any disclosure or transparency, neither for true nor for false positives.

In computer security we are expected to practice responsible disclosure when the privacy of users was (potentially) violated. When law enforcement has a search warrant for my place, I can at least tell that an intrusion happened, and in many countries I can request to see the warrant.

Here, a false positive never gets any of the "Five Ws" answered: Who used this system (which government agency / company)? Why was I (mistakenly) flagged? Which particular piece of my data triggered the system? When did the system phone home? What is the intended purpose of this system?

And most importantly: What parts of my privacy were violated? Who had access to those, and how was the infringement of my rights remedied?

We have hardware fuses, trust zones and so on. It must be possible to build an auditable, tamper-proof mechanism that triggers whenever such a system calls home and then discloses the "breach" to the user after X days.
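To make that concrete, here is a minimal sketch (all names, record fields and the disclosure delay are made up, not any vendor's actual API) of what such a log could look like: every phone-home event is appended as a hash-chained record, and records are disclosed to the user once a fixed delay has passed. A real system would obviously need the log to live in tamper-resistant hardware, with the hashes anchored somewhere the vendor can't rewrite.

    # Sketch only -- AuditLog, DISCLOSURE_DELAY_DAYS and the record fields
    # are hypothetical names for illustration.
    import hashlib, json, time

    DISCLOSURE_DELAY_DAYS = 30  # the "X days" above, arbitrary here

    class AuditLog:
        def __init__(self):
            self.records, self.prev_hash = [], "0" * 64

        def record_phone_home(self, agency, reason, data_ref):
            # One record per report, answering the "Five Ws" above.
            rec = {
                "timestamp": time.time(),    # when did it phone home?
                "agency": agency,            # who received the report?
                "reason": reason,            # why was the content flagged?
                "data_ref": data_ref,        # which piece of data triggered it?
                "prev_hash": self.prev_hash, # chain link for tamper evidence
            }
            rec["hash"] = hashlib.sha256(
                json.dumps(rec, sort_keys=True).encode()).hexdigest()
            self.prev_hash = rec["hash"]
            self.records.append(rec)
            return rec

        def disclosable(self, now=None):
            # Records old enough that the disclosure delay has elapsed.
            now = time.time() if now is None else now
            cutoff = DISCLOSURE_DELAY_DAYS * 86400
            return [r for r in self.records if now - r["timestamp"] >= cutoff]

        def verify_chain(self):
            # Recompute every hash and link to detect tampering.
            prev = "0" * 64
            for r in self.records:
                body = {k: v for k, v in r.items() if k != "hash"}
                digest = hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest()
                if r["prev_hash"] != prev or digest != r["hash"]:
                    return False
                prev = r["hash"]
            return True

The point isn't this particular code, it's that the log is append-only and checkable by the user, not only by the vendor.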

If the public had access to that information, plus transparency reports about the success rate vs. the false positive rate, then society could evaluate whether the system is adequate. I'd wager many would question the "save the children" argument if they regularly heard about false positives and egregious access to private photos by law enforcement.
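Those two rates alone are easy to misread because of the base rate. With made-up numbers (every figure below is invented purely for illustration), even a tiny false positive rate can mean most flagged users are innocent:

    # All numbers invented for illustration only.
    prevalence = 1e-5  # assumed fraction of users actually sharing CSAM
    fpr = 1e-4         # assumed chance an innocent user gets flagged
    tpr = 0.95         # assumed chance an offending user gets flagged

    p_flag = tpr * prevalence + fpr * (1 - prevalence)
    precision = tpr * prevalence / p_flag
    print(f"Flagged users who are actually guilty: {precision:.1%}")  # ~8.7%

Under those assumptions roughly nine out of ten flagged accounts would be innocent, which is exactly the kind of figure a transparency report should surface.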

Maybe there are barely any false positives; in that case I'd personally support CSAM scanning. As it stands, we won't know.

(1) Other, less invasive systems (YouTube strikes, many social media bans) share the concerns laid out above.



