You can design a system where false positives are so rare that they are insignificant and can be properly handled. The only reason we don't is that we don't think it's that much of a problem.
Maybe you underestimate machine learning; if you check HN on any given day, it can do anything, probably even be the next president.
But flagging known images was the point of Apple's algo, for example. Still, everyone just went "forget abuse, my privacy is more important to me". Really, at this point techbros deserve any dumb law that lets the government read their chats.
> Maybe you underestimate machine learning; if you check HN on any given day [...]
Maybe I actually know something about it.
> But flagging known images was the point of Apple's algo, for example.
You whined at me a little while ago about how all this was about "abuse, not just porn". Yet you're using that to justify a system that, as you describe it, could only find old, known images that have been circulating around the Internet and made it into a database. That means images of past abuse that can no longer be prevented, produced by third parties who would not be caught by this.
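To be concrete about what that means mechanically, here's a simplified sketch of known-image matching using an ordinary perceptual hash (this is generic imagehash-style matching for illustration, not Apple's actual NeuralHash, and the file names and threshold are made up):

```python
# Simplified illustration of known-image matching, NOT Apple's NeuralHash.
# Needs: pip install pillow imagehash
import imagehash
from PIL import Image

# Hypothetical database of hashes of images that have already circulated
# and been catalogued. Only these can ever be matched.
known_hashes = {imagehash.phash(Image.open(p))
                for p in ("known_image_1.png", "known_image_2.png")}

def is_flagged(path, max_hamming_distance=4):
    """Flag only near-duplicates of images already in the database.
    A newly produced image matches nothing, by construction."""
    h = imagehash.phash(Image.open(path))
    return any(h - k <= max_hamming_distance for k in known_hashes)
```

The database can only ever contain material that already exists and has already been found, which is the whole point.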
Pick a threat model, because the measures you defend don't address the threats you claim justify them.
... and if you start talking about "grooming" or "normalization" or other silly bullshit that hypothetically might have a third-order effect, but probably doesn't have any meaningful effect at all in real life, I'm not going to bother to answer it.
> Still, everyone just went "forget abuse, my privacy is more important to me".
Everybody's privacy is important to me. Including the privacy of the children whom you want to have grow up in an ever-expanding panopticon. Because this isn't just about stupid bullshit like your embarrassing disease. It's about people ending up in prison. It's about building infrastructure that can trivially and secretly be repurposed to hurt people, including children, in serious, life-changing, and potentially life-ending ways.
If you do, then you'd agree that with the right setup ML can do this with very high precision. We're talking about a highly customized system trained for exactly this one purpose, not some chatbot.
> You whined at me a little while ago
You're the one whining here, buddy. Remember, this is about a law about to be forced on you that you find inconvenient ;) I find it suboptimal, but in some sense it might be better than nothing.
> this was about "abuse, not just porn".
These are related. If you have this material, you obtained it from somewhere, even if you didn't make it yourself. A bit of police work may lead from there to some dark-web exchange marketplace and to the actual producers.
That said, yes, there's a difference. The EU law being discussed is probably better suited to countering real-time abuse than Apple's algo, for example.
> Because this isn't just about stupid bullshit like your embarrassing disease. It's about people ending up in prison.
I actually agree with these two sentences, but not in the way you probably intended.
> The Stasi were not a child-friendly institution.
I was waiting for Hitler to get invoked in a discussion about using tech to combat and prevent child abuse facilitated by tech, and I was not disappointed.
> If you do, then you'd agree that with the right setup ML can do this with very high precision.
No, it cannot.
Not with a model that you can run on a phone, no matter how specialized it is. Serious ML takes actual compute power (which translates to actual electricity).
Not with a model that you can train on the number of positive examples that are actually available. Current ML is massively hungry for training data.
Not with any model that's out there on people's phones and therefore subject to white-box attack. Adversarial examples are not a solved problem, especially not in a white-box setting.
Probably not with any model at all. You would need something like a 0.0000001 false positive rate, and error rates fall only asymptotically with model size and training data; you never get anywhere near that.
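Some back-of-the-envelope arithmetic shows why. All three inputs below (volume, prevalence, sensitivity) are assumptions I'm making up for illustration, not measurements:

```python
# Why content scanning at scale needs an absurdly low false positive rate.
# Every number here is an illustrative assumption, not a measurement.
images_per_day = 5e9     # assumed photos scanned per day across a large user base
prevalence     = 1e-6    # assumed fraction of scanned images that actually match
sensitivity    = 0.99    # assumed true positive rate of the detector

for fpr in (1e-4, 1e-6, 1e-7):
    false_flags = images_per_day * (1 - prevalence) * fpr  # innocent flags per day
    true_flags  = images_per_day * prevalence * sensitivity
    precision   = true_flags / (true_flags + false_flags)
    print(f"FPR {fpr:.0e}: ~{false_flags:,.0f} false flags/day, precision {precision:.1%}")
```

Even at 0.0000001 (1e-7), these assumptions still give you roughly 500 innocent flags every single day, and at the error rates classifiers actually achieve, the flags would be overwhelmingly false positives.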
> that you find inconvenient ;)
The last refuge of the fanatic is to call anybody who raises inconvenient objections a pedophile.
> I was waiting for Hitler to get invoked in a discussion about using tech to combat and prevent child abuse facilitated by tech, and I was not disappointed.
The Stasi did not have anything to do with Hitler, and did not exist at all until after Hitler was dead. They were not part of the Nazi apparatus. Your ignorance of history helps to explain your willingness to give dangerous powers to untrustworthy institutions, though.
You would not need a perfect model, since there will have to be a human and due process in the loop.
> The last refuge of the fanatic
Between the two of us, only one has maximalist and absolutist views.
> call anybody who raises inconvenient objections
The actual objections have been dealt with. Properly implemented (like Apple's algo), it's not spying; some inconvenience and detriment to the individual is a fact of life in any society, etc. Now we're just trading personal attacks.
> a pedophile
Putting words in my mouth.
> The Stasi did not have anything to do with Hitler, and did not exist at all until after Hitler was dead
Thanks for correcting me. So basically Stalin then. Wow, such difference.
I don't know where to start. It's not only that you assume I'm implying you are a pedophile; you also seem to think that would be a derogatory word or something. Pedophilia is not a crime (neither is being gay); sexual abuse is.
You're reading too much into this. The smiley is there because you accused me of whining and I did the same to you.