
So what stops someone from uploading an image that isn't NCII as an attempt to censor an image or get whoever shared the image in trouble? I imagine there needs to be human oversight in some way to stop abuse. But does that mean that in order to stop the spread of your personal NCII, you need to proactively share those exact same photos you are worried about getting shared?



Upload your photo here to check if it's on the Internet, for free!


You could upload part of the picture (with the nudity, or everything personally identifying, cropped out) and see if it matches part of any other picture online.


This frequently happens with GDPR data/deletion requests. They will ask you to verify that all the shadow data they collected about you is indeed correct before giving you a summary or promising to delete it.


PayPal won't delete an account I didn't knowingly consent to being created (I'm sure there were some hidden terms hiding behind a dark UX pattern) unless I prove my identity with information that isn't already recorded on the account. At least, not visible to me.

All that's there is a name and a bank account link. I told them 'I am Oliver Ford' and that I'm happy to prove it with a shared secret sent as the reference on a penny transfer to that linked account; but no - I must provide a copy of my passport and proof of address (which they don't know, and which I can edit on the site anyway), obviously.


Google does the same with phone numbers: they ask, as verification, for a number you never gave them in the first place. It makes no sense.

Imo, if one can log into an account, they should have the right to request a deletion/download of all associated data. Considering PII could very well be involved, self-service would be even better.


https://stopncii.org/how-it-works/ explains that "Your content will not be uploaded, it will remain on your device", and "Participating companies will look for matches to the hash and remove any matches within their system(s) if it violates their intimate image abuse policy."

In principle, both promises can be kept, with humans checking the matches (if any) against their rules. (In practice, I have no idea how it will work out.)


It's very likely that the image can be reconstructed from perceptual hashes. Perceptual hashes make two promises, too:

* that the original image can't be inferred from the hash, and

* that similar images should get similar (if not the same) hashes

and these promises are in serious conflict, given what gradient-based methods have achieved over the last 10 years.
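
To make the tension concrete, here's a minimal average-hash (aHash) sketch in Python with Pillow. It's an illustrative stand-in, not the hash StopNCII actually uses (reportedly PDQ for photos): small edits flip few bits, which is exactly the "similar inputs, similar hashes" property that gradient-based inversion attacks exploit.

    # aHash: downscale, grayscale, threshold each pixel against the mean.
    from PIL import Image

    def average_hash(path: str, size: int = 8) -> int:
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming(a: int, b: int) -> int:
        # number of differing bits; the usual similarity metric
        return bin(a ^ b).count("1")

    # h1 = average_hash("original.jpg")
    # h2 = average_hash("recompressed.jpg")
    # print(hamming(h1, h2))  # small for near-duplicates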


Yup, you got it: the content itself remains only on the device, the hashing is done in-browser, and the only part of the original content that makes it into the system is the hashes. Once a platform that is part of the program downloads those hashes and matches content, some amount of verification has to be applied. It's on the participating companies themselves to review the content that matches the hash and see if it actually violates their policies on NCII.
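
A hypothetical sketch of that platform-side flow (names and threshold are my assumptions, not StopNCII's actual API): compare the hash of newly uploaded content against the downloaded case hashes, and queue matches for human review rather than removing anything automatically.

    from dataclasses import dataclass

    HAMMING_THRESHOLD = 10  # assumed value; tuned per hash type in practice

    @dataclass
    class Case:
        case_id: str
        hash_bits: int  # perceptual hash received from the program

    def hamming(a: int, b: int) -> int:
        return bin(a ^ b).count("1")

    def matches_for_upload(upload_hash: int, cases: list[Case]) -> list[Case]:
        # anything within the distance threshold goes to human review,
        # to be judged against the platform's NCII policy
        return [c for c in cases
                if hamming(upload_hash, c.hash_bits) <= HAMMING_THRESHOLD]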


I thought the same thing. I wonder if there could be a way to have you submit a posed picture (to verify that it was taken live) and then use facial recognition to verify your face is in the content. It would probably be hard to do all that on device. It would also be a lot of hurdles to place in front of users, the vast majority of whom are just legitimate victims.

But if they don’t do something like that, I could see submitting all the Pepe the Frog memes in an attempt to stop the proliferation of material that FB probably should have been stopping anyway.


It’s a real challenge - you can do more pre-processing on a submitter’s device to try to avoid mistakes or malicious use of the system. The big limitations are:

1. How much processing time you can put on what might be a mobile phone - it’s using MD5 for videos (see the sketch after this list), but StopNCII also tried other things that were just too slow for phones.

2. StopNCII chose to do verification on platforms after a match, rather than up front, to prioritize the privacy of the user. This decision came after a lot of feedback from victims and the experts who represent them. There are all sorts of techniques you could use to process server side (thus saving the person’s phone’s CPU), but the system deliberately doesn’t retain enough context to make them viable.
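
For (1), exact hashes really are cheap: here's a minimal sketch of streaming MD5 over a video file in Python, chunked so even a large file stays light on a phone-class CPU. (StopNCII's in-browser version is JavaScript; this is just for illustration.)

    import hashlib

    def md5_of_file(path: str, chunk_size: int = 1 << 20) -> str:
        # read 1 MiB at a time so memory use stays flat
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()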


>human oversight

Not at Facebook; this will either be abused or useless.


The Meta-Zuck will verify: https://youtu.be/-XWedbTIcV4


Facebook doesn't allow nudity anyway. The system could just serve to prioritize complaints and they wouldn't need to verify the identity of the person in the photo.


> I imagine there needs to be human oversight in some way to stop abuse

That doesn't seem to be working so well for preventing misinformation or threats of violence.


Or harmful ads for scams or viruses. Let's face it, Facebook never actually cared.



