
A very sweet hack, but I think the concern was based on the example image provided in the link posted. While the face is blurred, there's still a lot of information you can glean about the person: their haircut, their neck, the clothes they're wearing, etc. -- so I'm guessing the threat vector here is that if you also have a broader set of pictures from the same demonstration, you may be able to automatically identify who the blurred person is.

Blurring is better than nothing, but when it comes to avoiding being traced, the best picture is the one that was never taken.




Let's be real for two seconds here: this is pure nonsense. No court of law would do anything with "hey, we arrested that guy because he has two eyes, a mouth, and the same t-shirt as that other guy who was protesting yesterday". If it comes to that, you wouldn't even need a picture of blurred faces; just arrest whoever you want and provide forged evidence (or none), because it amounts to exactly the same thing.

And even then, law enforcement is already filming protesters (CCTV plus aerial footage) and tracking their phones; the last thing you have to worry about is a fully blurred face that no amount of computing power would be able to process or match back to you.


Picture A of an individual, unblurred, protesting peacefully.

Picture B of a blurred individual from later on in the same protest, wearing the exact same clothes, committing questionable acts, is circumstantially incriminating.


bikeshedding? on hacker news? no way


You're overthinking it. Police already have their own camera people doing video surveillance in addition to CCTV and other surveillance tools. The sort of forensic analysis you mention is of course possible and is sometimes engaged in, but obscuring all such information would defeat the purpose of photojournalism altogether.


Yes. Someone who has access to many photos of the same set of people might well be able to identify people in one photo, even though their faces are blurred in that particular one.

I'm not sure whether the large number of photos nowadays is a net negative, though. That's also what finally stopped Derek Chauvin.


It shouldn't blur; it should be a black box.

You could definitely take Signal's blur code, run it over the set of test images, and find which output matches the target image most closely.
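
Roughly something like the sketch below (not Signal's actual algorithm; I'm assuming a plain Gaussian blur as a stand-in, and candidate face crops that are already aligned and cropped the same way as the blurred target):

    # Assumptions: Signal's blur is approximated by a plain Gaussian blur,
    # and each candidate face crop is already aligned/cropped like the target.
    import numpy as np
    from PIL import Image, ImageFilter

    def blur_like_target(img, radius=12.0):
        # Apply the assumed blur and return a float array for comparison.
        return np.asarray(img.filter(ImageFilter.GaussianBlur(radius)),
                          dtype=np.float64)

    def closest_candidate(target_blurred_path, candidate_paths):
        # Return the candidate whose blurred version best matches the target.
        target = np.asarray(Image.open(target_blurred_path).convert("RGB"),
                            dtype=np.float64)
        best_path, best_err = None, float("inf")
        for path in candidate_paths:
            cand = Image.open(path).convert("RGB").resize(
                (target.shape[1], target.shape[0]))
            err = np.mean((blur_like_target(cand) - target) ** 2)  # MSE
            if err < best_err:
                best_path, best_err = path, err
        return best_path

Whether that actually converges on the right person is exactly the open question downthread, but that's the shape of the attack.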


What set of test images?

https://www.androidpolice.com/wp-content/uploads/2020/06/04/... is blurred by Signal. Suppose that you have all the photos that have been posted to Facebook, that both of those women are on Facebook, and lastly that you have enough resources to run all of those through Signal's code. How would you match those other photos to the blurred part of this one?


Paging clearview.ai's enterprise sales department. Call for you on line 7...


Not just any black spot, either: a black spot of random size, larger than what you want to redact. That way you avoid leaking the size of the thing being redacted, which can sometimes provide enough information to determine plausible contents: http://blog.nuclearsecrecy.com/2014/07/11/smeared-richard-fe...
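
Something like this quick Pillow sketch, where the padding range is an arbitrary assumption and the face box would come from whatever detector you're using:

    # Quick sketch with Pillow; the padding range (10-60 px) is made up and
    # would need tuning relative to the image size.
    import random
    from PIL import Image, ImageDraw

    def redact_box(img, box):
        # box is (left, top, right, bottom) around the thing to hide.
        # Pad each side by a random amount so the final black rectangle
        # doesn't reveal the size of what's underneath it.
        left, top, right, bottom = box
        pad = lambda: random.randint(10, 60)
        out = img.copy()
        ImageDraw.Draw(out).rectangle(
            (max(0, left - pad()), max(0, top - pad()),
             min(out.width, right + pad()), min(out.height, bottom + pad())),
            fill="black")
        return out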


OK, but just to be clear, we're redacting faces here. A face's size doesn't carry much meaningful information other than an exceptionally rough indication of age/development.


The examples on the Signal website give you hair color, hair style, likely race, and the shape of the top of the protesters' ears. While it's not definitive, given that a fuller redaction is easy and has no disadvantages, I don't see why someone shouldn't try.



