
I'm guessing a whole line of business will spring up around software like this: some sort of validation engine that verifies whether a photo is real or shopped.

I understand that you can tell in some of the examples in the video, but as the technology gets better, it may become really hard to distinguish real from fake.

This industry already exists (search for “photo forensics”) and I'm certain you're right to suspect it'll be booming soon. ML should also be good at faking some of the characteristics that current forensic tools check for, so we're likely looking at an arms race for years.
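
For the curious, one classic trick from that field is error level analysis: re-save the JPEG at a known quality and look at where the recompression error differs, since spliced-in regions often have a different compression history. A minimal sketch with Pillow (the filenames are placeholders):

    from PIL import Image, ImageChops, ImageEnhance

    # Error level analysis (ELA): resave the JPEG at a fixed quality
    # and diff against the original. Regions with a different
    # compression history (e.g. pasted-in content) tend to stand out.
    original = Image.open("suspect.jpg").convert("RGB")
    original.save("resaved.jpg", "JPEG", quality=90)
    resaved = Image.open("resaved.jpg")

    ela = ImageChops.difference(original, resaved)

    # The raw differences are tiny; scale brightness so they're visible.
    extrema = ela.getextrema()  # per-channel (min, max)
    max_diff = max(hi for _, hi in extrema) or 1
    ela = ImageEnhance.Brightness(ela).enhance(255.0 / max_diff)
    ela.save("ela.png")

It's exactly the kind of statistical fingerprint an ML model could plausibly learn to erase, which is why I expect the arms race.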

One of the big challenges I'm expecting will be the categories of attack. It seems plausible that we could limit abuse in the case where the original image was made by someone who isn't malicious, by making it easy for viewers to find the originals (perceptual hashing with some sort of distributed ledger or signature system, and getting major services like Facebook to use it; see the sketch below).

But I haven't seen a convincing suggestion for how to handle the case where the original is created by the attacker, so any validation system would only show whatever they submitted. That case falls back on much more failure-prone techniques: you could rely on public information to convince most people that, for example, Barack Obama didn't pledge allegiance to ISIS at a public rally, but most people aren't going to have rock-solid documentation to prove a negative. If an attacker claimed that politician X was having an affair in a hotel room, the fake would probably seem convincing to many people unless the attacker screwed up and left proof of e.g. using stock footage, landmarks from a different city, the wrong time of year, etc.
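
To make that first case concrete, here's a minimal sketch of what the lookup could look like, using the third-party imagehash library; the registry and function names are just illustrative stand-ins, not any real service:

    from PIL import Image
    import imagehash  # third-party: pip install ImageHash

    # Hypothetical in-memory registry of (hash, source_url) pairs -- a
    # stand-in for the distributed ledger / signature system above.
    REGISTRY = []

    def register_original(path, source_url):
        # Publisher side: record the perceptual hash of a known original.
        REGISTRY.append((imagehash.phash(Image.open(path)), source_url))

    def find_original(path, max_distance=8):
        # Viewer side: look up a suspect image by hash similarity.
        # pHash survives resizing and recompression, so accept any
        # registered hash within a small Hamming distance.
        suspect = imagehash.phash(Image.open(path))
        for known, url in REGISTRY:
            if suspect - known <= max_distance:  # '-' is Hamming distance
                return url
        return None  # no match proves nothing either way

Note this only helps when an honest original was registered first; an attacker can just as happily register their fake as the "original".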
