
I’ve seen a number of attempts to identify deepfakes and other forms of manipulated images using AI. This seems like a fool’s errand, since it becomes a never-ending adversarial AI arms race.

Instead, here is a proposal I haven’t seen anywhere that I think could work well: camera and phone manufacturers could have their devices cryptographically sign each photo or video taken. And that’s it. From that starting place, you can build a system on top of it to verify that the image on the site you’re reading is authentic. What am I missing that makes this an invalid approach?

I do understand that this would require manufacturers to implement it, but getting them on board seems achievable. I even think if you got one company like Apple to do this, that would be enough traction for the rest of the industry to have to follow suit.
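For what it’s worth, here is a minimal sketch of what per-photo signing could look like, using Ed25519 from Python’s "cryptography" package. Key provisioning, secure storage on the device, and public-key distribution are all hand-waved; the names are made up for illustration, not taken from any real device API.

    # Sketch only: a per-device Ed25519 key signs the raw image bytes at capture
    # time, and a viewer verifies against the device's published public key.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    device_key = Ed25519PrivateKey.generate()   # in practice: held in secure hardware
    device_pubkey = device_key.public_key()     # in practice: published by the vendor

    def sign_photo(photo_bytes: bytes) -> bytes:
        return device_key.sign(photo_bytes)

    def verify_photo(photo_bytes: bytes, signature: bytes) -> bool:
        try:
            device_pubkey.verify(signature, photo_bytes)
            return True
        except InvalidSignature:
            return False

    photo = b"...raw sensor data..."
    sig = sign_photo(photo)
    print(verify_photo(photo, sig))         # True
    print(verify_photo(photo + b"x", sig))  # False: any edit breaks the signature

The crypto itself is the easy part; getting keys into devices and keeping them from being extracted is where it gets hard.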


Such a system would require all devices to be secure against key extraction. Otherwise the attacker need only choose the most vulnerable device, extract a signing key from it and sign their deepfakes with it.

It would also allow any device manufacturer to sign anything they like, as well as anyone who can coerce a device manufacturer to do so.


Apologies for a late response here (by HN standards, where conversations last only a number of hours). I agree that an attacker could compromise weaker devices and sign their deepfakes with them. But then hopefully those keys or that manufacturer would be blacklisted; a rough sketch of what such a check could look like is below. In my mind, a company like Huawei could implement this, but I as a consumer of media wouldn’t necessarily trust photos from their devices. But photos signed by an iPhone, where Apple has a better privacy record, I could trust more.

Thanks for replying though, this does help me understand the challenges in a system like this.
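For concreteness, a rough sketch of the blacklisting idea, again with Python’s "cryptography" package. The revocation list and the key fingerprints are entirely hypothetical; this just shows where such a check would sit relative to signature verification.

    # Sketch only: refuse to trust signatures from keys known to have been extracted.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    # Hypothetical published list of fingerprints of compromised device keys.
    REVOKED_FINGERPRINTS = {"ab12cd34...", "ef56ab78..."}

    def fingerprint(pubkey: Ed25519PublicKey) -> str:
        raw = pubkey.public_bytes(
            encoding=serialization.Encoding.Raw,
            format=serialization.PublicFormat.Raw,
        )
        return raw.hex()

    def trusted(photo_bytes: bytes, signature: bytes, pubkey: Ed25519PublicKey) -> bool:
        if fingerprint(pubkey) in REVOKED_FINGERPRINTS:
            return False  # key was blacklisted after a known compromise
        try:
            pubkey.verify(signature, photo_bytes)
            return True
        except InvalidSignature:
            return False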


It's not really a matter of privacy record. In general, manufacturers don't leak signing keys on purpose.

For example, it was discovered that it's possible to extract keys from Intel SGX enclaves using certain speculative execution vulnerabilities. Intel SGX predates Spectre; speculative execution attacks weren't a category of vulnerability anyone knew existed when SGX was designed.

Vulnerabilities are regularly discovered in almost everything, iPhones included. Diligent vendors are quick to patch them, but an attacker only needs to wait until the next vulnerability is discovered and then extract the signing keys from a device that hasn't been patched yet.

You also have no way of knowing which keys have been compromised -- if a million devices of a particular model have a known vulnerability, any attacker could extract the keys from any of them. Even blacklisting all of them (which would tend to dissatisfy their innocent owners) still wouldn't save you from an attacker using an unpublished vulnerability against a device you don't even know is vulnerable.

To put this another way, this is basically the same class of technology that Hollywood uses for DRM. Now, how many Hollywood movies can you say have not been pirated by anyone?


> it becomes a never ending adversarial AI arms race

SOTA model training now costs up to 7 figures. Creating deepfakes that couldn't be detected by some 8-figure SOTA model might not be worth it; at some point no regular Joe deep learning researcher could make one. Then it would only be done against high-value targets by large corporations or government bodies.


This group at DARPA is looking for answers, if you have one:

https://www.darpa.mil/program/media-forensics


Cut the wires to the camera sensor and connect them to a device that converts a video signal into fake sensor outputs. Once such devices are being sold, that's available to anyone with a steady hand and a soldering iron.


> This seems like a fool’s errand since it becomes a never ending adversarial AI arms race.

Mission accomplished? https://xkcd.com/810/



