Hacker News

Not mentioned thus far anywhere in the article or in comments: potentially weaponizing this against deep fakes.

What's to stop cameras from making raw photos "radioactive" from now on, making deepfakes traceable by tainting the image-sets on which the models generating the deepfakes were trained?

This isn't my field, and I'm sure there's a workaround, but I'd suspect that detecting sufficiently well-placed markers would require knowing the original, pre-mark data, which should be impossible if the data is marked before it's ever written to camera storage. I haven't fully thought through the logistics, such as how the radioactive data would later be identified.

But am I missing something? I feel like this is viable.
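As a rough illustration of the idea, here is a minimal sketch of a keyed watermark: the camera adds a small pseudorandom perturbation derived from a secret key at capture time, and anyone holding the key can later test an image (or data derived from it) by correlating against the same pattern. This is my own toy construction for the sake of the thread, not the scheme from the article; the function names, the key scheme, and the fixed `strength` parameter are all assumptions, and a real scheme would need to survive compression, cropping, and model training.

```python
import numpy as np

def embed_mark(image, key, strength=4.0):
    """Add a small keyed pseudorandom perturbation to raw pixel data.

    The pattern is derived from a secret key, so later detection
    needs only the key, not the original unmarked image.
    """
    rng = np.random.default_rng(key)
    pattern = rng.standard_normal(image.shape)
    marked = image.astype(np.float64) + strength * pattern
    # Round and clip back into the valid 8-bit pixel range.
    return np.clip(np.rint(marked), 0, 255).astype(np.uint8)

def detect_mark(image, key):
    """Return the normalized correlation between the image and the
    keyed pattern. A score well above the ~1/sqrt(N) noise floor
    suggests the mark is present."""
    rng = np.random.default_rng(key)
    pattern = rng.standard_normal(image.shape)
    img = image.astype(np.float64)
    img -= img.mean()  # remove DC component before correlating
    return float((img * pattern).sum()
                 / (np.linalg.norm(img) * np.linalg.norm(pattern)))
```

A marked image correlates noticeably with the correct key's pattern, while an unmarked image, or a test with the wrong key, hovers near zero. The open question from the thread remains whether such a mark survives a model's training process well enough to taint its outputs, which is what the "radioactive data" result actually measures.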




Printer manufacturers have been doing this [1] for a long time.

1 - https://en.wikipedia.org/wiki/Machine_Identification_Code


Yep, it's one of the reasons many color printers won't print black-and-white documents unless yellow ink or toner is present.



