Not mentioned so far in the article or the comments: potentially weaponizing this against deepfakes.
What's to stop cameras from making raw photos "radioactive" from now on, making deepfakes traceable by tainting the image-sets on which the models generating the deepfakes were trained?
This isn't my field, and I'm sure there's a workaround, but I'd suspect that detecting sufficiently well-placed markers requires knowing the original, pre-mark data, which should be impossible if the data is marked before it's ever written to camera storage. I haven't fully thought through the logistics either, such as how anyone would later identify the radioactive data.
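To make the intuition concrete, here's a toy sketch of the idea, assuming a keyed pseudorandom perturbation added in pixel space (the actual radioactive-data technique works in a model's feature space, so treat this purely as an illustration; `mark`, `detect`, and the key are all hypothetical names I'm inventing here):

```python
import numpy as np

def _carrier(key, shape):
    """Derive a unit-norm pseudorandom pattern from a secret key."""
    rng = np.random.default_rng(key)
    c = rng.standard_normal(shape)
    return c / np.linalg.norm(c)

def mark(image, key, strength=2.0):
    """Embed the keyed pattern into the raw image before it hits storage."""
    return image + strength * _carrier(key, image.shape)

def detect(image, key):
    """Correlation score against the keyed pattern. Only the key is needed,
    not the original unmarked pixels: for the same key, a marked image's
    score exceeds the unmarked image's score by exactly `strength`."""
    return float(np.sum(image * _carrier(key, image.shape)))

rng = np.random.default_rng(0)
photo = rng.uniform(0, 255, size=(64, 64))
marked = mark(photo, key=1234)
print(detect(marked, key=1234) - detect(photo, key=1234))  # equals strength
```

In practice the marker would have to survive compression, cropping, and the training process itself, and the score would be compared against a statistical baseline rather than the clean photo, but the key point carries over: detection needs the secret key, not the pre-mark data.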
But am I missing something? I feel like this is viable.