At first glance, this seems like one of the more interesting projects to come out of Facebook AI. Justification: In the future, AI models will increasingly become interwoven with tech. It's not going to be so much "AI programming" as just "programming".
That raises an interesting question – one that has bothered me for a long time: Who owns copyright on training data?
As we saw with Clearview AI, a lot of data is being used without consent or even knowledge of the creators. And it's extremely hard to detect this usage, let alone enforce rights on it.
I might be misunderstanding this work, but it seems like this would give you the ability to mark your digital data in such a way that you could prove it was later used in a model.
Unfortunately, it's not that simple. You don't have access to the models (normally). And I'm betting that this work is somehow domain-specific, meaning you can't really come up with a generalized marker to imprint on all your data.
But this implies you might be able to mark your data with many such markers, in hopes that one of them will later be triggered:
> We also designed the radioactive data method so that it is extremely difficult to detect whether a data set is radioactive and to remove the marks from the trained model.
The flipside is interesting, too: This might give companies yet another way of tracking users. Now you can check whether a given user was in your model's actual training set, and if not, fine-tune the model on the fly.
Is there any particular reason to think this won't become another cat-and-mouse escalation, with training algorithms building in protection against this (and against other related training-set manipulations, especially the poisoning attack the article talked about)? That isn't to say it's useless; most cat-and-mouse escalations prove quite useful as long as the mouse stays a little ahead of the cat.
In this case, couldn't such a marker be detected by looking at images of the same class, checking whether there is any common perturbation across them, adjusting the images to remove that perturbation, and then training the neural network? Even if there isn't a genuinely common perturbation, adjusting the images by whatever spurious "common perturbation" the procedure produces shouldn't be any more destructive than the marking method itself.
If the mark could be made dependent on both the initial image and the class, it would be much harder to detect this way. But would such a mark still be detectable in the trained model, given that the images within a class would no longer share a common perturbation?
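To make the first idea concrete, here is a rough numpy sketch of the "estimate a common perturbation and subtract it" step. I'm assuming the mark is a simple additive pixel pattern shared by all marked images of a class, which is my own simplification (the paper's marks live in a feature space), and all the names here are placeholders:

```python
import numpy as np

def estimate_common_perturbation(suspect_class_images, reference_images):
    """Crude estimate of a shared additive mark: the difference between the
    mean of the suspect class and the mean of presumed-clean reference images.
    Both arrays have shape (N, H, W, C) with values in [0, 1]."""
    return suspect_class_images.mean(axis=0) - reference_images.mean(axis=0)

def remove_perturbation(images, perturbation):
    """Subtract the estimated mark and clip back to the valid pixel range."""
    return np.clip(images - perturbation, 0.0, 1.0)

# Hypothetical usage, before training:
# mark = estimate_common_perturbation(suspect_images, clean_images)
# training_images = remove_perturbation(suspect_images, mark)
```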
Not relevant to the main thrust of the article, but barium sulphate is not radioactive; it just efficiently absorbs X-rays. Radioactive markers are, I believe, most commonly used in PET scans; Wikipedia suggests fluorine-18 as the common isotope.
You are very correct. The authors are drawing a fascinating and clever parallel, but the analogy (contrast X-ray imaging) is the wrong one. What they mean is a radioactive element (F-18) marking a glucose molecule so that sugar metabolism in the human body can be tracked with an imager (PET). This is one of many techniques in the field of nuclear medicine, or molecular imaging.
I think most ML models aren't very "lean", meaning there is space in their weight layers for information that isn't directly attributable to predictive accuracy. That spare capacity is likely where this "radioactive" data is being stored/remembered.
Leanness could be increased during training by progressively trimming the width/depth of the weight layers, but I doubt every model has this done.
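For what it's worth, a minimal sketch of the kind of trimming I have in mind, using PyTorch's built-in magnitude pruning on a toy model. The layer choice and the 30% sparsity level are arbitrary assumptions on my part, not anything from the article:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy model standing in for a real network's layers.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

for module in model.modules():
    if isinstance(module, nn.Linear):
        # Zero out the 30% smallest-magnitude weights in this layer.
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the pruning mask into the weights

zeroed = sum(int((m.weight == 0).sum()) for m in model.modules()
             if isinstance(m, nn.Linear))
print("zeroed weights:", zeroed)
```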
This is definitely true. In fact, this can be exploited to extract sensitive/private attributes about the training data from the learned models. This may become an issue for, e.g., AI in healthcare.
"Watermarking" and trademarking can be different things. And access to data is already licensed.
I think you're right in that DRM systems are likely to be built on top of such infrastructure, but DRM has been broken in other contexts before and the system doesn't necessarily have to be used for DRM.
The question would be whether it’s possible to make one’s behavioural data (online or offline) “radioactive” to then prove with a high degree of accuracy whether someone (like Facebook) is stalking you online to deliver targeted ads.
At the moment, advertising providers use a lot of data for ad targeting, some of which is benign and/or acquired with informed consent. As a result, it is impossible for the user to tell whether an ad was targeted at them based on data they consented to share or on data they didn't want collected or used for advertising purposes.
Could be easy enough if your opponent isn't expecting it and isn't deploying countermeasures. Something like a customized AdNauseam (https://adnauseam.io/) that prefers clicking on some particular crap you don't like.
I'm surprised that it's even necessary to modify the dataset to achieve this. From what I've read, large models will often memorize their training data, and even with smaller models it seems like it should be possible to tell whether a given set of images was in the training set, simply because the loss on those images will be lower.
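Something like this naive loss comparison, I mean. This is only a sketch of the generic membership-inference heuristic, not anything from the paper; the model, data names, and any threshold you'd pick are all placeholders:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mean_loss(model, images, labels):
    """Average cross-entropy of the model on a batch of candidate images."""
    model.eval()
    return F.cross_entropy(model(images), labels).item()

# Hypothetical usage:
# suspect_loss = mean_loss(model, suspect_images, suspect_labels)
# heldout_loss = mean_loss(model, heldout_images, heldout_labels)
# If suspect_loss is clearly lower than heldout_loss, that's (weak) evidence
# the suspect images were in the training set.
```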
It is already possible to know if a particular image has been used in training (see e.g. https://arxiv.org/abs/1809.06396, by the same authors), but this new work also provides a p-value, giving you a confidence level on the result.
Also notice that being proactive in watermarking the dataset can be desirable in some cases. For example, many datasets have large overlaps in the base images they use (but sometimes different labels), so it can be interesting to know whether a model was trained on "your" version of the dataset.
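If I'm reading the paper right, the test boils down to asking whether the classifier's weight vector for a marked class aligns with the secret carrier direction more than chance would allow. Here's my own back-of-the-envelope version of that p-value; the function and variable names are mine, not the paper's:

```python
import numpy as np
from scipy.special import betainc

def cosine_pvalue(carrier, class_weights):
    """One-sided p-value for the null hypothesis that `class_weights` is a
    random direction unrelated to `carrier` (both 1-D vectors of length d).
    Under the null, the squared cosine of two random unit vectors in R^d
    follows Beta(1/2, (d-1)/2), which gives a closed-form tail probability."""
    d = carrier.shape[0]
    c = float(np.dot(carrier, class_weights) /
              (np.linalg.norm(carrier) * np.linalg.norm(class_weights)))
    if c <= 0.0:
        return 1.0  # no positive alignment, nothing to report
    return 0.5 * betainc((d - 1) / 2.0, 0.5, 1.0 - c ** 2)

# Hypothetical usage: a tiny p-value means the marked class's weights point
# suspiciously close to the secret carrier direction.
# p = cosine_pvalue(secret_carrier, fc_layer_weights[marked_class])
```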
Not mentioned thus far anywhere in the article or in comments: potentially weaponizing this against deep fakes.
What's to stop cameras from making raw photos "radioactive" from now on, making deepfakes traceable by tainting the image-sets on which the models generating the deepfakes were trained?
This isn't my field. I'm certain there's a workaround, but I'd suspect detecting sufficiently well-placed markers would require knowing the original data pre-mark, which should be impossible if the data is marked before it's written to camera storage. I haven't even fully thought out the logistics yet, such as how to identify the radioactive data.
But am I missing something? I feel like this is viable.
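For illustration only, the simplest version of what I'm picturing in camera firmware would be a keyed additive pattern applied before the file hits storage. This is my toy, not how the paper marks images (which is done in a feature space), and a pattern this naive would likely not survive compression:

```python
import numpy as np

def mark_at_capture(raw_image, device_key, strength=1):
    """Hypothetical firmware-side marking: add a low-amplitude pseudorandom
    pattern derived from a per-device secret key before the image is written
    to storage. raw_image: uint8 array of shape (H, W, C)."""
    rng = np.random.default_rng(device_key)              # key seeds the pattern
    pattern = rng.integers(-1, 2, size=raw_image.shape)  # values in {-1, 0, 1}
    marked = raw_image.astype(np.int16) + strength * pattern
    return np.clip(marked, 0, 255).astype(np.uint8)
```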
Ctrl+F shows no mention of studying what post-processing quantization or pruning does to models trained on their tampered dataset.
Overall, my instinct is that one could create an NN architecture that is not affected, or even easily detect the tampered pictures with a preprocessing pass and untamper them.
NNs are fuzzy by nature; they tolerate noise, so you could add a bit more noise to the dataset to defeat the "radioactiveness".
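Concretely, the cheapest version of that defence would just be noise injection at training time. A sketch only: sigma is a guess, and since the article claims removing the marks from the trained model is extremely difficult, this may well not work:

```python
import numpy as np

def add_defensive_noise(images, sigma=0.02):
    """Add small Gaussian noise to every training image in the hope of
    drowning out a fragile mark. images: float array with values in [0, 1]."""
    noisy = images + np.random.normal(0.0, sigma, size=images.shape)
    return np.clip(noisy, 0.0, 1.0)
```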
Also, I'm pretty sure Facebook is not doing it to protect user data, but I have no proof.
Haven't read the article yet, as Facebook is blocked at work, but I would guess this is mostly an application of steganographic techniques: hiding known patterns in datasets that are likely to be stolen/borrowed for training.
Then observe the outputs of said models to try to discern related patterns.
I've read Accelerando and don't remember a major plot point that remotely looks like this. Perhaps you're thinking of one of the many secondary plot points.
First chapter; the lobsters are afraid of steganographic covert channels in the training data for their translator:
> Manfred drains his beer glass, sets it down, stands up, and begins to walk along the main road, phone glued to the side of his head. He wraps his throat mike around the cheap black plastic casing, pipes the input to a simple listener process. "Are you saying you taught yourself the language just so you could talk to me?"
> "Da, was easy: Spawn billion-node neural network, and download Teletubbies and Sesame Street at maximum speed. Pardon excuse entropy overlay of bad grammar: Am afraid of digital fingerprints steganographically masked into my-our tutorials."
I guess it would have only been a major plot point if the digital fingerprints had turned out to be present and had tripped some kind of monitoring system.
Looking forward to seeing what comes of this.