> Finding and prosecuting the people who make real CP is difficult enough already.
let's assume that AI generated CP should be illegal. Does that mean that possession of a model that is able to generate such content should also be illegal? If not, then it's easy to just generate content on the fly and never store anything illegal. But if we make the model itself illegal, how do you enforce that? Models are versatile enough to generate lots of different content, so how do you decide whether the ability to generate illegal content is a byproduct of a model or its purpose?
>> Finding and prosecuting the people who make real CP is difficult enough already.
> let's assume that AI generated CP should be illegal
Well that's a big assumption, lol. I definitely agree that it would be impossible to enforce, for the reasons you say.
I personally would not be in favor of such a law at all. Partially because it's unenforceable as you say, and partially on principle.
The argument against real CP is extremely clear: we deem it abominable because it harms children. That doesn't apply to computer-generated CP, or the models/tools used to produce it.
I think you might be able to argue that AI generated CP could cause indirect harm by feeding those desires and making people more likely to act on them, but I agree that's a far more fragile argument.
I think there's a big range of possibilities there and they're not mutually exclusive.
There's the possibility that watching FOO directly encourages viewers to do FOO in real life. Like you said, this is the most fragile. I think clearly this is true in some cases -- most of us have seen a food commercial on TV and thought, "I could really go for that right now." I'm less convinced that it's true for something like pedophilia: the average person will be revolted by it, not encouraged, unless they already are into that kind of awful thing.
There's the possibility that watching FOO doesn't directly encourage viewers to do FOO, but serves to kind of normalize it. I think this happens a lot, but I think it takes a carefully crafted context and message.
There's the possibility that AI generated CP could actually help children, by providing a safe outlet for pedophiles so that they wouldn't need to do heinous shit in real life. I recall reading studies finding that instances of (adult) rape in a society were inversely correlated with the availability of (adult) pornography, one possible explanation being that porn provided a safe outlet for people who weren't getting the kind of sex they wanted.
Most people are not developers and don't provide SaaS products; they are only consumers of existing technology.
In that sense, instead of enforcing the non-existence of models, enforcement could simply make it illegal to provide any service that processes inputs or produces outputs that are CP-like, e.g. by obligating the people running the models to add filters on the input and/or on the result after it is generated but before it is displayed or returned.
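As a minimal sketch of what that provider-side obligation might look like (the classifier and generator here are hypothetical placeholders, not any real API):

```python
# Sketch of a service-side filter wrapper, assuming the proposal above:
# the service provider screens traffic in both directions.
# `is_disallowed` and `generate` are illustrative placeholders.

def is_disallowed(content: str) -> bool:
    # Placeholder: a real deployment would call a trained
    # content-safety classifier here.
    return False

def generate(prompt: str) -> str:
    # Placeholder for the underlying generative model.
    return f"<model output for: {prompt}>"

def safe_generate(prompt: str) -> str:
    # Filter on input: refuse before any generation happens.
    if is_disallowed(prompt):
        raise PermissionError("prompt rejected by input filter")
    result = generate(prompt)
    # Filter on output: check the result after it is generated but
    # before it is displayed or returned, as proposed above.
    if is_disallowed(result):
        raise PermissionError("output rejected by output filter")
    return result
```

The point of the design is that nothing about the model itself has to be illegal: the liability attaches to the service boundary, which is the only place most non-developer consumers ever touch the technology anyway.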