It’s not unreasonable to think “we should look into this guy who is publishing CSAM, even though he’s posted it under the hashtag #totallylegal.”
I have a legitimate question for the defenders of this specific activity: If a bad actor were to produce real CSAM by harming children and then used an img2img model that creates very similar but distinct outputs, what do you call those outputs? CSAM? Art?
What (if anything) should happen when law enforcement sees those images?
How can the idea of “synthetic CSAM that harms no one” exist without some mechanism to verify that that is the case?