
In the US, if I get caught selling you a gram of pure cocaine, I get the same punishment as I would if I sold you a gram that’s only 20% pure. If I sold you a gram of some random powder and told you it was cocaine, I would likely be prosecuted all the same, whether or not I knew it was fake.

That aside, the “fully synthetic CSAM with no children involved at all” idea relies very, very heavily on taking the word of the guy you just busted with a hard drive full of CSAM.

His defense would essentially have to be “Your honor, I pinky swear that I used the txt2img tab of automatic1111 instead of the img2img tab” or “I did start with real CSAM, but the img2img tab acts as an algorithmic magic wand imbued with the power to retroactively erase the harm caused by the source material.”

There is no coherent defense to this activity that boils down to anything other than the idea that the existence of image generators should — and does — constitute an acceptable means of laundering CSAM and/or providing plausible deniability for anyone caught with it.

The idea that there would be any pushback to arresting or investigating people for distributing this stuff boggles the mind. Inventing a new type of armor to specifically protect child abusers from scrutiny is a choice, not some sort of emergent moral ground truth caused by the popularization of diffusion models.


Generally in the US, you must be proven guilty, not proven innocent. The prosecution must prove that the defendant actually committed the crime. The defendant absolutely can claim plausible deniability.

Although currently it is not entirely clear whether mere possession should continue to constitute a crime by itself, or whether it should require actual child abuse (because the former used to imply the latter).


I did not mention conviction, only prosecution.

It’s not unreasonable to think “we should look into this guy who is publishing CSAM, even though he’s posted it under the hashtag #totallylegal.”

I have a legitimate question for the defenders of this specific activity: If a bad actor produced real CSAM by harming children and then ran it through an img2img model that creates very similar but distinct outputs, what do you call those outputs? CSAM? Art?

What (if anything) should happen when law enforcement sees those images?

How can the idea of “synthetic CSAM that harms no one” exist without some mechanism to verify that that is the case?