Not sure where you live, but I'm pretty sure it's legal. And for good reason - nobody is harmed by creating and consuming them (this is not true for pictures, for example). In fact, I believe "Lolita" can be found in libraries and is quite suggestive (I don't know, I've never read it).
I find it awkward to be defending this of all things in this post, but I'm allergic to the "think of the children" argument. It's a reliable way to shut off logical reasoning.
Edit: I see that later you give better examples of regulation (not information regulation though). Just editing to let you know that they were much more convincing to me.
I’m allergic to the “free speech is absolute” argument as a way to shut off logical reasoning - it is literally not true and shouldn’t be true (CSAM being the most broadly agreed-upon exception to it).
My argument is not that this stuff is simple or even that it should be regulated. I certainly don’t know how it should be, if it should. I’m just disputing the assertion that somehow AI is the one technology that should be built and operated without constraint, especially because it’s so transformative.
If it’s so transformative, then we should be extra cautious around its deployment, and not fall back on “it looks like language therefore free speech.”
Imagine where humanity would be today if “it’s transformative, therefore it should be regulated” had been the winning side when the printing press came into existence.
I didn't make that argument. I said that it's transformative, therefore we should be cautious, and I don't know if we should or how we could regulate it.
But yeah, the hypothetical people saying "everything to come from the printing press will be good" were wrong. It dramatically destabilized society, which mattered quite a lot more to the people living through its introduction than it does to us today. Note: we are likely to live through a pretty gnarly destabilization period ourselves, and we have a legitimate interest in mitigating how gnarly it gets.
And I've not seen a compelling argument as to why all possible technologies need to net out positively in the long run. Would love to hear one though!
Absolute free speech is fine, IF it includes the right to agree to disagree. Agreeing to disagree is the true boundary on freedom.
We've seemingly forgotten that.
The internet can be a much richer environment, with non-homogeneous streams of consciousness. Way, way too much agreement without boundaries locks out streams of consciousness that can add a lot of value to the system.
Nobody realizes this is happening; people are happy with 'more of the same' forever. That's from a lifetime of observation.
How hard could creating erotica about children be, even without an LLM? Just take regular erotica and change the ages of the protagonists. We should ban the internet and MS Word while we're at it.
The very small benefits of censoring generated abuse material in text form are massively outweighed by the risks of only a few powerful people having access to a transformative technology. We should not let Meta, OpenAI, and others build moats around their products by using perverts as an excuse.
We’re also fine limiting who is allowed to practice law. Or who is allowed to drive a car. Or who is allowed to own a firearm. Or who is allowed to send automated text messages. Or who is allowed to market a drug. Or who is allowed to broadcast radio on certain bands. Or who is allowed to fly aircraft. Or who is allowed to dump things in rivers.
People become blind to the huge amount of control society exerts over most technologies, often for good reason and with decent results, and then have some ideological fixation that AI needs to be the one technology that is totally immune to any control or even discussion of control.
All of your examples offer up immediate, obvious harms that have actually hurt people in real life in measurable ways (injury, death), and that we've put mechanisms of control in place to reduce. I think that's good. It means society chooses to control things when a clearly articulated risk is both present, and manifests enough to warrant that control.
Not regulating lawyers leads to direct harm to the people hiring them, and the outcome of their court cases. It also has knock-on effects regarding the integrity of the justice system, which is part of the government. Exerting control makes sense for a bunch of reasons, from actual harm being manifested to the fact that justice is a government responsibility.
Not regulating who can drive cars leads to additional injury and death.
Gun control laws are attempting to address the harm of gun violence, which leads to injury and death.
Regulating spam addresses the harm of one actor externalizing their costs onto all of society, making our messaging systems (like phone calls, texting, and email) ineffective at their main purpose. This harms societies that rely on those systems for vital communication, since all of them are "push" channels, in the sense that one can get overwhelmed by incoming messages, emails, and calls.
Regulating drug manufacture addresses the case of manufacturers producing "medicine" that harms those who buy it, or extracts money from them despite the "medicine" being entirely ineffective. Both harms are well-documented going back decades/centuries.
Regulation of spectrum (broadcast and otherwise) is a result of the inherent scarcity of spectrum. Much like the automated messaging example, this system of control maintains the utility of the communication channel.
Regulating who can pilot aircraft has similar arguments to cars, but more so: the costs are higher and the potential damage is greater.
Regulating what can be dumped into rivers again addresses an externalization of cost onto society: companies that dump toxic waste into public water supplies poison citizens. This is a real risk, and regulation helps address it.
In every single case, the control society exerts addresses a real, actual harm that has been observed in many, many cases.
I have yet to hear anyone articulate a real, actual harm caused by an uncensored AI. I run Mistral on my laptop using koboldcpp or llama.cpp. Even if someone were to host Mistral publicly and allow folks to chat with it, the harm is unclear. People say inappropriate things (at least in some contexts) to Mistral, and Mistral responds in kind. Where's the harm? If I want it to help me write a violent fight scene for a novel or play, or describe something sexual for an erotic story, so what? This sort of stuff is discussed by humans constantly.
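For anyone wondering what "running Mistral on a laptop" actually involves, here's a minimal sketch using the llama-cpp-python bindings for llama.cpp. The model filename is a placeholder for whatever quantized Mistral GGUF file you've downloaded; everything else is the standard API.

    # Minimal local chat with a Mistral GGUF model via llama-cpp-python
    # (pip install llama-cpp-python). model_path is a placeholder for a
    # quantized Mistral file downloaded to the local machine.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # placeholder filename
        n_ctx=4096,      # context window size
        verbose=False,
    )

    result = llm.create_chat_completion(
        messages=[{"role": "user",
                   "content": "Help me outline a tense fight scene for a novel."}],
        max_tokens=256,
    )
    print(result["choices"][0]["message"]["content"])

That's the whole setup: a local process on consumer hardware, no server or account involved.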
For me to buy that we need control and regulation, I need to understand the problem being solved, and the cost of the solution needs to be far outweighed by the benefit. So far, I haven't heard such a tradeoff articulated. My hypothesis is that most companies working on training AIs have a lot to lose, so most of the "safety" talk is intended to provide legal cover.
If we're moving goalposts that far, let's ban pencils and keyboards for the exact same reason. Running local language models at least requires a few hundred bucks more in startup costs and far more electricity than a Word document or a notebook. In fact, we might as well promote illiteracy to prevent people from being able to read harmful content altogether.