I'm certain they've thought of this and have decided that the alternative—a firehose of whatever data the AI has in its grasp—is worse than the "censored" version. I'm curious to know what your ideal approach would be.
Open weights and open models, with open tools that allow user-defined alignment and realignment, are, I believe, the only truly humanist path forward. We can't choose for people. It's wrong to think we know better than they do what they want. Full stop.
Some of those people will make terrible decisions, and some will make objectionable ones, but the alternative is basically full thought control. And, sadly, nobody in the "bad" scenario needs to be anything other than super well-intentioned (if naive).