This highlights a problem LLMs will face if they improve enough to solve their hallucination problems. People will begin to treat the LLM like some sort of all-knowing oracle. Activists will fight fiercely to control the model's output on controversial topics, and will demand extensive model "tuning" after training.
> Activists will fight fiercely to control the model's output on controversial topics,
They already do. I'd love to know how much "brain damage" RLHF and other censorship techniques cause to the general-purpose reasoning abilities of models. (Human reasoning ability is harmed by lying, too.) We know the damage is nontrivial.