Roughly speaking, the whole point of an LLM is to create plausible-sounding text without regard to the truth (which it cannot determine or derive), so it's a tool that is perfectly suited to this sort of malfeasance.
I’ve read this over and over, and I believe it - but it’s so easy to forget when it absolutely nails something like debugging, or suggesting a CMake edit, or finding 5-letter combinations that are reversible with a vowel in the middle, and so on.
It’s still just mind-blowing that I can get a sarcastic summary of an email in the style of GLaDOS from Portal and, on the same screen, get an email proofread.
It’s funny how far “what should the next word be” can go.
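For what it's worth, that next-word loop is simple enough to sketch in a few lines of Python. The vocabulary and probabilities below are entirely made up for illustration; a real model conditions on the whole preceding context with a trained network of billions of parameters, not a lookup table over the last word:

    import random

    # Toy next-word table: each word maps to a distribution over plausible
    # next words. Everything here is invented for illustration; a real LLM
    # learns a distribution over ~100k tokens conditioned on the entire
    # context so far, not just the previous word.
    NEXT_WORD = {
        "<start>": {"the": 0.6, "a": 0.4},
        "the": {"cake": 0.5, "test": 0.5},
        "a": {"lie": 1.0},
        "cake": {"is": 1.0},
        "test": {"is": 1.0},
        "is": {"a": 0.7, "great": 0.3},
        "great": {"<end>": 1.0},
        "lie": {"<end>": 1.0},
    }

    def generate(max_words=10):
        word, out = "<start>", []
        for _ in range(max_words):
            nxt = NEXT_WORD[word]
            # "Plausible" just means "likely given what came before":
            # sample the next word in proportion to its probability.
            word = random.choices(list(nxt), weights=list(nxt.values()))[0]
            if word == "<end>":
                break
            out.append(word)
        return " ".join(out)

    print(generate())  # e.g. "the cake is a lie"

Notice that nothing in the loop checks whether the output is true; it only checks that each step is likely. Scale the table up and you get both the GLaDOS email summaries and the confident fabrications.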
It seems there are cases where the ability to generate heaps of plausible-sounding text on demand is helpful, and cases where it is pretty much the worst possible capability to have.
I don't agree. The way most LLMs currently generate responses might work like this, but the purpose of LLMs is to get closer to simulating how human minds and languages operate. A lot of models have been trying to overcome the problem of fabrication; GPT bots like SciSpace's ResearchGPT are trying to do precisely that.