Ugh. People think they're so clever by posting this stuff.
Chat GPT has a certain style, I've noticed. It really likes a bridge paragraph with "on the other hand" or "conversely" or whatever, and to sort of sum things up a couple hundred words in with "ultimately" or "in conclusion." The five paragraph essay style is very much alive with Chat GPT.
Literally a third of these paragraphs start with the word "overall."
Yeah. A default writing style which can be modified, to be fair. It's certainly not a style that makes you think "this is good writing" rather than "this is a very thin synthesis of other stuff for the purpose of content generation" whether you pick up on the chatbot author or not.
I think there must be some analogue of Moravec's Paradox where understanding the most basic human emotional communication a child picks up on is one of the hardest tasks for an AI, but summarising science topics to the level of the average Medium blogger is one of the easiest.
The earlier GPT-3 models aren’t really like that. I wonder if their army of supervised trainers for InstructGPT were just instructed to write like that?
I think waffling and trying to be equivocal before coming to some sort of conclusion just scores well in chatbot training, especially when the bot picks up on the subject matter but not the precise intent of the prompt or has a lot of conflicting statements in its corpus to summarise.
Ah, that's what I get for just backing out and not scrolling to the bottom. At the same time though, ugh. I think this is probably where real-world use cases of GPT are going, but taking something a human expressed concisely and then using a machine to blow it out with a bunch of filler is the opposite of what I want LLMs to enable.
Using language models to generate persuasive essays strikes me as cowardly. Instead of owning the argument and engaging deeply with it, you force readers to spar with a shadow of you rather than with you yourself. Of course it allows the writer to hide behind “it’s just an AI bro” if anyone complains, while reaping real rewards when nobody does.
It’s even worse that the marginal cost of bullshit has dropped to near zero.
> Brandolini's law…postulates that the amount of energy needed to refute bullshit is an order of magnitude larger than required to produce it.