[flagged] Balancing AGI Pessimistic and Radical Views (medium.com/intuitionmachine)
13 points by imartin2k on Dec 13, 2022 | 11 comments



Ugh. People think they're so clever by posting this stuff.

Chat GPT has a certain style, I've noticed. It really likes a bridge paragraph with "on the other hand" or "conversely" or whatever, and to sort of sum things up a couple hundred words in with "ultimately" or "in conclusion." The five paragraph essay style is very much alive with Chat GPT.

Literally a third of these paragraphs start with the word "overall."


Yeah. A default writing style which can be modified, to be fair. It's certainly not a style that makes you think "this is good writing" rather than "this is a very thin synthesis of other stuff for the purpose of content generation", whether or not you pick up on the chatbot author.

I think there must be some analogue of Moravec's Paradox where understanding the most basic human emotional communication a child picks up on is one of the hardest tasks for an AI, but summarising science topics to the level of the average Medium blogger is one of the easiest.


The earlier GPT-3 models aren’t really like that. I wonder if their army of supervised trainers for InstructGPT were just instructed to write like that?


It’s an easy style to replicate. This wouldn’t surprise me in the slightest.


I think waffling and trying to be equivocal before coming to some sort of conclusion just scores well in chatbot training, especially when the bot picks up on the subject matter but not the precise intent of the prompt, or has a lot of conflicting statements in its corpus to summarise.


Was this written by a chatbot? The first few paragraphs are giving me major 'freshman padding for space' vibes.


Yes, in fact. This text shows up at the bottom of the page:

> Disclaimer: This essay was generated by StableDiffusion (the image) and ChatGPT (the text). It is based on my tweetstorm, which can be found here: https://twitter.com/IntuitMachine/status/1600804191731036160


Ah, that's what I get for just backing out and not scrolling to the bottom. At the same time though, ugh. I think this is probably where real-world use cases of GPT are going, but taking something a human expressed concisely and then using a machine to blow it out with a bunch of filler is the opposite of what I want LLMs to enable.


Yes, it was ChatGPT generated. Per the disclaimer at the bottom:

"Disclaimer: This essay was generated by StableDiffusion (the image) and ChatGPT (the text)."


Should be easy to fix by adding to the prompt: "be concise. Avoid padding and flowery language"
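
For what it's worth, a minimal sketch of what that might look like if you're calling the model programmatically rather than through the web UI. The client, model name, and exact prompt wording below are my own assumptions, not anything from the article:

    # Minimal sketch: steering the model toward terse output with a system message.
    # Assumes the official OpenAI Python client (openai >= 1.0) and an API key in
    # the OPENAI_API_KEY environment variable; the model name is just a placeholder.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Be concise. Avoid padding and flowery language."},
            {"role": "user",
             "content": "Summarise the optimistic and pessimistic views on AGI."},
        ],
    )
    print(response.choices[0].message.content)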


Using language models to generate persuasive essays strikes me as cowardly. Instead of owning the argument and engaging deeply with it, you force readers to spar with a shadow of yourself rather than your own identity. Of course it allows the writer to hide behind “it’s just an AI bro” if anyone complains, while still reaping real rewards when they don’t.

It’s even worse that the marginal cost of bullshit has dropped to near zero.

> Brandolini's law…postulates that the amount of energy needed to refute bullshit is an order of magnitude larger than required to produce it.



