[flagged]

The problem is that for the vast majority of uses, LLM output is not revised or edited, and quite often I'm convinced the output wasn't even fully read.


I assumed FrustratedMonky's comment was satirical, given that it appears to have been written by an LLM and starts with "but, but, but", which is how you might represent someone you disagree with presenting their argument.


I thought so too, but the rest of the comment is worded quite reasonably, so I decided not to interpret it as hyperbole or irony.


"quite reasonably"

The joke is, it was GPT-4o. I didn't phrase it to sound like an LLM; I asked GPT to defend itself and used its text unedited, except for adding the "but but but".

There is a lot of text out there that people can no longer tell was written by an LLM. The models are getting better. That their output is no longer obvious is the scary part.


Ah, I've never really been able to discern it. There is obviously LLM-generated text, but it's harder to tell the difference in short samples, and with some prompting to get the model to write in a particular style, it's been easy to make LLM output non-obvious for a while now.



