Hacker News

In some cases it would be impossible, since a model can output exactly what a human wrote, or something that reads 100% like what someone would write.

But if you allow some false negatives (occasionally letting a bot pass as human), I think detection could work? Still, I feel like the technology to generate fake text is inevitably going to outpace the ability to detect it.




If generator output is truly indistinguishable from human output, then who cares? We've won.

It reminds me of this xkcd: https://xkcd.com/810/

> But I feel like the technology to write fake text is inevitably going to outpace the ability to detect it.

Counterintuitively, this isn't always true! For example, spam detectors have outpaced spam generation for decades now.
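The spam analogy can be made concrete with a toy naive Bayes text classifier. This is a minimal sketch with an invented two-line corpus, not any real spam system's model; real filters use far richer features, but the statistical idea is the same:

```python
from collections import Counter
import math

# Tiny invented corpus, purely for illustration.
spam = ["win money now", "free money offer", "claim your free prize"]
ham = ["meeting at noon", "lunch tomorrow", "see you at the meeting"]

def train(docs):
    counts = Counter()
    for d in docs:
        counts.update(d.split())
    return counts

spam_counts, ham_counts = train(spam), train(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_odds(text):
    # Log of P(spam|text) / P(ham|text) under a naive Bayes model
    # with Laplace smoothing and equal class priors.
    score = 0.0
    for w in text.split():
        p_spam = (spam_counts[w] + 1) / (sum(spam_counts.values()) + len(vocab))
        p_ham = (ham_counts[w] + 1) / (sum(ham_counts.values()) + len(vocab))
        score += math.log(p_spam / p_ham)
    return score

print(log_odds("free money"))       # positive: spam-like
print(log_odds("meeting tomorrow")) # negative: ham-like
```

Thresholding the score is where the false-negative trade-off mentioned above lives: raise the threshold and more spam slips through, but fewer real messages get flagged.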



