In some cases it would be impossible, since the model can sometimes output exactly what a human has written, or something that sounds 100% like what someone would write.
But if you allow some false negatives, i.e. accept that the detector will sometimes fail to flag a bot as a bot, I think that could work? Though I feel like the technology to write fake text is inevitably going to outpace the ability to detect it.
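To make that tradeoff concrete, here's a minimal sketch in Python. The scores and threshold are entirely made up (a real detector would get them from a trained classifier), but it shows the idea: raising the threshold means fewer humans are wrongly flagged, at the cost of more bots slipping through as false negatives.

```python
def classify(scores, threshold):
    """Label each detector score as 'bot' or 'human' given a decision threshold."""
    return ["bot" if score >= threshold else "human" for score in scores]

# Hypothetical "bot-likeness" scores from an imaginary detector.
bot_scores = [0.95, 0.60, 0.45]    # texts actually written by bots
human_scores = [0.55, 0.20, 0.05]  # texts actually written by humans

for threshold in (0.5, 0.9):
    false_negatives = classify(bot_scores, threshold).count("human")  # bots that pass as human
    false_positives = classify(human_scores, threshold).count("bot")  # humans wrongly flagged
    print(f"threshold={threshold}: {false_negatives} false negatives, "
          f"{false_positives} false positives")
```

With the loose threshold (0.5) one bot passes and one human gets wrongly accused; with the strict threshold (0.9) no human is flagged, but two of the three bots go undetected. That's the "allow some false negatives" bargain in miniature.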