The patterns picked up by the training don't seem to offer much more variation than a simple Markov chain. The author finds the generated texts conversation-like because that's what they're looking for. But they look just as random as texts produced by simply picking, from the training set, a random word that follows the current word.
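For reference, the baseline being described, "pick a random next word that followed the current word in the training set", is just a first-order word-level Markov chain. A minimal sketch (the corpus and function names are mine, purely illustrative):

```python
import random
from collections import defaultdict

def build_chain(words):
    """Map each word to the list of words that follow it in the corpus.
    Duplicates are kept on purpose: a follower that occurs twice is
    twice as likely to be chosen, which is exactly the Markov property."""
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=10, seed=None):
    """Walk the chain from `start`, sampling a random follower each step."""
    rng = random.Random(seed)
    word = start
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break  # dead end: the word only appeared at the corpus's end
        word = rng.choice(followers)
        out.append(word)
    return " ".join(out)
```

Anything the fancier model produces should look measurably less random than the output of `generate`; if it doesn't, the training hasn't bought much.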
Am I missing something? This reads like gobbledygook. I have a sneaking suspicion that this very post was made to seem legitimate and credible, when I don't believe it actually is. My mind is exploding a little bit. I am confused. Am I? Hmm.
This is a computer pretending to be a human pretending to be a computer pretending to be a human.
Am I just being silly? I assure you, I am not!
I am looking at a computer screen at this very moment. I do not see any humans in front of me. I am at home, it is past 3AM, and even my dogs have gone to bed by now.
(Why am I up so late? Debugging, of course!)
Everything I see on this screen right now comes from a computer. Any pretense past that is beyond my knowledge.
So, whatever I see, it's a computer pretending to be...
Another idea: build a computer that is good at telling whether a chat user is a human or a computer (a sort of Turing Test judge bot). Then use it as a fitness function to evolve a chatbot.
That's also what I don't like about the Turing Test: the core trait the test rewards is deceit.