I think this is unnecessarily credulous about what is really going on with ChatGPT. It is not "applying the concepts of a poem" or checking meter and verse; it is generating text to fit an (admittedly very complicated) function that maximizes the statistical likelihood of each token given the preceding text. One example is its use of rhyming words, despite having no concept of what words sound like, or what it is even like to hear a sound. It selects those words because, wherever the word "poem" appeared in its training data, it was often followed by lines ending in tokens that happen to cluster into rhyming pairs.
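To make that concrete, here's a toy sketch in Python. This is a simple bigram counter with a made-up corpus, nothing like GPT's actual architecture, but it illustrates the same core move: pick the next word purely by how often it followed the previous one in training text, with no notion of sound anywhere in the system.

```python
import random
from collections import Counter, defaultdict

# Toy training corpus (invented for illustration).
corpus = "roses are red violets are blue the poem is true".split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to observed frequency."""
    counts = following[prev]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# "blue" can follow "are" only because it did so in the data.
# The model has no idea that "blue" rhymes with anything.
print(next_word("are"))
```

The model "rhymes" only if rhyming lines were statistically common in its data; swap in a corpus of blank verse and the rhyme disappears, with nothing in the code changing.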
Human cognition is leagues different from this, as our symbolic representations are grounded in the world we occupy. A word is a representation of an imaginable sound as well as a concept. And beyond this, human intelligence consists not only of pattern-matching and replication but of pattern-breaking, theory of mind, and maybe most importantly a one-to-one engagement with the world. What seems clear is that the robot was trained to recognize a certain pattern of pixels from a camera input, but neither the robot nor ChatGPT has any conception of what a "threat" entails, the stakes at hand, or the common-sense frame of reference to distinguish innocuous observed behaviors from harmful ones. This allows a bunch of goofy grunts to easily best high-speed processors and fancy algorithms by identifying the gap between the model's symbolic representations and the actual world in which it's operating.
Also, it's not a very good poem. And its definitions aren't entirely correct.
Which is a huge problem, because you cannot trust anything ChatGPT produces. It's basically an automated Wikipedia with an Eliza N.0 front end. Garbage in gets you garbage out.
We project intelligence whenever something appears to use words in a certain way, because our own training data suggests that's a reliable inference.
But it's an illusion, just as Eliza was. For the reasons you state.
Eliza had no concept of anything much, and ChatGPT has no concept of meaning or correctness.