Also, it's not a very good poem. And its definitions aren't entirely correct.
Which is a huge problem, because you cannot trust anything ChatGPT produces. It's basically an automated Wikipedia with an Eliza N.0 front end. Garbage in gets you garbage out.
We project intelligence onto anything that appears to use words fluently, because our own training sets suggest that's a reliable inference.
But it's an illusion, just as Eliza was, for the reasons you state.
Eliza had no concept of anything much, and ChatGPT has no concept of meaning or correctness.