And it has some usefulness: essentially, it's an alternative to reading through many pages and posts on StackOverflow and Wikipedia.
But it doesn't know anything. It has no clue whatsoever whether it is correct or incorrect. It only makes guesses. The only reason there is useful output is that the output is a transformation of useful input.
There is no logic. There is no way to introduce logic. There is no way to filter it through logic.
If some coherent mixture of the model's training data already contains the answer to your question - literary or code examples, definitions, and so on - then the output will be useful. Otherwise it's just wrong, and sometimes unexpectedly so.
The output of ChatGPT (or any other ML-based NLP system) can only be as correct or knowledgeable as the data it is trained on; and in practice it will never even match that level, because it only mixes words by statistical popularity, never by logical relationship.
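To make that last point concrete, here is a deliberately tiny sketch in Python of what "mixing words by popularity" means. It is not ChatGPT's actual architecture (which uses a far larger neural model), and the corpus is made up; the point is only that each next word is a weighted guess based on how often it followed the previous one, and no step ever checks whether the result is true or logical.

```python
import random
from collections import Counter, defaultdict

# Made-up toy corpus, purely for illustration.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def next_word(prev):
    """Sample the next word weighted by observed frequency only."""
    counts = followers[prev]
    words = list(counts.keys())
    weights = list(counts.values())
    return random.choices(words, weights=weights)[0]

# Generate a short "sentence": every step is a statistical guess,
# with no notion of truth or logic anywhere in the process.
word = "the"
output = [word]
for _ in range(8):
    word = next_word(word)
    output.append(word)
    if word == ".":
        break

print(" ".join(output))
```

The output always looks locally plausible because it is stitched from patterns that actually occurred in the training text; whether the resulting sentence is correct is simply never evaluated.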