I recently asked ChatGPT (GPT-3.5) to explain the grammar of a Japanese sentence that uses an obscure informal verb form, and it gave an answer that was obviously wrong. I just tried again with GPT-4, and it gave an answer that was more plausible but almost certainly still wrong, since it contradicts the much more convincing Stack Exchange answer I had previously found via Google. (I only asked ChatGPT the question as a test.)
So, it can be helpful for basic questions, but there are limits: the harder the question, the more likely ChatGPT is to get the answer wrong, and the harder it is for the user to notice when it's done so.