Anecdotally, ChatGPT seems much worse to me than Google for getting correct answers. Like orders of magnitude worse: it will tell me the wrong timezone for a city, that kind of bad. No doubt it will be much better in the future, and they've definitely found product-market fit with the interface, but right now I would not trust it with anything even slightly important to me.
But how can you verify ChatGPT's answers if you don't know what its sources were? E.g. if I google a technical question about HTML5, I can see whether a result is the HTML5 spec, MDN, W3Schools, or a random Medium blog. If I google a medical question, I can see whether I'm on a hospital's website or on Men's Health.
Parent is using ChatGPT as a tutor though, not as a Google search.
I’d expect a tutor to give the right answer, but I wouldn’t expect Google to. ChatGPT is often wrong, which is a problem if you’re trying to learn something and relying on it as your tutor/source of truth.