
> Anything specialized enough not to be covered by Wikipedia or similar resources [...] is not a good subject for ChatGPT.

Things don't have to be incredibly obscure for ChatGPT to completely flub them (while authoritatively pretending it knows all the answers); they just have to be slightly beyond the most basic details of a common subject as discussed at roughly the undergraduate level. Lexell's theorem, to take my previous example, has been discussed in a wide variety of sources over the past two and a half centuries, including books and papers by several of the most famous mathematicians in history, canonical mid-20th-century undergraduate spherical trigonometry textbooks, and several easy-to-find papers from the past couple of decades, including historical and mathematical surveys of the topic. It just doesn't happen to show up in the training data of reddit comments and github commit messages or whatever, because it isn't covered in intro college courses, so nobody is asking for homework help about it.
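(For anyone who hasn't run into it: Lexell's theorem says that all spherical triangles with a fixed base and a fixed area have their third vertex on a common small circle passing through the points antipodal to the endpoints of the base.)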

If you stick to asking simple standalone questions like "what is Pythagoras's theorem", "what is the most common element in the Earth's atmosphere", "who was the 4th president of the USA", or "what is the word for 'dog' in French", you are fine. But as soon as you start asking questions that require knowledge beyond copy/pasting sections of introductory textbooks, ChatGPT starts making (often significant) errors.

As a different kind of example, I have asked ChatGPT to translate straightforward sentences and gotten back a translation with exactly the opposite of the meaning intended by the original (as verified by asking a native speaker).

The limits of its knowledge and its response style make ChatGPT mostly worthless to me. If something I want to know can be copy/pasted from obvious introductory sources, I can already find it trivially and quickly. And I can't really trust it even for basic routine stuff, because it doesn't link to reliable sources, which makes its claims unnecessarily difficult to verify. Even published work by professionals often contains factual errors, but when you read it you can judge the author's name and reputation, look at any cited sources, compare claims from one source to another, and so on. But if ChatGPT tells you something, you have no idea whether it read it on a conspiracist blog, found it in the canonical survey paper on the topic, or just made it up.

> Go ask it for help [understanding determinants], and you will have a very different experience.

It's going to give you the right basic explanation (more or less copy/pasted from some well-written textbook or website), but if you start asking follow-up questions that get more technically involved, you are likely to hit serious errors within a few hops. Those errors reveal that it doesn't actually understand what a determinant is; it only knows how to selectively regurgitate or paraphrase its training corpus (and it routinely picks the wrong source to paraphrase, or mashes up two unrelated topics).

You can get the same accurate basic explanation by doing a quick search for "determinant" in a few introductory linear algebra textbooks, without much more trouble; the overhead of finding sources is small compared to the effort required to read and think about them.
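To make "basic explanation" concrete: the standard properties of determinants are also the kind of claim you can check for yourself in a couple of minutes rather than taking a chatbot's (or a textbook's) word for it. A rough sketch, assuming Python with numpy installed:

    import numpy as np

    # Check a few of the basic determinant facts any introductory
    # linear algebra text states, numerically rather than on trust.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 3))

    # Multiplicativity: det(AB) = det(A) * det(B)
    assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))

    # Invariance under transposition: det(A^T) = det(A)
    assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))

    # Swapping two rows flips the sign of the determinant
    A_swapped = A[[1, 0, 2], :]
    assert np.isclose(np.linalg.det(A_swapped), -np.linalg.det(A))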



