A problem, though, is that it is not binary. There is a whole spectrum of nonsense, and if you are not a specialist it is not easy to gauge the accuracy of a reply. Sometimes by chance you ask about something the model happens to know well, but very often not. That is the worst aspect of it. Students might rely on it in their first year because it worked a couple of times, and then absorb a lot of nonsense mixed in with the truthy-sounding facts LLMs tend to produce.

The main problem is not that they are wrong; it would be simpler if they always were. Given that, recommending that students use them as tutors is really not a good idea, unless what you want is overconfidently wrong students (I mean more than some of them already are). It's not random doomsayers saying this; it's university professors and researchers with advanced knowledge of their fields. They are exactly the people who should be trusted on this kind of thing, more than AI techbros.




We could probably find a middle ground for agreement if we said, "Don't use current-gen LLMs as a tutor in fields where the answer can't be checked easily."

So... advanced math? Maybe not such a good idea, at least for independent study where you don't have access to TAs or profs.

I do think there's a lot of value in the ELI5 sense, though. Someone who spends time asking ChatGPT4 about Galois theory may not come away with the skills to actually pass a math test. But if they pursue the conversation, they will absolutely come away with a good understanding of the fundamentals, even with minimal prior knowledge.

Programming? Absolutely. You were going to test that code anyway, weren't you?
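
To make the "you were going to test it anyway" point concrete, here is a minimal sketch: a hypothetical LLM-suggested helper (the name slugify and its behavior are made up for illustration) together with the kind of unit tests you would run before trusting it. This is what makes code a comparatively safe domain: the check is cheap and mechanical.

    # Hypothetical LLM-suggested helper plus the tests you'd run
    # against it before trusting it.
    import re
    import unittest

    def slugify(text: str) -> str:
        """Lowercase, drop punctuation, and join words with hyphens."""
        words = re.findall(r"[a-z0-9]+", text.lower())
        return "-".join(words)

    class TestSlugify(unittest.TestCase):
        def test_basic(self):
            self.assertEqual(slugify("Hello, World!"), "hello-world")

        def test_collapses_whitespace(self):
            self.assertEqual(slugify("  many   spaces "), "many-spaces")

        def test_empty_input(self):
            self.assertEqual(slugify(""), "")

    if __name__ == "__main__":
        unittest.main()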

Planning and specification stages for a complex, expensive, or long-term project? Not without extreme care.

Generating articles on quantum gravity for Social Text? Hell yeah.


No, I don't support this.

A statement I would support is: "Don't use LLMs for anything where correctness or accuracy matters without carefully checking every statement they make against some more reliable source before relying on it. If you use LLMs for any purpose, make sure you have a good understanding of their limitations and some relevant domain experience, and that you are willing to accept that the output may be wrong in a wide variety of ways, from subtle to total."

There are many uses where accuracy may not matter: loose machine translation to get a basic sense of what topic some text is about; good-enough OCR or text-to-speech to make a keyword index for searching; generation of acceptably buggy code to do some basic data formatting for a non-essential purpose; low-fidelity summarization of long texts you don't have time to read; ... (or, more ethically questionably: machine-generating mediocre advertising copy / routine newspaper stories / professional correspondence / school essays / astroturf propaganda on social media / ...)
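
The "good-enough OCR for keyword search" case, for instance, tolerates noise gracefully: a misrecognized word just costs the odd missed or spurious hit. A rough sketch of the idea (the sample pages, the OCR typo, and the build_index helper are all invented for illustration):

    # Build a crude keyword index from noisy OCR text; occasional
    # misrecognitions only cost the odd missed or spurious search hit.
    from collections import defaultdict

    def build_index(pages: dict[int, str]) -> dict[str, set[int]]:
        """Map each lowercased word to the pages it appears on."""
        index = defaultdict(set)
        for page_number, text in pages.items():
            for word in text.lower().split():
                index[word.strip(".,;:!?")].add(page_number)
        return index

    pages = {1: "Galois theory studies field extensions",
             2: "field extenslons and their symmetries"}  # OCR typo
    index = build_index(pages)
    print(index.get("field"))       # {1, 2}
    print(index.get("extensions"))  # {1} -- the noisy page is simply missed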

But "tutoring naïve students" seems currently like a poor use case. It would be better to spend some time teaching those students to better find and critically examine other information sources, so they can effectively solve their own problems.

Again, it's not only old theorems where LLMs make up nonsense, but also (examples I personally tried) etymologies, native plants, diseases, translations, biographies of moderately well known people, historical events, machines, engineering methods, chemical reactions, software APIs, ...

Other people have complained about LLMs making stuff up about pop culture topics like songs, movies, and sports.

> good understanding of the fundamentals

This does not seem likely in general. But it would be worth doing some formal study.



