
Perhaps you should read it again, because you seem to have misunderstood the message. It doesn't claim that LLMs aren't beneficial - it agrees that they're useful in certain situations, but argues that they're also over-hyped and misrepresented by their creators.

> You might be surprised to learn that I actually think LLMs have the potential to be not only fun but genuinely useful. [...]

> I’ve heard from several students that LLMs have been really useful to them in that “where the !^%8 do I even start?!” phase of learning a new language, framework, or tool. Documentation frequently fails to share common idioms; discovering the right idiom in the current context is often difficult. And “What’s a pattern that fits here, never mind the correctness of the details?” is a great question for an LLM.

> Alas, the AI hype is around LLMs *replacing* thought, not *prompting* it.

This matches my experience exactly. If I already know what needs to be done, LLMs waste my time on average. But when I'm trying to learn about a new topic, they've been a good way to get oriented: a high-level overview gives me a jump start so I can continue learning on my own.

They tend to shine a light on the "unknown unknowns" (e.g. common terminology and concepts), but they're poor at converting "known unknowns" into real knowledge, since they lack any deep understanding themselves.

Are LLMs useful? Yes. Do they live up to expectations set by their creators? Not even close.
