
People who fake understanding by building an ad hoc internal model to sound like an expert often get a lot of things right. You catch them by asking more complex questions, where you often get extremely alien responses that no sane person who understood would give. And yes, LLMs give such responses all the time, every single one of them. They have memorized a ton of things and have some ad hoc flows to get through simple problems, but they don't seem to really understand much at all.

Humans who merely try to sound like experts make the same kind of alien mistakes that LLMs do, and since we say such humans haven't learned to understand, we can say the same of these models. You don't become an expert by trying to sound like one. These models are trained to sound like experts, so we should expect them to resemble those humans rather than the humans who actually become experts.




Hmmm, I don't agree. They do seem to understand some things.

I asked ChatGPT the other day how fast an object "dropped" from Earth's position with no orbital velocity would be traveling by the time it reached the sun. It brought out the appropriate equations and discussed how to apply them.

(I didn't actually double-check the answer, but the math looked right to me.)
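For what it's worth, the check is just conservation of energy. A minimal sketch in Python, assuming the object starts at rest at 1 AU and "reaching the sun" means the solar surface, ignoring Earth's gravity and everything else:

    # Speed of an object falling from rest at 1 AU to the solar surface,
    # from energy conservation: v = sqrt(2 * GM_sun * (1/r_final - 1/r_initial))
    import math

    GM_SUN = 1.327e20     # standard gravitational parameter of the Sun, m^3/s^2
    R_INITIAL = 1.496e11  # 1 AU in metres (starting at rest, no orbital velocity)
    R_FINAL = 6.96e8      # solar radius in metres ("reaching the sun")

    v = math.sqrt(2 * GM_SUN * (1 / R_FINAL - 1 / R_INITIAL))
    print(f"{v / 1000:.0f} km/s")  # ~616 km/s, just under the Sun's surface escape velocity of ~618 km/s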

It also seems to have a calculation or "analysis" function now, which gets activated when you ask it specific mathematical questions like this. I imagine it works by having the LLM set up a formula, which is then evaluated in a classical way.
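Something like this toy pattern, though that's only my guess, not a description of ChatGPT's actual mechanism:

    # Toy sketch of the guessed pattern: the model emits a formula as text,
    # and ordinary code evaluates it numerically. Pure speculation, not
    # ChatGPT's real implementation.
    import math

    def evaluate(expr: str, values: dict) -> float:
        # Restrict evaluation to math functions plus the supplied variables.
        namespace = {"__builtins__": {}, "sqrt": math.sqrt, "pi": math.pi}
        namespace.update(values)
        return eval(expr, namespace)

    # The formula as the model might write it, with the numbers plugged in classically.
    print(evaluate("sqrt(2 * GM * (1/r_f - 1/r_i))",
                   {"GM": 1.327e20, "r_f": 6.96e8, "r_i": 1.496e11}))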

There are limits on what it can do, just as any human has limits. But ChatGPT can correctly answer more questions like this than the average person off the street could. That seems like understanding to me.



