Hacker News

You have to distinguish between intelligence and knowledge/experience.

Nobody knows everything or has experienced everything, but we do have "general intelligence", which is the ability to reason over whatever we DO know and HAVE experienced in order to make successful plans/predictions around it and actually use this knowledge rather than merely recall it.

Of course some people are more intelligent than others, but nonetheless our brain reflects the nature of our species as generalists who can apply our intelligence to a wide/unlimited number of areas.

There are at least two things fundamentally missing from LLMs that disqualify them from deserving the AGI label.

1) LLMs have extremely limited ability to plan and reason, even over the fixed knowledge (training set) that they have. This is a limitation of the simplistic transformer neural network architecture they are based on, which is just a one-way conveyor belt of processing steps (transformer layers) from input to output. No looping/iteration, no working memory, etc. - they just don't have the machinery to be able to reason/plan in an open-ended way.
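To make the "conveyor belt" point concrete, here is a toy caricature of a decoder-only forward pass. Everything here (the 2-d "embeddings", the stand-in layers, the depth of 3) is made up for illustration and is not any real model's API; the point is only the control flow: each layer runs exactly once, in order, with no loop back and no scratch memory carried between calls.

```python
# Toy sketch of a transformer-style forward pass: a fixed pipeline of layers.
# All names and numbers are illustrative, not a real model.

def embed(token_ids):
    # map each token id to a tiny 2-d "vector"
    return [[float(t), 1.0] for t in token_ids]

def make_layer(scale):
    # stand-in for one transformer block: a fixed transformation
    def layer(x):
        return [[scale * a + b, b] for a, b in x]
    return layer

def forward(token_ids, layers):
    x = embed(token_ids)
    for layer in layers:   # fixed, finite number of steps
        x = layer(x)       # no layer is ever revisited, no iteration-until-done
    return x               # per-position outputs -> next-token logits

layers = [make_layer(0.5) for _ in range(3)]  # depth is fixed when the model is built
out = forward([3, 1, 4], layers)
```

However hard the question, the model spends exactly the same fixed amount of computation per token; there is no way for it to "think longer" inside a single pass.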

2) LLMs can't learn. They are pre-trained and just have a fixed set of knowledge and processing templates. Perhaps we should regard them as having a limited type of intelligence ("crystallized intelligence") over what they do know, but it can't be described as general intelligence when it excludes novel reasoning/planning ("fluid intelligence"), as well as the ability to learn anything new.
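The "frozen weights" claim can be sketched in a few lines. This is entirely schematic (the parameter and the "generate" function are invented for the example), but it captures the relevant property: inference reads the parameters and never writes them, so the model after a thousand queries is bit-for-bit the model before them.

```python
# Schematic illustration of pre-trained, frozen parameters.
# Names are made up; only the read-only behavior matters.

WEIGHTS = {"w": 0.7}   # fixed at (pre-)training time

def generate(prompt):
    # inference: reads WEIGHTS, never updates them
    return sum(ord(c) for c in prompt) * WEIGHTS["w"]

before = dict(WEIGHTS)
for p in ["hello", "teach me something new", "please remember this"]:
    generate(p)

assert WEIGHTS == before   # nothing the model "saw" changed it
```

Anything that looks like learning within a conversation lives only in the context window, and is gone when the context is.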

We will eventually design human-like "general" intelligence (there's nothing magic about it that prevents us from doing so), so LLMs are not as good as it gets. But LLMs (and upcoming enhanced LLMs) may be as good as it gets for a while - AGI may well require a brand-new architecture built around the ability to learn continuously, and that isn't going to happen in the next 5-10 years.




Personally, I think you are wrong about both 1 and 2.

Maybe they cannot reason as well as a programmer or a mathematician, but they can do so better than a LOT of humans I know.

Also, they can learn; we'd just have to feed them data to do so, and we don't… we just don't.



