
ChatGPT-4 is amazing. I still can't comprehend that they built this.

However, if you work with it a lot, you realize it doesn't understand reality; it predicts language.

It reminds me of YouTube videos of chess where humans start to figure out chess bots and the tricks they use to draw out time.

Understanding language is super impressive, but a far cry from general intelligence. Maybe from here OpenAI can build further, but I have a hard time believing that this will be a foundation for general intelligence.

A lot of animals have a form of general intelligence but can't do math or language, yet in some ways they are more capable than the latest self-driving cars.




Is it surprising that a system that was exclusively trained on emitted language will only do well on language?

I don't see how I can extrapolate to anything beyond that. I certainly can't extrapolate that it would be unable to learn other tasks.


Everything is defined in language. As a system learns from a corpus of texts, it also builds reasoning to a large extent. For example, GPT-4 can guide you through solving a problem like building a flying and crawling drone for inspecting an attic, or watering plants. So this goes beyond language, unless I missed your point.


Karpathy explains it far better than I did in his "State of GPT" talk about a month ago: https://www.youtube.com/watch?v=bZQun8Y4L2A

Yes, language can do that, but books and other texts, on average, leave out a lot of steps that humans have learned to do naturally. The fact that models can be prompted to do those steps, and that their answers improve when they do, indicates that there is definitely some level of "reasoning" there.

But it has weird limitations in its current forms. I expect that to improve pretty quickly, tbh.
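A hypothetical illustration of what "prompting them to do the steps" looks like in practice: the trick the comment alludes to is chain-of-thought prompting, where an instruction to reason step by step is appended to the question. The question text and `make_prompt` helper here are made up for the sketch.

```python
# Sketch: the same question asked two ways. Appending an instruction to
# reason step by step is chain-of-thought prompting; without it, models
# often jump straight to a (frequently wrong) answer.

def make_prompt(question, chain_of_thought=False):
    prompt = f"Q: {question}\nA:"
    if chain_of_thought:
        # Nudge the model to emit intermediate reasoning steps.
        prompt += " Let's think step by step."
    return prompt

q = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
     "more than the ball. How much does the ball cost?")
print(make_prompt(q))
print(make_prompt(q, chain_of_thought=True))
```

The second prompt tends to elicit the intermediate arithmetic, and with it the correct answer ($0.05 rather than the intuitive $0.10).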


Not everything. For example, how did Wagner create his music? The ability to appreciate beauty stands above reasoning. But your point largely stands, as language processing alone allows one to build a "linear thinking machine" with superhuman reasoning ability. It won't be able to make music or art, but it will create a spaceship and take science to the next level. As a side effect, such an AI will erase all art, as art will look like noise in its "mind".


This is the problem here.

A claim is made: "GPT isn't general" or "GPT isn't intelligent" or whatever, but a testable definition is not given. That is, what is generality to you, and what bar needs to be passed? Or what is intelligence, and what competence level needs to be surpassed?

Without clearly stating those things, intelligence may well be anything or any goal, and your goalposts could shift to anywhere.

I'm just going to tell you the facts of the state we're in right now.

Any testable definition of AGI that GPT-4 fails would also be failed by a significant chunk of the human population.


> Any testable definition of AGI that GPT-4 fails would also be failed by a significant chunk of the human population.

GPT cannot function in any environment on its own. Except in response to a direct instruction it cannot plan, it cannot take action, it cannot learn, it cannot adjust to changing circumstance. It cannot acquire or process energy, it has no intentionality, it has no purpose beyond generating new text. It's not intelligent in any sense, let alone generally. It's an incredibly capable tool.

Here's a testable definition of AGI: any combination of software and hardware that can function independently of human supervision and maintenance, in response to circumstances that have not been preprogrammed.

That's it. Zero trial learning and function. All adult organisms can do it, no AI can. Artificial general intelligence that's actually useful would need a bunch of additional functionality of course, there I'll agree with you.


>GPT cannot function in any environment on its own. Except in response to a direct instruction it cannot plan, it cannot take action, it cannot learn, it cannot adjust to changing circumstance.

Sure it can. It's not the default behavior, sure, but it's fairly trivial to set up, just expensive. GPT-4 can loop on its "thoughts" and reflect, and it can take actions in the real world.

https://tidybot.cs.princeton.edu/
https://arxiv.org/abs/2304.03442
https://arxiv.org/abs/2210.03629
https://arxiv.org/abs/2303.11366
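The linked papers describe variants of a reason-act loop. Here is a heavily simplified, hypothetical sketch of the pattern: the model proposes an action, an external tool executes it, and the observation is fed back into the prompt until the model emits a final answer. The model and the tool are stubs here; a real setup would call an LLM API and real tools.

```python
# Minimal ReAct-style agent loop (sketch; model and tool are stubs).

def fake_llm(prompt):
    # Stub model: first "decide" to look something up, then answer once
    # an observation appears in the prompt. A real loop would call an LLM.
    if "Observation:" not in prompt:
        return "Action: lookup[attic drone]"
    return "Final Answer: use a small quadcopter with prop guards"

def lookup(query):
    # Stub tool: a real agent might call a search API or a robot here.
    return f"results for '{query}'"

def run_agent(task, max_steps=5):
    prompt = f"Task: {task}\n"
    for _ in range(max_steps):
        reply = fake_llm(prompt)
        if reply.startswith("Final Answer:"):
            return reply[len("Final Answer:"):].strip()
        if reply.startswith("Action: lookup["):
            # Execute the tool call and append the observation, so the
            # next model call can condition on it.
            query = reply[len("Action: lookup["):-1]
            prompt += f"{reply}\nObservation: {lookup(query)}\n"
    return None

print(run_agent("inspect an attic"))
```

The point of contention upthread is whether wrapping the model in such a loop counts as the system itself planning and acting; the loop structure is external, but the decisions at each step come from the model.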


> Here's a testable definition of AGI - any combination of software and hardware that can function independently of human supervision and maintenance, in response to circumstance that have not been preprogrammed.

Not sure how I feel about a significant portion of the population already not meeting that threshold. The elderly? The disabled? I think you're proving the parent's point.


> Not sure how I feel about a significant portion of the population already not meeting that threshold. The elderly? The disabled?

Anyone? How did the saying go? The minimum viable unit of reproduction for Homo sapiens is a village.

None of us passes the bar, if the test excludes "supervision and maintenance" by other people - and not just our peers, but our parents, and their parents, and their parents, ... all the way back until we reach some self-sufficient-ish animal life form. That's, AFAIR, way below primates on the evolutionary scale.

But that test is bad also for other reasons, including but not limited to:

- It confuses intelligence with survival. Survival is not intelligence; it is a highly likely[0] consequence of it[1].

- It underplays the non-random aspect of it. Phrased like GP phrased it, a rock can pass this test. The power of intelligence isn't in passively enduring novel circumstances - it's in actively navigating the world to get more of what you want. Including changing the world itself.

--

[0] - A process optimizing for just about any goal will find its own survival to be beneficial towards achieving that goal.

[1] - If you extend it from human-like intelligence to general optimization, then all life is an example of this: survival is a consequence of natural selection - the most basic form of optimization, that arises when you get a self-replicating something that replicates with some variability into similar self-replicating somethings, and when that variability affects survival.


Define "reality."

Human experience consists of more than language, for sure. There are also visual, auditory, and tactile components. But GPT-4 can also take visual inputs, from my understanding, and it shouldn't be difficult to add the other senses.

Google's experiments with PaLM-E show that LLMs are also useful in an embodied context, such as helping robots solve problems in real time.




