> Any testable definition of AGI that GPT-4 fails would also be failed by a significant chunk of the human population.
GPT cannot function in any environment on its own. Except in response to a direct instruction it cannot plan, it cannot take action, it cannot learn, it cannot adjust to changing circumstance. It cannot acquire or process energy, it has no intentionality, it has no purpose beyond generating new text. It's not intelligent in any sense, let alone generally. It's an incredibly capable tool.
Here's a testable definition of AGI - any combination of software and hardware that can function independently of human supervision and maintenance, in response to circumstances that have not been preprogrammed.
That's it. Zero-trial learning and functioning. All adult organisms can do it; no AI can. Artificial general intelligence that's actually useful would need a bunch of additional functionality of course - there I'll agree with you.
>GPT cannot function in any environment on its own. Except in response to a direct instruction it cannot plan, it cannot take action, it cannot learn, it cannot adjust to changing circumstance.
Sure it can. It's not default behavior, sure, but it's fairly trivial to set up, just expensive. GPT-4 can loop on its "thoughts" and reflect, and it can take actions in the real world.
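For illustration, a minimal sketch of that reflect-and-act loop, assuming the openai Python SDK (v1); the act() stub, the prompts, and the fixed step budget are hypothetical placeholders, not any particular agent framework:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def act(action: str) -> str:
    """Hypothetical stand-in for real-world side effects (shell, HTTP, etc.)."""
    return f"(observation after executing: {action})"

messages = [
    {"role": "system",
     "content": "Reflect on progress so far, then propose one concrete next action."},
    {"role": "user", "content": "Goal: summarize today's HN front page."},
]

for step in range(3):  # fixed budget; a real agent would loop until a stop condition
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    thought = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": thought})
    # feed the result of the action back in, closing the reflect/act loop
    messages.append({"role": "user", "content": act(thought)})
```

Swap the act() stub for a shell, a browser, or an HTTP client and the model is taking real actions; the loop itself is the whole trick.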
> Here's a testable definition of AGI - any combination of software and hardware that can function independently of human supervision and maintenance, in response to circumstances that have not been preprogrammed.
Not sure how I feel about a significant portion of the population already not meeting that threshold. The elderly? The disabled? I think you're proving the parent's point.
> Not sure how I feel about a significant portion of the population already not meeting that threshold. The elderly? The disabled?
Anyone? How does the saying go? The minimum viable unit of reproduction for Homo sapiens is a village.
None of us passes the bar if the test excludes "supervision and maintenance" by other people - and not just our peers, but our parents, and their parents, and their parents, ... all the way back until we reach some self-sufficient-ish animal life form. That's, AFAIR, way below primates on the evolutionary scale.
But that test is also bad for other reasons, including but not limited to:
- It confuses intelligence with survival. Survival is not intelligence; it is a highly likely[0] consequence of it[1].
- It underplays the active, non-random aspect of intelligence. Phrased the way GP phrased it, a rock could pass this test. The power of intelligence isn't in passively enduring novel circumstances - it's in actively navigating the world to get more of what you want, including changing the world itself.
--
[0] - A process optimizing for just about any goal will find its own survival to be beneficial towards achieving that goal.
[1] - If you extend this from human-like intelligence to general optimization, then all life is an example: survival is a consequence of natural selection - the most basic form of optimization, which arises whenever you get a self-replicating something that replicates, with some variability, into similar self-replicating somethings, and that variability affects survival.
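To make [1] concrete, here's a toy sketch (my own construction, not a reference implementation): bit-string replicators copy themselves with mutation, and differential survival alone drives mean fitness up, with no explicit objective anywhere in the loop.

```python
import random

GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 20, 50, 100, 0.05

def fitness(genome):
    # survival odds depend on the genome; here, simply the fraction of 1-bits
    return sum(genome) / len(genome)

# random initial population of bit-string "replicators"
pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # replication with variability: each replicator copies itself imperfectly
    offspring = [[bit ^ (random.random() < MUT_RATE) for bit in g] for g in pop]
    # variability affects survival: only the fittest copies persist
    pop = sorted(pop + offspring, key=fitness, reverse=True)[:POP_SIZE]

print("mean fitness:", sum(fitness(g) for g in pop) / POP_SIZE)
```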