
What would be a good benchmark? In particular, is there an accomplishment that would be: (i) impressive, and clearly a major leap beyond what we have now in a way that GPT-3 isn't, but (ii) not yet full-blown AGI?



How about driving a car without killing people in ways a human driver never would (e.g., mistaking a sideways semi truck for open sky)?

That's a valuable benchmark that loads of companies are aiming for, but it's not full-blown AGI.


Maybe nothing? “Search engines through training data” are already the state of the art, and they have well-documented and widely mocked failure cases.

Unless someone comes along with a more clever mechanism to pretend it’s learning like humans, you’re not looking at a path towards AGI in my opinion.


> you’re not looking at a path towards AGI in my opinion

What I'm trying (and apparently failing?) to ask is, what would a step on the path towards AGI look like? What could an AI accomplish that would make you say "GPT-3 and such were merely search engines through training data, but this is clearly a step in the right direction"?


> What I'm trying (and apparently failing?) to ask is, what would a step on the path towards AGI look like?

That's an honest and great question. My personal answer would be to have a program do something it was never trained to do and that could never have existed in its corpus. And then have it do another thing it was never trained to do, and so on.

If GPT-3 could, say, 1) never receive any more input data or training, then 2) read an instruction manual for a novel game that shows up a few years from now (so it can't be replicated from the corpus), 3) play that game, and 4) improve at that game, that would be "general" imo. It would mean there's something fundamental about its understanding of knowledge, because it could do new things that would have been impossible for it to mimic.

The more things such a model could do, even crummily, the more "general" its intelligence would be. If it could get better at games, trade stocks and make money, fly a drone, etc., even in a mediocre way, that would be far more impressive to me than a program that could do any one of those things individually well.
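For concreteness, here's a minimal sketch of what that benchmark harness might look like in Python. Every name here is hypothetical, and the toy "game" and "model" exist only so the snippet runs; the point is the shape of the test: a frozen model, a manual it has never seen, and a check for whether its scores improve across episodes with no retraining at all.

    # Hypothetical sketch of the "novel game" benchmark described above.
    # "act" stands in for a frozen model: it gets the manual and an
    # observation, and returns an action. Its weights are never updated,
    # so any improvement across episodes has to come from whatever it
    # extracts from the manual and from play, not from gradient updates.
    import random
    from typing import Callable

    def evaluate_frozen_model(act: Callable[[str, str], str],
                              manual: str,
                              play_episode: Callable[[Callable[[str], str]], float],
                              episodes: int = 100) -> list[float]:
        """Return per-episode scores for a frozen model on a novel game."""
        return [play_episode(lambda obs: act(manual, obs))
                for _ in range(episodes)]

    # Toy stand-ins so this runs: a "game" that rewards the action "left",
    # and a "model" that guesses randomly (and therefore never improves).
    def toy_episode(policy: Callable[[str], str]) -> float:
        return sum(policy(f"step {i}") == "left" for i in range(10)) / 10

    def toy_model(manual: str, observation: str) -> str:
        return random.choice(["left", "right"])

    scores = evaluate_frozen_model(toy_model, "Press left to score.", toy_episode)
    early, late = scores[:10], scores[-10:]
    print(f"early mean {sum(early)/10:.2f}, late mean {sum(late)/10:.2f}")
    # The "general" test: do later episodes beat early ones without retraining?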


If a program can do what you described, would it be considered human-level AI yet? Or would there still be some other missing capabilities? This is an honest question.

I intentionally don’t use the term AGI here because human intelligence may not be that general.


> human intelligence may not be that general

Humans have more of an ability to generalize (i.e., learn and then apply abstractions) than anything else we have available to compare to.

> would it be considered a human-level AI yet

Not necessarily human level, but certainly general.

Dogs don't appear to attain a human level of intelligence but they do seem to be capable of rudimentary reasoning about specific topics. Primates are able to learn a limited subset of sign language; they also seem to be capable of basic political maneuvering. Orca whales exhibit complex cultural behaviors and employ highly coordinated teamwork when hunting.

None of those examples appear (to me at least) to be anywhere near human level, but they all (to me) appear to exhibit at least some ability to generalize.


From grandparent post:

> 2) read an instruction manual for a novel game that shows up a few years from now (so it can't be replicated from the corpus), and 3) plays that game, and 4) improves at that game, that would be "general" imo.

I would say that learning a new simple language, basic political maneuvering, and coordinated teamwork might be required to play games well in general, if we don't exclude any particular genre of games.

Complex cultural behaviors might not be required to play most games, however.

I think human intelligence is actually not very 'general' because most humans have trouble learning and understanding certain things well. Examples include general relativity, quantum mechanics, and, some would argue, even college-level "elementary mathematics".


Give it an algebra book and ask it to solve the exercises at the end of the chapter. If it has no idea how to solve a particular exercise, it should say “give me a hand!” and be able to understand a hint. How does that sound?
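To make the interaction shape concrete (attempt, admit being stuck, accept a hint, retry), here's a hypothetical sketch; solve() and check() are placeholders standing in for the model and the answer key, not any real API, and the toy versions only exist so it runs:

    # Hypothetical sketch of the "solve the exercises, ask for a hint" loop.
    from typing import Callable, Optional

    def attempt_exercise(solve: Callable[[str], Optional[str]],
                         check: Callable[[str, str], bool],
                         exercise: str,
                         hints: list[str]) -> bool:
        """Try the exercise; if the solver has no idea, feed it hints one at a time."""
        context = exercise
        for hint in [None] + hints:
            if hint is not None:
                context += "\nHint: " + hint      # the "give me a hand!" path
            answer = solve(context)               # None means "no idea"
            if answer is not None and check(exercise, answer):
                return True
        return False

    # Toy stand-ins so this runs: the "model" only gets there once the hint
    # spells out the rearrangement.
    def toy_solver(context: str) -> Optional[str]:
        return "x = 4" if "subtract" in context else None

    def toy_checker(exercise: str, answer: str) -> bool:
        return answer == "x = 4"

    print(attempt_exercise(toy_solver, toy_checker,
                           "Solve x + 3 = 7",
                           ["subtract 3 from both sides"]))  # True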


That makes me think we are closer rather than farther away, because all that would be needed is for the model to recognize the problem space of a question:

“Oh, you're asking a math question. You know a human doesn't calculate math in the language-processing sections of their brain, right? Neither do I... here is your answer.”

If we allowed the response to delegate commands to other systems, it could start to achieve some crazy stuff.
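As a toy illustration of that delegation idea: a keyword "classifier" and a tiny arithmetic evaluator stand in here for whatever specialized systems the model would actually hand off to. All of the names are made up; in the framing above, the language model does the recognizing and the delegated system does the actual work.

    # Hypothetical sketch of "recognize the problem space and delegate".
    import ast
    import operator

    _OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
            ast.Mult: operator.mul, ast.Div: operator.truediv}

    def calc(expr: str) -> float:
        """Evaluate a plain arithmetic expression without using eval()."""
        def walk(node):
            if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
                return _OPS[type(node.op)](walk(node.left), walk(node.right))
            if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                return node.value
            raise ValueError("unsupported expression")
        return walk(ast.parse(expr, mode="eval").body)

    def respond(question: str) -> str:
        # "Oh, you're asking a math question" -> hand it to the calculator
        # instead of having the language model do arithmetic in-band.
        if any(op in question for op in "+-*/"):
            expr = question.rstrip("?").split()[-1]
            return f"delegated to calculator: {expr} = {calc(expr)}"
        return "answered by the language model directly"

    print(respond("What is 12*7+5?"))    # delegated to calculator: 12*7+5 = 89
    print(respond("Who wrote Hamlet?"))  # answered by the language model directly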



