> I also haven’t seen anybody claiming the recent breakthroughs are AGI.
If you traveled back 50 years and told people that in the future a computer could ace almost any exam given to a high school student, most of them would consider that a form of AGI.
Now, the goalpost has shifted to “It’s only AGI if it’s more intelligent than the totality of humans”.
If you haven’t heard anyone claim that we’ve made advances in AGI, you’re hearing it here first: I think GPT3+ is a significant advancement in humanity’s attempts to create AGI.
> If you traveled back 50 years and told people that in the future a computer could ace almost any exam given to a high school student, most of them would consider that a form of AGI.
The problem is that these sorts of things were thought to require some understanding of general intelligence, when in practice you can solve them pretty well with algorithms that clearly aren't intelligent and weren't built with any understanding of intelligence. Like, if you traveled back 100 years and told people that in the future a computer could beat any grandmaster at chess, they might consider that a form of AGI too. But we know with hindsight that it isn't: playing chess doesn't require intelligence, just chess prowess. That's not to say that GPT4 or whatever isn't a step towards intelligence, but it's ludicrous to say that it's a significant advancement towards that goal.
That's another way to state the same thing, actually.
One can adopt a static definition of "general intelligence" from a point in history and use it consistently. In this case, GPT3+ is a leap in humanity's quest for AGI.
One can also adopt a dynamic definition of "general intelligence" as you described. In this case the equivalent statement is that, in hindsight, GPT3+ shows that language ability is not "AGI" but "merely" transformer models fed with lots of data. (And then the endpoint of humanity's quest would be to discover that nothing is "AGI" at all, since we'd have figured it all out!)
The fact that we see things differently in hindsight is already strong evidence that things have progressed significantly. It proves that we learned something we didn't know or expect before. I know that day to day this "feels" like every other day you've experienced, but let's look at the big picture more rationally here.