The notion that GPT "interpolates between the training data" is a widespread misconception. There is no evidence that that's what's going on. GPT seems to be capable of generalizing: at the very least it can mix features of training samples, and it even generalizes to situations it has never seen.
It seems to me your entire argument derives from this. If GPT is not exponential, then the m/p distinction falls apart. And GPT has way too much world-knowledge, IMO, to be storing things in such a costly fashion.
Neural networks learn features, not samples. Layered networks learn features of features (of features of features...). Intelligence works because for many practical tasks, the feature recursion depth of reality is limited. For instance, we can count sheep by throwing a pebble in a bucket for every sheep that enters the pasture, because the concept of "items" generalizes both sheep and pebbles, and the algorithm ensures that sheep and pebbles move as one. So to come up with this idea, you only need enough layers to recognize sheep as items, pebbles as items, those two conceptual assignments as similar, and to notice that when two things are described by similar conceptual assignments in the counting domain, you can use a manual process that represents a count in one domain to validate the other domain. Now I don't think this is literally what our brain is doing when we work out this algorithm; it probably involves more visual imagination and watching the two systems coevolve in our world-model to convince us that the algorithm works. But I also don't think that working this out on purely conceptual grounds needs all that many levels of abstraction/Transformer layers of feature meta-recognition. And once you have that much depth, you get the idea.
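A toy sketch of that pebble trick in Python (my own illustration, nothing from the thread): both sheep and pebbles are treated as generic "items", and keeping them in one-to-one correspondence lets the pebble count stand in for the sheep count.

```python
# Toy illustration of counting-by-correspondence: one pebble goes into the
# bucket per sheep that enters, so the pebble count *is* the sheep count.

def count_with_pebbles(sheep_entering):
    bucket = []                      # the bucket of pebbles
    for _sheep in sheep_entering:
        bucket.append("pebble")      # sheep and pebbles move as one
    return len(bucket)               # a count in one domain validates the other

print(count_with_pebbles(["dolly", "shaun", "bella"]))  # 3
```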
> If GPT is not exponential, then the m/p distinction falls apart.
Yes, I think if you have a system which implements QAWH with a similar complexity to a known intelligent system -- at that point I have no empirical issues. I think, at that point, you have a working system.
We then ask if it is thinking about anything, and I think that would be an open question as to how it's implemented. I don't think the pattern alone would mean the system had intentionality -- but my issue at this stage is the narrower empirical one. Without something like a "tractable complexity class", your system is broken.
> And GPT has way too much world-knowledge, IMO, to be storing things in such a costly fashion.
This is an illusion. Knowledge here is deterministic: to the same question, the same answer. GPT generates answers across runs which are self-contradictory, etc. "The same question" (even literally, or if you'd like, with some rephrasing) is given quite radically different answers.
I think all we have here is evidence of the (already known) tremendous compressibility of text data. We can, in c. 500bn numbers, compress most of the history of anything ever said. With such a databank, a machine can appear to do quite a lot.
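Just to put a rough size on "c. 500bn numbers" (my own back-of-the-envelope arithmetic; the bit-width is an assumption):

```python
# Rough arithmetic on the "c. 500bn numbers" figure above: how much storage
# does that parameter store amount to? The 16-bit width is an assumption.

params = 500e9                      # "c. 500bn numbers"
bytes_per_param = 2                 # assuming 16-bit weights
store_bytes = params * bytes_per_param

print(f"{store_bytes / 1e12:.1f} TB of weights")   # 1.0 TB
```

That terabyte-scale databank is the budget into which "most of the history of anything ever said" gets lossily squeezed.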
This isn't world knowledge... it is a symptom of how we, language users, position related words near each other for the sake of easy comprehension. By doing this, one can compress our text into brute statistical associations which appear to be meaningful.
As much as GitHub's AI is basically just copy/pasting code from GitHub repos, GPT is just copy/pasting sentences from books.
All the code on GitHub, compressed into billions of numbers, and decompressed a little -- that's a "statistical space of tricks and coincidences" so large we cannot, by intuition alone, fathom it. It's what makes these systems useful, but also easy sources of illusion.
We can, by a scientific investigation of these systems as objects of study, come up with trivial hypotheses that expose their fundamentally dumb, coincidental character. There are quite a few papers now which do this; I don't have one to hand.
But you know, investigate a model of this kind yourself: permute the input questions, investigate the answers, and try to invalidate your hypothesis (like a scientist might do)... can you invalidate your hypothesis?
I think with only a little thought you will find it fairly trivial to do so.
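For what it's worth, here is a minimal sketch of the kind of probe being suggested, with query_model standing in for whatever access you have to the model (it is not a real API call):

```python
# Sketch of the "permute the question, compare the answers" probe.
# query_model is a placeholder for your actual model access (an API,
# a local checkpoint, etc.) -- it is not a real library function.

def query_model(prompt: str) -> str:
    raise NotImplementedError("plug in your model access here")

def probe(question: str, paraphrases: list[str]) -> dict[str, str]:
    """Ask the same question several ways and collect the answers.

    If the model really holds the knowledge, the answers should agree;
    radical disagreement across trivial rephrasings supports the
    statistical-association view argued above.
    """
    variants = [question] + paraphrases
    return {v: query_model(v) for v in variants}

# Example:
# answers = probe(
#     "What is the boiling point of water at sea level?",
#     ["At sea level, water boils at what temperature?",
#      "Water's boiling point at sea level is what?"],
# )
# for prompt, answer in answers.items():
#     print(prompt, "->", answer)
```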
If the paper is substantially correct I concede the point. But what I've read of reactions leads me to believe the conclusion is overstated.
Regarding compression vs intelligence, I already believe that intelligence, even human intelligence, is largely a matter of compressing data.
Regarding "knowledge is deterministic", ignoring the fact that it's not even deterministic in humans, so long as GPT can instantiate agents I consider the question of whether it "is" an agent academic. If GPT can operate over W_m and H_n, and I live in W_1 and have H_5, I just need to prompt it with evidence for the world and hidden state. Consider for example, how GAN image generators have a notion of image quality but no inherent desire to "draw good images", so to get quality out you have to give them circumstantial evidence that the artist they are emulating is good, ie. "- Unreal Engine ArtStation Wallpaper HQ 4K."
Also, of course, it's hard to see how DALL-E can create "a chair in the shape of an avocado" by interpolating between training samples, none of which were a chair in the shape of an avocado nor anywhere close. The orthodox view of interpolating between a deep hierarchy of extracted features and meta-features readily explains this feat.