Because it's a bad analogy for what their capabilities actually are. I've asked it to write fairly complex programs for me, and I'm blown away by what it's capable of.
It's reasonable to assume, especially when "emergent behaviors" only show up after tons and tons of training and parameters (see Scaling_laws_and_emergent_abilities), that in order to actually get good at "autocomplete", the model has to learn a very deep relationship between the concepts expressed in the words.
I mean, you say "If they're trained on a lot, they can parrot a lot but not anything more", but that's really not correct. They're not just playing back complete phrases they've seen before, which is what a real parrot actually does.
> "It's reasonable to assume ... that in order to actually get good at "autocomplete", that the model has to learn a very deep relationship between the concepts that are expressed in the words."
While a "reasonable assumption", it's the kind of "reasonable assumption" that a diligent scientist would formulate hypotheses on and perform experiments to confirm before building a XX-billion dollar research programme that hinges on that assumption. But unfortunately for the rest of us who have to watch them complicate access to a useful technology, many high-profile AI researchers are not diligent scientists building a corpus of knowledge but impassioned alchemists insisting that they're about to turn lead to gold.