As far as I understand, this is true. I like to think about it like this: there is some magic formula f(x)=? that perfectly maps our inputs to our outputs (e.g. image captions to images, or input texts to longer output texts), but we don't know how to find it. So we build a space with an enormous number of dimensions and learn a mapping in that space which is hopefully very close to the magic formula.
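A minimal sketch of that picture (my own toy example, nothing from the thread; the polynomial family and degree are arbitrary choices): pretend sin() is the magic formula we can only sample, and the "space with many dimensions" is just the ten coefficients of a degree-9 polynomial.

```python
import numpy as np

# Toy setup: pretend sin() is the unknown "magic formula" and all we get are
# noisy input/output samples from it.
rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, 200)
y = np.sin(x) + rng.normal(0, 0.05, x.shape)

# The "space" here is the 10 coefficients of a degree-9 polynomial;
# least squares finds the point in that space whose function best matches the samples.
coeffs = np.polyfit(x, y, deg=9)
approx = np.poly1d(coeffs)

test = np.linspace(-np.pi, np.pi, 5)
print(np.round(approx(test), 2))  # should roughly track sin(test)
print(np.round(np.sin(test), 2))
```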
You're right that machine learning is learning to approximate a function: most machine learning systems are trained with stochastic gradient descent, an optimization method which can, in theory, do exactly that (toy sketch at the end of this comment).
The surprise was that people (me, at least!) thought the computation and amount of data required to learn a function like "translate English to French" would be completely impractical to ever realize.
I think it's an open question whether humans work like that, though we probably do.
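To make the SGD point concrete, here's a hand-rolled toy (my own sketch; the network width, learning rate, and target function are arbitrary assumptions, not anything from the thread): a one-hidden-layer tanh network whose weights are nudged by stochastic gradient descent on random minibatches until it approximates sin.

```python
import numpy as np

rng = np.random.default_rng(1)

# Samples from the "unknown" mapping we want to learn: y = sin(x).
X = rng.uniform(-np.pi, np.pi, (1024, 1))
Y = np.sin(X)

# One hidden layer of 32 tanh units; these weights are the dimensions we search over.
W1 = rng.normal(0, 0.5, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)
lr = 0.05

for _ in range(10_000):
    idx = rng.integers(0, len(X), 64)      # the "stochastic" part: a random minibatch
    x, y = X[idx], Y[idx]
    h = np.tanh(x @ W1 + b1)               # forward pass
    pred = h @ W2 + b2
    err = pred - y                         # gradients below are for 0.5 * mean squared error
    dW2 = h.T @ err / len(x); db2 = err.mean(axis=0)
    dh = err @ W2.T * (1 - h ** 2)         # backprop through tanh
    dW1 = x.T @ dh / len(x); db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1         # take a small step downhill
    W2 -= lr * dW2; b2 -= lr * db2

test = np.linspace(-np.pi, np.pi, 5).reshape(-1, 1)
print(np.round((np.tanh(test @ W1 + b1) @ W2 + b2).ravel(), 2))  # learned approximation
print(np.round(np.sin(test).ravel(), 2))                         # ground truth
```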
This is really fascinating. Assuming it is true, it could imply that everything we "learn" is essentially a training process in our brain that stores a new model/function. As humans, we've figured out how to transfer these models between our brains through communication. Maybe it is even possible to "upload" a model to the brain, like Neo learning kung-fu...
Our brains fundamentally work in a similar way, in that there are mappings from inputs to outputs through our senses and our nervous system, and we can literally determine neural circuits in mammalian brains through topological analysis of this magical function![0]
[0] YouTube video: "Neural manifolds - The Geometry of Behaviour" by Artem Kirsanov: https://www.youtube.com/watch?v=QHj9uVmwA_0