That's a pretty big claim. One could argue that the topology of the brain is a prior, analogous to the architecture of a neural net. But given that we have essentially no idea how learning happens in the brain at a large scale, you can't really say.
I believe there's a reasonable (if unprovable) assumption that any game humans usually play - because its ruleset is learnable and interesting for humans - implicitly relies on priors of human brains and behavior.
The space of possible games is huge (infinite?), but only a tiny subset of them could reasonably become popular games for humans.
E.g. it's not an arbitrary coincidence that the scoring rules for every grid intersection in go are the same (they could, in principle, vary in an arbitrary pattern); that uniformity keeps the ruleset small enough for humans to learn.
It's not an arbitrary coincidence that playing go involves pattern recognition on some level, since that's what we're good at and find interesting in many games.
It's not an arbitrary coincidence that in a Mario game the sprite eventually falls back down after a jump; that's reusing priors from real-world physics.
I don't remember the name, but I definitely saw attempts to build a general AI designed to solve 50 different games. There was one long learning phase in which the AI learned mechanics common to all 50 games, and then a much shorter learning phase specific to a single game. A similar attempt was made with a Minecraft bot. First it just learned how to live and interact in a vanilla world. Then it was fed Twitch PvP livestreams, and finally it was placed on a PvP server. It didn't perform super well, but watching the livestreams was definitely more efficient than learning combat from scratch.
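The two-phase scheme described above can be sketched in a few lines. This is a toy illustration only, not that system's actual code: all names, parameter structures, and the "gradient steps" are made up placeholders; the point is just the split between a long shared phase and a short game-specific one.

```python
import random

GAMES = [f"game_{i}" for i in range(50)]   # hypothetical suite of 50 games

shared_params = {"features": 0.0}  # stands in for a trunk shared by all games
game_params = {}                   # one small game-specific "head" each

def pretrain(games, steps=1000):
    """Long phase: update only the shared parameters, sampling across games."""
    for _ in range(steps):
        _game = random.choice(games)            # which game supplied the data
        shared_params["features"] += 0.001      # placeholder for a gradient step

def finetune(game, steps=50):
    """Short phase: freeze shared params, fit only a tiny per-game head."""
    game_params[game] = {"head": 0.0}
    for _ in range(steps):
        game_params[game]["head"] += 0.01       # placeholder for a gradient step

pretrain(GAMES)        # expensive, done once
finetune("game_7")     # cheap, done per game
```

The payoff is that the per-game phase touches far fewer parameters for far fewer steps, which is why it can be "much shorter".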
There is very little that we understand about the larger picture of how learning happens in the brain. We have some understanding of how learning happens on very small scales, I'm talking about plasticity at a single synapse. But even restricting ourselves to a single synapse, there is much we don't know. At the very least, it's clear that synapses and dendrites have impressive computational capacity, but making detailed measurements is currently beyond the reach of our experimental apparatus. We can measure signals in dendrites and synapses, but not at a high enough spatiotemporal resolution to answer the big questions.
And we're starting to bump against the fundamental limits of that apparatus. Most modern neurobiology uses genetically encoded fluorescent sensors read out by rather expensive 2-photon microscopes. The sensors aren't as crisp as one would wish - there is a huge subfield dedicated just to deconvolving these fluorescent sensor readings into what the neurons are actually doing. And there's only so much further the 'scopes can be pushed.
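To make the deconvolution problem concrete, here's a minimal noise-free sketch. A calcium indicator smears each neural event into a slow exponential decay, so the raw fluorescence trace hides the underlying activity; under the common first-order autoregressive model f[t] = gamma*f[t-1] + s[t], the inverse is a one-liner. The decay constant and the spike times here are made up for illustration - real recordings are noisy, which is why the actual subfield uses constrained methods rather than this naive inversion.

```python
import numpy as np

gamma = 0.95                   # per-frame decay of the indicator (illustrative)
spikes = np.zeros(100)
spikes[[10, 11, 60]] = 1.0     # "true" neural events we want to recover

# Forward model: each event decays exponentially in the fluorescence trace
fluor = np.zeros_like(spikes)
fluor[0] = spikes[0]
for t in range(1, len(spikes)):
    fluor[t] = gamma * fluor[t - 1] + spikes[t]

# Naive deconvolution: invert the AR(1) model to recover the events
recovered = fluor - gamma * np.concatenate(([0.0], fluor[:-1]))
```

In this noise-free toy, `recovered` matches `spikes` exactly; with real sensor noise the subtraction amplifies error, which is what makes the problem a subfield of its own.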
The point being: it's really quite difficult to overstate just how overwhelmingly complex the brain is, and how far we are from understanding even small, very specific bits of it, let alone the whole thing.
That being said, the Redwood Center for Theoretical Neuroscience does some excellent work bridging the cutting edge of theoretical neuroscience and machine learning, working towards the larger picture of how the brain works. You might be surprised at how 'rudimentary' the questions we're trying to solve in that domain are. Most work focuses on the visual system - it's far easier to study something when you have a good idea of what it's supposed to do (as opposed to, say, cortex).
I am not aware of anything resembling a grand theory that makes experimentally verifiable predictions. I am pretty sure I would have heard of such a thing if it existed.
Not just the topology of the brain - the environment matters too. Human experience is far more diverse than AlphaGo's; we can borrow concepts gained while doing something else. Should we count those external tasks as part of learning to play Go?