Anecdotal (but deep) research led me to postulate that our entire "inner world", for lack of a better word, is an emergent construction based on a fundamentally spatiotemporal encoding of the external world. This assumes that feeding and motility, i.e., a geometric interpretation of the external world, are among the first 'functions' of living organisms in the evolutionary order. They subsequently became foundational for neuronal systems when these appeared about 500 million years ago.
The hypothesis was notably informed by language, where most things are defined in spatial terms and concepts (temporal too, though more rarely), as if physical experiences of the world were the building blocks of thinking. A "high" council, a "sub" culture, a "cover", an "adjacent" concept, a "bigger" love, a "convoluted" or "twisted" idea, etc.
Representations in one's inner world are all about shape, position, and movement of things in some abstract space of sorts.
This is exactly how I'd use a 4D modeling engine to express a more 'Turing-complete' language, a more comprehensive experience (beyond movement: senses, intuitions, emotions, thoughts, beliefs…): use its base elements as a generator set to express more complex objects through composition in larger and/or higher-dim space. Could nature, Evolution, have done just that? Iteratively, as it conferred survival advantages on those genes? What would that look like for each layer of development of neuronal—and later centralized "brain"—systems?
Think geometric algebra, maybe; e.g., think how the metric of a Clifford algebra may simply express valence or modality, for those neuronal patterns to trigger the proper neurotransmitters. In biological brains, we've already observed neural graphs of up to 11 dimensions (with a bimodal distribution peaking around ~2.5D and ~3.8D, iirc). Interesting for sure: right within the spatiotemporal ballpark, seeing as we experience the spatial world in 2.5D more than in 3, unlike fish or birds.
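To make the "metric as valence" intuition concrete, here's a toy sketch (entirely my own illustration, not from any neuroscience codebase; the function name and the "valence" reading are hypothetical): in the Clifford algebra Cl(2), the geometric product of two vectors splits into a metric-dependent scalar part and a metric-independent bivector (wedge) part. Flipping the metric signature flips the sign of the scalar while leaving the "shape" untouched—the kind of knob one could imagine encoding positive/negative valence over an otherwise identical pattern.

```python
def geometric_product(a, b, metric):
    """Geometric product of two vectors in Cl(2) with a diagonal metric
    g = (g1, g2): ab = <a, b>_g + (a ^ b) e12.
    Returns (scalar_part, bivector_coefficient)."""
    (a1, a2), (b1, b2) = a, b
    g1, g2 = metric
    scalar = g1 * a1 * b1 + g2 * a2 * b2   # metric-dependent inner product
    bivector = a1 * b2 - a2 * b1           # metric-independent wedge part
    return scalar, bivector

# Same pair of vectors under two opposite signatures:
euclidean = geometric_product((1.0, 0.0), (1.0, 0.0), (+1.0, +1.0))
flipped   = geometric_product((1.0, 0.0), (1.0, 0.0), (-1.0, -1.0))
# The scalar (the toy "valence") flips sign; the bivector ("shape") does not.
```

This is the simplest possible instance; a serious treatment would use a full multivector product (e.g., via a geometric algebra library), but the sign-of-the-metric point survives intact.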
Jeff Hawkins indeed strongly shaped my curiosity, notably in "A Thousand Brains" and subsequent interviews. The paper here immediately struck me as very salient to that part of my philosophical and ML research—so kinda not too surprised there's history there.
And I'm really going off on a tangent here, but I'm pretty sure the "tokenization problem" (as expressed by e.g. Karpathy) may eventually be better solved using a spatiotemporal characterization of the world. Possibly much closer to real-life language in biological brains, for the above reasons. Video pretraining of truly multimodal models may constitute a breakthrough in that regard, perhaps to synthesize or identify the "ideal" text divisions, a better generator set for (any) language.
Since I only partly understand your comment, I'm not sure if this pertains, but the phrase "spatiotemporal encoding" caught my attention. It makes intuitive sense that complex cognitive function would be connected to spatiotemporal sensations and ideas in an embodied nervous system evolved for, among other things, managing itself spatially and temporally.
Also, Riccardo Manzotti's book "The Spread Mind" seems connected. Part of the thesis is that the brain doesn't form a "model" version of the world with which to interact, but instead, the world's effects are kept active within the brain, even over extremely variable timespans. Objects of consciousness can't be definitively separated from their "external" causes, and can be considered the ongoing activity of those causes, "within" us.
Conscious experience as "encoding" in that sense would not be an inner representation of an outer reality, but more a kind of spatiotemporal imprint that is identical with and inextricable from the activity of "outer" events that precipitated it. The "mind" is not a separate observer or calculator but is "spread" among all phenomenal objects/events with which it has interacted, even now-dead stars whose light we might have seen.
Not sure if I'm doing the book justice here, but it's a great read, and satisfyingly methodical. The New York Review has an interview series if you want to get a sense of the ideas before committing to the book.
This is salient enough that I think you intuitively understood my comment. I won't pretend I can fully explain pending hypotheses either; it's more about research angles (e.g., connecting tools with problem categories).
Thanks a lot for the recommendations. That's what I love about HN. One often gets next-level great pointers.
> Objects of consciousness can't be definitively separated from their "external" causes, and can be considered the ongoing activity of those causes, "within" us.
Emphatically yes.
> […] spatiotemporal imprint that is identical with and inextricable from the activity of "outer" events that precipitated it
Exactly, noting that it includes, and/or is shaped by, "inner" events as well.
So there's the outer world, and there's your inner world, and only a tiny part of the latter is termed "conscious". We gotta go about life from that certain yet incredibly limited vantage point, too. The 'folding power' of nature (to put so much information in so little space) is truly mesmerizing.
I like to bring it down to earth to think about it. When you're in pain, or hungry, or sleepy (any purely physiological, biological state), it will noticeably impact (alter, color, shade; formally, "transform", as in filtering or gating) the whole system:
Your perception (stimuli), your actions (responses), your non-conscious impulses (intuitions, instincts, needs & wants…), your emotions, thoughts, and even decision-making and moral values.
I can't elaborate much here, as it's bound to get abstract too fast, to seem obfuscated when it's anything but. I should probably write a blog or something, ha ha. You too; you seem quite astute at wording these things.
It's lovely to see areas starting to connect: neuroscience, AI/comp-sci, and philosophy.

Let's remember philosophy started as questions about the cosmos, the stars. Very much physical reality. And practical too, for agriculture and navigation: how do we get from A to B and acquire food and other goods. Over about 5000 years it's come to be "relegated to the unreal", disparaged by radical positivists who seem unable to make connections between areas (ironic from a neural POV).
A 'modern' philosopher I'll suggest here on "representation of space-time" is Harold Innis [0]. For patient readers literate in economics, anthropology, linguistics, and computer science (and working on any field of AI relating language to space), I'd hope it would be a trove of ideas about how our brains developed over the ages to handle "space and time".
Some will be mystified how the study of railways, maps, and fish trading has anything to do with cognitive neuroscience and representing space. But it has everything to do with it, because we encode the things that matter to our survival, and those things shape how our brains are structured. Only very recent modernity and anti-polymath hyper-specialisation have made us forget this way that the stars, the soil, and our brains are connected.
I'm sorry I couldn't reply sooner. The sibling comment took all my free time last week (lol).
I've taken great interest in Harold Innis. It'll be some time until I can deep-dive into anything besides work, but he's made my top-10 list of thinkers to know and potentially assimilate into my research framework (I treat theoretical signals not as data but as methods, essentially: a panel of "ways to think about the data" itself).
Thank you very much for the suggestion (and for that write-up, it really helped).
> Some will be mystified how study of railways, maps and fish trading has anything to do with cognitive neuroscience and representing space.
Commenting as someone who loves railways, maps and fish(ing) this is both a novel thought and endlessly fascinating. I fear you've provided me another rabbit hole to explore. Thank you!
There’s an idea in psych that a high IQ correlates, more than with anything else, to an increased ability to navigate complex spaces. That’s what we do when we program: we create conceptual spaces and then imagine data flowing through them. And it is also why being intelligent in that way is seemingly so useful in everyday situations like budgeting, avoiding injury, and navigating institutions.
It’s not all roses though—to quote Garrison Keillor, “being intelligent means you will find yourself stranded in more remote locations”
To elaborate a bit: I think there are layers in between raw IQ and, for instance, practical proprioception. Balancing one's body involves the full neural chain, down to its origin (the end-cell, the sensor/motor device), and quite evidently can be trained to orders of magnitude greater accuracy.
So, to think of it like a tech stack of sorts, from the meat (purely biological, dating back to the first unicellular organisms) to the highest level (call it 'sapience', 'wisdom', whatever; that which sits even above IQ), you'd find something that goes

good-enough bodily genetics
+
trained sensor- & motor-neural precision
+
high IQ for good aim and strategy
+
sapient decision-making

in order to best navigate complex spaces.
Case in point: cliche nerds (not your best dancers/athletes), unwise yet very intelligent people, or a bad draw at the genetic lottery, for negative examples; conversely, a very gifted "natural born" athlete or musician (which doesn't mean that, without training, they wouldn't get beaten flat by any seasoned professional) doubling as a strategy prodigy, or a zen master, whatever reads 'wiser.'
If we admit that space[time] is the "language of the brain" (what IQ actually tests), it follows that even social spaces—like love, business, or politics—are navigated with the same core skills as physical spaces like sports.
(That much is perhaps a stretch; it may be more complicated. But it's perhaps partially true for 'core functions', as it were, much as 'speech mastery' alone is a core function that contributes to a slew of more complex tasks/goals.)
I'm of the position that this might be correct in the specific case of humans, but not fundamental to the algorithms of consciousness. E.g., we could have similar emergent phenomena in algorithmic trading bots, where all the emergent constructions are defined in terms of money and financial concepts rather than spatial ones. They live in a reality of dollar signs rather than physical dimensions. That's neither inherently better nor worse.
In fact, I'm somewhat of the position that nearly any grounding in a domain of shared objects where signalling is inexpensive would be suitable. That said, AI agents which grew up in some alien domain of shared objects would find us as unintuitive to reason about as we find quantum mechanics. If the goal is AI that acts and talks like us, your way may be the way to go.
I've no idea what the c-word means (consciousness), so I'll leave that aside; everything else checks out as absolutely sensible to me.
Your last sentence strikes me as particularly validating.
"My way", this framework, was meant to give a mechanistic description of our individual, subjective "inner world", much like physics speaks of the outer, shared world, and in compliance with all objective 'hard' sciences.
Indeed, it lends itself particularly well to being exploited by AI, notably in terms of architecture and domain-selection (by whatever core we call 'sapience') within a "Mixture-of-Experts" paradigm of sorts—which biology seems to have done: dedicated organs or sub-parts for each purpose, the Unix way to "Do one thing and do it well."
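To make the MoE analogy concrete, here's a minimal sketch (purely illustrative; the class, weights, and "experts" are made up, not from any real framework): a small gating network scores each expert for a given input, and the output is the gate-weighted mix of expert outputs—roughly the "domain selection" role attributed above to a sapient core.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

class MixtureOfExperts:
    """Toy MoE: the gate scores each expert for the input,
    and the output is the gate-weighted sum of expert outputs."""
    def __init__(self, experts, gate_weights):
        self.experts = experts            # list of callables: x -> scalar
        self.gate_weights = gate_weights  # (n_experts, dim) scoring matrix

    def __call__(self, x):
        scores = self.gate_weights @ x    # one score per expert
        gate = softmax(scores)            # soft "domain selection"
        outputs = np.array([f(x) for f in self.experts])
        return gate @ outputs             # weighted combination
```

Real MoE layers (as in sparsely-gated transformer variants) add top-k routing and load balancing, but the core "select the right sub-part for the job" mechanic is just this.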