The construct of time feels so real. And the fact that we surround ourselves with clocks, learn to tell time from an early age, and live our lives aligned to precise schedules just reinforces this feeling of realness and reifies the illusion. It seems we are, in a very literal sense, time-keeping (or "time-creating" might be a better word) machines.
At some point I got fascinated by flow experiences and the loss of all sense of time. While I’ve experienced this while coding, improvising music has always been the gateway that fascinated me most. No time to keep; no measures to count; nothing but the next note. There’s something about staying continuously in contact with the present moment that makes the time construct just dissolve.
And all that’s left is this continuously unfolding moment. No time. Just now.
I remember hearing things when I was younger like "time is a construct" and not knowing wtf people meant. Flow states helped the conceptual vs. experiential versions of time make more sense, and it's pretty fascinating to learn more about the neurological basis for the experience of time. It seems to underscore the fact that time (as we think we know it) is an experience our brain creates for us. Clearly there are underlying phenomena that these neurons are representing, but it blows my mind a little to ponder how much we orient our understanding of life and experience around the constructs we've created vs. exploring other ways to conceptualize and experience it.
It's also possible that time is real. One thing happens after/causes another. It's possible that when you are in a flow state your brain is just failing to notice the time that is objectively passing in the world around you.
One of the weirdest things Einstein discovered is that time is relative, but cause and effect are absolute.
For example, cosmic-ray muons should decay before they hit the ground, but they don't, due to time dilation. We see the time dilation when observing the muons, but the muons don't, so you might think that for us the muons make it to the ground, while for the muon itself it would decay too soon. However, the muon experiences length contraction instead, so from its own viewpoint it also makes it to the ground.
So cause and effect is preserved, even though we would disagree with the muon on the relativistic reason why it is preserved.
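To make the numbers concrete, here's a toy back-of-the-envelope calculation of both viewpoints (the speed, altitude, and lifetime are illustrative textbook-style values, not from any particular measurement):

```python
import math

c = 3.0e8        # speed of light, m/s
tau0 = 2.2e-6    # muon proper lifetime, s
v = 0.9995 * c   # illustrative cosmic-ray muon speed
h = 15_000.0     # illustrative production altitude, m

gamma = 1 / math.sqrt(1 - (v / c) ** 2)   # Lorentz factor, ~31.6 here

# Ground frame: the muon's clock runs slow by gamma.
naive_range = v * tau0             # ~660 m; would decay far above the ground
dilated_range = v * tau0 * gamma   # ~21 km; comfortably reaches the ground

# Muon frame: the atmosphere is length-contracted by gamma.
contracted_height = h / gamma          # ~475 m of atmosphere to cross
transit_time = contracted_height / v   # ~1.6 us, less than tau0 = 2.2 us

print(f"gamma = {gamma:.1f}")
print(f"range without dilation: {naive_range:.0f} m")
print(f"range with dilation:    {dilated_range / 1000:.1f} km")
print(f"muon frame: {contracted_height:.0f} m crossed in {transit_time * 1e6:.2f} us")
```

Both frames agree on the outcome (the muon reaches the ground); they only disagree on the bookkeeping: a slowed clock in ours, a shortened atmosphere in the muon's.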
I don’t question that something is happening that we experience as time. But the point was that the ways we commonly experience it tend to instill ideas about what it is that seem not very close to whatever is objectively happening.
I think the point about the flow state is that it’s one of the most reliable ways to disconnect from that common experience, and doing so regularly starts to reduce the primacy of the constructs we’ve built on top of it.
Another way to put this is that the common clock and schedule oriented notion of time is just a complex set of concepts and labels and increasingly granular slicing of ongoing experience into chunks we can use to reason more effectively about other aspects of experience.
It’s extremely useful to be able to make plans with a friend at 3PM next Thursday. 3PM next Thursday doesn’t exist.
Understanding the rate at which phenomena occur helps us develop increasingly useful and accurate world models. To understand what “rate” even means, it’s necessary to build up some concept of time.
What I’m mostly pushing against is the primacy of that construct. The pervasiveness of clocks and schedules to a point that it feels like they rule us. The past and the future tend to feel like places that exist. Time tends to feel like some kind of arrow. We’re always coming from some point in the past or going to some point in the future. A useful tool to help us make sense of our memories, and decide what to do next.
But the role of time in what actually happens at any given moment seems pretty unlike our experience of it, which seems to be something we largely manufacture to make sense of the rest of human experience.
Time is a dimension of space. We are falling through time just like the other 3. The only way to know you are falling is to see something that isn't. The tick of a clock draws a boundary in time just like a pen draws a line on a page.
Is this novel? Spatial codes have long been known, and neurons (and neuron populations) are known to code for time [1]. That reference also states that "Neurons that respond selectively to the temporal and spatio-temporal structure of sensory stimuli have also been identified in the auditory system of birds and mammals". Doesn't that pre-empt the conclusion that "Thus, the brain can represent time and space as overlapping but dissociable dimensions"?
It does not; it only establishes that both dimensions are coded for in some way. Per the current article, temporal and spatial codes had previously been studied in isolation, so it was not known if or how they might be intertwined. This work has elucidated new nuances of that relationship.
Novelty is not a good question to ask. It is meaningless.
Instead ask:
- Is it plagiarized? Or is it redundant information organized in a way that does not benefit others?
- Does it provide utility to __someone__ in the research community?
- Is it void of any major mistakes?
Novelty is just an absurd idea that is *destructive* to scientific progress. If you abstract ideas, nearly every idea is just <x idea, abstracted> + <y idea, abstracted>. Generally, science moves forward in small steps. You DO NOT want to criticize work on novelty, because you actively encourage obfuscation. This happens a lot, and not just by small teams[0]. Unless you are intimately familiar with the specific sub-niche, you probably can't accurately judge novelty, because the small steps and the nature of expertise are literally predicated on having an extremely nuanced understanding of a topic. So you're going to abstract and cause the aforementioned error if you're not careful.
In fact, a well written paper will often trick you into thinking that you already knew all this beforehand. This is even codified in the Socratic method. So many new things that change the world often leave us thinking "well that was obvious." Maybe it was, maybe it is only obvious in hindsight. Either way, it was useful. Either way, you didn't do it, and someone has to.
As far as your reference vs. the one here: at least to me they seem quite distinguishable from the abstracts alone. From further reading, I'm not sure how you conflate them, other than that they study the same topic. For one, your work (which is 10 years old, btw) wasn't done in mammals, while the current one studies humans. The new one looks much more detailed, has a more complex task and a more convincing experiment, and, honestly, is far more reproducible. I am curious why you think these ideas are identical, or so similar that no new knowledge is gained. This is an honest ask; I do want to understand[1]
I think we need to ask ourselves what we want from science. For me, that is: reproducibility, communication (the clearer the better), and exploration of the unknown (exciting or mundane). Personally, I don't care about utility, partly because of the abstraction point above, and partly because history shows this purist stance to be naive at best and egotistical at worst (thinking we know so well what will matter). I can list entire fields if need be; I'll give you a start: knot theory.

I actually want explicitly non-novel works to be, frankly, common! Trust, but verify. We did not just trust the LK-99 authors and move on; we verified. Even with the failure to replicate, we still learned a lot. Don't forget that part! I'm perfectly fine if we have "too many" papers; there are already more than anyone can read today. Look at ML if you want: it hasn't died, it accelerated when everyone started pushing preprints (and boy do most of those not get published, many for this exact reason; I've had to defend great papers written by rock stars because self-proclaimed novices thought the work was not novel enough, including in workshops...).

I don't want a focus on utility because we shouldn't railroad research paths. It is often even incompatible with the concept of "novelty"! I'm happy to let researchers explore "dumb" ideas, because if it works, it works, and if it doesn't, I'd rather they write it down and let others know.
We need to SERIOUSLY consider the incentives of research, because I for one think they are not aligned with the goals.
I'm upvoting your comment not because I agree, but because it is an all too common sentiment I see, and I believe it is ruining science. The worst of Goodhart's Law: Goodhart's Hell. But you are welcome to disagree and I am happy to learn and understand the positions of others. I'm just in my little bubble trying to learn about other little bubbles.
Thank you for your seriously thoughtful insights. You're saying we should have reproducibility, communication, and exploration. I fully agree with that, but I would like to push back on your point that novelty is destructive. Can't redundancy be similarly destructive? Given finite resources, shouldn't the authors have some responsibility to check their findings against previous insights? Otherwise, in the worst case, Goodhart can forget about his law because we'll never agree on any metrics in the first place.
To clarify, this is orthogonal to the point about the paper. I'm still not convinced it's entirely different, but I agree that the value of the paper doesn't depend purely on that. It's a nice exposition regardless.
I'm not sure if you'll be surprised to find out that I agree with you.
Thing is, we're naturally inclined to pursue new things. There's already incredible amounts of natural pressure to pursue progress and innovate. It is much more lucrative regardless of the criteria for publication. Even the most modest of researchers are overjoyed when they find a clever solution or stumble on something no one else has before. We're naturally driven in this direction.
That said, I'm not going to claim this isn't hackable too. All metrics are hackable. My belief is just that metrics are guides, not answers. I believe Goodhart would agree; that's his point, and it's yours too: no one will agree on metrics in the first place, because no metric is perfectly aligned with its goal. His law isn't a recipe for generating good metrics; it's a warning against over-relying on metrics, against using them mindlessly.
The thing is, you have to embrace the chaos. Maybe consider Pournelle's Iron Law of Bureaucracy as well: bureaucracies can't create nuanced metrics. Their metrics must be mindless, or else an expert would be needed to interpret them in the first place.
Usually the comment sections here are pretty fun to read but anything neuroscience related results in absolute cringe-inducing comment chains. This one is no exception.
But I don't know how "computers" work, nor do I know how brains work. (I do know a fair bit about the types of computers we are talking to one another on and that host HN, but a computing machine is more than that.) In fact, that's what got me into ML in the first place. What a crazy computer: we've been studying it for thousands of years, yet so much remains a mystery. Even extremely basic questions have baffled us for centuries. We still stumble over the nature of consciousness, sentience, and sapience. We still can't even agree on the definitions of these things!
And by computer, I mean that there is a mechanistic process. One that is void of magic and beholden to the laws of physics. Which means, we should be able to replicate it. But this is also true for planets, galaxies, and so much more. So that does not mean it is within our capacity now or anywhere in the near future.
I'm always wary of thinking that our metaphors for the brain represent the real object.
History shows that we tend to use as a metaphor whatever popular technology is in place at the time. Right now it's neural networks. Before that: computers, telegraphs, hydraulic systems, and so on.
I could easily imagine the next analogy being quantum computers. Or whatever trending technology comes around 2030.
Metaphors are aids. The problem is taking metaphors literally. It is, ironically, over-fitting the meaning of words. But words only mean what we mean them to mean. In the language of ML, you can say that, analogously or metaphorically, language is kind of like a VAE: you encode your thoughts into a compressed representation that is then passed to another, independent network, which must decode that encoding, and you hope it all comes across. While far from accurate, the analogy is useful and can help us remember that the goal is not to interpret the words themselves, but what was being encoded in the first place.
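As a toy illustration of that lossy encode/decode picture (this is just a random linear compressor, not an actual VAE with a learned posterior and sampling; all dimensions and names here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: the "utterance" is far smaller than the "thought".
dim_thought, dim_words = 16, 4

# Speaker compresses; listener inverts as best it can. Real VAEs *learn*
# these maps (and sample from a posterior); random linear maps stand in here.
encoder = rng.normal(size=(dim_words, dim_thought))
decoder = np.linalg.pinv(encoder)

thought = rng.normal(size=dim_thought)
utterance = encoder @ thought         # lossy: 16 numbers squeezed into 4
reconstruction = decoder @ utterance  # listener's best reconstruction

err = np.linalg.norm(thought - reconstruction) / np.linalg.norm(thought)
print(f"relative reconstruction error: {err:.2f}")  # nonzero: meaning lost in transit
```

The nonzero reconstruction error is the point: the listener recovers an approximation of the thought, never the thought itself.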
That said, "computer" is a very vague term. I think if you strictly mean the device we are typing to one another on, or the thing that transmits the signals between our machines, you have overfit, even relative to how the word has historically been used and is still used today. A person can be called a computer, as can a physical slide rule. It is simply a machine that can calculate, which is absolutely another vague term. In fact, if the universe is completely mechanistic, we could frame it as a computer (but do not confuse this with all that simulation stuff; that's a misunderstanding of what a universe-as-simulation actually means).
FWIW, Roger Penrose has for quite some time been drawing a connection between the brain and a quantum computer. The problem with that is we aren't exactly seeing quantum processes in the way we'd expect (though there are clearly some quantum processes). It gets nuanced, but it's a rabbit hole that sounds like you'll enjoy going down.
HN has a lot of people who are quite smart but have not been through graduate school or academia, so they don't realize just how niche the training and topic-specific knowledge you get in that world is, relative to a classic university-to-industry track. I think that is true of the general populace too: it takes the depth of learning and research a PhD demands to realize just how much there is to know about even the narrowest slice of a topic, beyond what even the most intense casual enthusiast reaches. I suspect this is a major reason for the high levels of impostor syndrome seen in academia, but that is a tangent.
Neuroscience is also a field that mostly happens within academia, unlike something like computer science, where a ton of work legitimately is being done at the types of companies many HN users work at. So while you can indeed get something approaching expert-level knowledge on the job in the latter, almost no one on the forum has enough exposure to be an expert in the former. Couple that with the previous thesis and you end up with a bunch of people who don't know what they don't know (I will not be engaging on Dunning-Kruger in this thread). Add, perhaps, a bit of hubris, and you're going to get a wonky comment section.
This is your only comment in this thread. I hoped to see you debunking.. uh.. bunk. Can you at least elaborate on comments you feel are completely wrong and "cringe"?
I had a rare condition in 2022 that caused me to lose all sense of sequential memory, which heavily affected my perception of time. I could not reliably tell anyone whether something had happened 2 years or 2 weeks or 2 days ago, or what order events had taken place in, without significant deduction. It was incredibly distressing and confusing, and I had to rely on copious note-taking to get by day-to-day. When my sequential memory returned, I had never felt more appreciation for this sense.
Thanks. It was terrifying. It made me appreciate how much our sense of time is just a construct. Everything felt jumbled and ran together, with no sense of continuity. What I concluded is that our sense of time is intricately tied to our sense of the world.
Can anyone in this general field comment on the long-term benefits of such research? And why does this need to be in Cell? I've tried my best to attend as many neuroscience talks as I can about work like this regarding place cells etc., and I keep walking out of them scratching my head about what the long-term point is.
Grid cells and place cells are among the very few examples where we actually know what a population of neurons is doing. This is exciting for at least two reasons:
1. If we know what the neurons do, we can start trying to understand how they do it. Like at the circuit level. And if we can figure that out, this will very likely help us figure out what other neurons are doing and how their circuits work. Currently very little is understood.
2. There's a lot of speculation that grid cells are also used for higher-level cognition, to represent relations between more abstract things. This is one of the two main theses of 'A Thousand Brains' by neuroscientist Jeff Hawkins.
Why are these two things interesting? Well, wouldn't it be cool to understand how the brain works? It might help us build AI that learns as fast as we do (ChatGPT was trained on more text than one could read in many lifetimes). It might help us augment our own intelligence. These are very long-term goals, of course, but understanding grid and place cells seems like a reasonable starting point.
You can't see the point of elucidating the nuances of the way the brain encodes the information around us? Surely even if you see no value in basic science of this nature in and of itself (in my opinion a mistake) there are myriad application-based reasons to understand how the brain works, including both medical progress and the brain's influence on artificial intelligence designs.
To clarify, I did my PhD in an immunology department in a top US grad school, and I attended the talks by the actual authors of some of these seminal works in the neuroscience department next door. My question was not an indictment on all basic neuroscience but a request for clarification on how this particular mode of questioning (pointing out that there are grid cells or time and place cells) is productive in us figuring out our brains.
Not an expert, but one application of neural decoding you may be aware of is brain-machine interfaces (e.g. Neuralink). By decoding motor intent, we can create devices that restore movement to paraplegic individuals via external robotics. Decoding neuronal activity in other parts of the brain could allow us to interface directly with other neural functions. Imagine a machine that could interpret your perception of space or time (or even modify it).
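For a feel of what "decoding" means mechanically, here is a toy sketch: simulated place cells with Gaussian tuning along a 1-D track, read out with a simple population-vector average. Everything here (cell count, tuning width, readout) is an illustrative simplification, not the method of the paper or of any actual BMI:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated "place cells" on a 1-D track: each fires most near its
# preferred position (Gaussian tuning). All numbers are illustrative.
n_cells, track_len, sigma = 30, 100.0, 8.0
centers = np.linspace(0.0, track_len, n_cells)

def firing_rates(pos):
    """Noisy population response (spike counts) at position pos."""
    rates = 20.0 * np.exp(-((pos - centers) ** 2) / (2 * sigma ** 2))
    return rng.poisson(rates).astype(float)

def decode(rates):
    """Population-vector readout: rate-weighted average of preferred positions."""
    return float(rates @ centers) / rates.sum()

true_pos = 42.0
est_pos = decode(firing_rates(true_pos))
print(f"true position: {true_pos:.1f}  decoded: {est_pos:.1f}")
```

The decoded estimate lands near the true position because the cells that fire hardest "vote" for their preferred locations; real decoders are fancier, but the principle is the same.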
Another application is in computing. Advances in the understanding of neuroscience have stimulated the creation of artificial neural networks (for example, the convolutional neural network was inspired by the cat visual cortex). Understanding how the human brain encodes concepts such as time or space might help us to design artificial systems.
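To make that inspiration concrete, here is a minimal sketch of the kind of oriented edge filter Hubel and Wiesel found in cat visual cortex, applied by the same sliding-window operation a CNN layer computes (the kernel and image are hand-picked for illustration; real CNNs learn their filters from data):

```python
import numpy as np

# A hand-written "simple cell": an oriented edge detector, the kind of
# receptive field found in visual cortex and the kind of filter a
# trained CNN's first layer tends to learn on its own.
vertical_edge = np.array([[-1.0, 0.0, 1.0],
                          [-2.0, 0.0, 2.0],
                          [-1.0, 0.0, 1.0]])  # Sobel-style kernel

def correlate2d(image, kernel):
    """Minimal sliding-window filter (what CNN layers compute), valid mode."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Tiny test image: dark left half, bright right half -> one vertical edge.
image = np.zeros((5, 6))
image[:, 3:] = 1.0

print(correlate2d(image, vertical_edge))  # peaks along the edge columns
```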
Finally, the most important aspect of this work is (imo) advancing basic science. We don't understand how the brain works, and work such as this brings us closer to solving the grand mystery of neuroscience.
> Our study reveals neurons in the MTL and mPFC that encode time and space during exploration in a virtual environment for fixed durations. Time cells activated at rest in the absence of movement or other external contextual change, while distinct time cells and place cells emerged during navigation and exhibited divergent responses to changing tasks. These results demonstrate a neuron-level code for spatiotemporal context in the human brain, in which time and space are simultaneously represented but not wholly conjoined.
Hack for skimming papers:
1. Read the abstract. It's short and will tell you what the paper investigates. If it's too much jargon, skip it and come back.
2. Skip the entire paper and read the "discussion" or "conclusion" near the end. This will pretty much tell you what the paper is saying, typically in more familiar language.
3. Go back and re-read the abstract.
4. If you're still interested, read the figures.
5. If this paper is up your alley, now you can read the whole paper with context and can appreciate the methods of investigation.
One small suggested addendum: 1.5) read the last paragraph or two of the intro. Most good papers spend these paragraphs laying out their study and their expected and alternate hypotheses. It's a quick way to go from the floaty higher order topics covered in the abstract and discussion and drill into exactly what the scope of the paper actually is.
The temporal lobe got its name because of its proximity to the temples, so it would be incredibly serendipitous/prophetic if it turns out to actually encode temporal information.
Were temples not themselves often used to encode [spatio-]temporal information (about the time of the year, our location within the solar system and cosmos, etc) within their construction, now that you mention it?
Formidable work. But as always, let's wait until it's confirmed by other groups. I wonder: if AI were fed advances in our understanding of the neural code, would that speed up the coming Singularity?