It all works this way (though a certain female family member would disagree, claiming to remember conversations word-for-word years later).
But my memory works this way. A summary: "party at so-and-so's house, weather was nice, overall vibe was ___". The rest is context. You know what the house/backyard is like, you know the general feel of that time of year, you know the crowd that usually comes, you can easily synthesize details like the smell of the BBQ and the taste of the food... and build up a complete "memory" from stuff that could be summarized in a paragraph of text plus generic (not specific to one memory episode) context.
I can build up a relatively vivid mental image of my walking route to school (from the bus terminal) over 40 years ago. Is it accurate? Who cares. As long as no detailed record exists to compare it to that would reveal the "lossy compression".
>I can build up a relatively vivid mental image of my walking route to school (from the bus terminal) over 40 years ago. Is it accurate? Who cares.
Not only that, but by recalling and rebuilding memories, how gaps are filled in depends on your current mental state. For example, if I'm feeling depressed and brooding over past social interactions, I'll likely imagine people having meaner expressions or saying harsher things than they did. The big problem is that your memory of the event is "written over" based on the rebuilt memory. Again, only the seemingly important bits, but people are more likely to remember emotionally strong portions. Like those imagined harsh words.
I realized I was doing this when I thought a professor strongly disliked me, avoided his classes for a couple years, but then found him pleasant. My depression and social anxiety had warped my memories over the years. Being aware that this happens really helps. I trust negative parts of memories less, and I consciously stop myself when I start to brood (or at least, have fun with a puzzle while thinking back on things).
Just thought I’d share an example of how this memory issue manifests for me.
I listen to a lot of music and use it (like many) to index various stages of life - I even have playlists by year to help facilitate this.
The problem occurs when I listen back to music I was listening to in, say, 2008. All of a sudden, I'm transported back. But each time I do this, the effect wears off a little, because there is some - let's call it - "metadata" being written from the current moment in time I'm listening from, which adds new color to the initial index.
This effect has been studied and noted somewhere before, but I'll have to dig it up when I'm not on mobile.
Ditto on the "certain female family member who insists that she remembers things word-for-word". When she recounts her meeting with a friend it is needlessly tedious (I try to be a good listener of course). She complains that my recollections are too vague, wants to know what really happened, and is frustrated that I won't give her details.
I think a large part of it is just that you store what is important to you. To me the day-to-day politeness is just filler. I don't care if they had black coffee or a latte. If someone was struggling with something and poured out their heart over multiple conversations, I'm going to remember what arguments and concerns they had and the mental model I built up around that situation. The filler is just unimportant and doesn't stick around.
My wife is the opposite. Signs of weakness are an embarrassment to be forgotten. She lives for the day-to-day.
The Myers-Briggs system calls these two perspectives "Sensing" (detail oriented) and "Intuition" (theory/model based) [not the best names]. And it posits that it's less a matter of the importance people place on things and more that people literally notice different things and perceive the world differently (so it's not even just about remembering, it's about what you notice and how your mind represents the world in the first place).
Myers-Briggs is a fundamentally non-empirical model. I wouldn't recommend it as the basis for any argument or position concerning real-world phenomena.
I don't think Myers-Briggs is fundamentally non-empirical. Empirical evidence is certainly lacking for it, but I think there's a good argument to be made that this is due to poor experimental design (for example, applying the categorisations to persons rather than to mental processes) and a general difficulty in empirically measuring internal mental processes (it's notable that more mainstream competitors to Myers-Briggs like the five-factor model don't even attempt this).
I would also point out that I was replying to a comment that was an empirical observation. My comment highlighted that their empirical observation corresponds to the pre-existing Myers-Briggs theory (which suggests that other people have previously had similar observations).
No, Myers-Briggs is fundamentally non-empirical: it has no empirical validity, nor was it derived from any empirical process. If you have a personal faith in its validity then I'm sure I don't mean to disrespect that.
If you're talking about the testing instrument itself, then sure, I don't think it's great. But if you're talking about the underlying theory (and that's much more interesting - when we discuss physics we talk about the theory of relativity, not the tools we use to measure it) then I would like to point out that the theory (let's call it Jungian Type Theory - JTT) was derived from an empirical process. Specifically, it was derived from Jung's observations as a clinician over several decades. That's not a controlled experiment, but it's certainly empirical.
Regarding my personal experience with it: I have personally found it highly useful as a predictive model of behaviour. It's the only thing I've found that allows me to explain the aspects of people's personalities that aren't easily explained by their environment or life experiences. And by combining JTT with an understanding of someone's life experience and environment I've found that I can pretty much always find a satisfying explanation for someone's behaviour in a way that I am not able to do with either one individually.
Regarding its validity:
- Firstly, there's no need to tiptoe around the subject if you think it's bullshit. I won't be offended, and I can totally understand why you might be skeptical given the experimental evidence that exists so far.
- Having said that, I would like to challenge the idea that we ought to expect JTT to be experimentally validated given how limited our current ability to inspect the brain is. Specifically (and unlike other models of personality), the theory's primary claims are that there are certain internal thinking processes, which of course we currently have no way of observing directly, and which will not necessarily correspond straightforwardly or 1:1 with observable behaviours (environmental and life-experience factors being huge confounders).
- Given this, I find it entirely unsurprising that experimental designs which rely on numerical scorings of observed behaviours fail to find an effect.
- I think it no coincidence that this theory came out of clinical psychology, because you need to be able to control for the environmental and life-experience factors in order to be able to see the other pattern that is (well ok, might be) sitting there beneath them. And therapeutic relationships which continue over a number of years are one of the only scenarios where that context is available outside of close personal relationships.
> I can build up a relatively vivid mental image of my walking route to school (from the bus terminal) over 40 years ago. Is it accurate? Who cares.
It's not just decades old memories. Memory of recent events is likely to be suspect. Which is an issue for the legal system because it relies so heavily on eyewitness testimony.
A while back I went on a google maps street view tour of a place I lived until I was 9 but hadn't been to in well over a decade. I wasn't sure what to attribute to the tenuous nature of my ancient memories versus what things had actually changed since I last looked. It was honestly a bit uncomfortable and disorienting having this gaping hole in my perception of reality. Was the swing set always blue in that park? I thought it was yellow. Maybe they repainted it? I will never know.
I think some memories are closer to lossless compression than lossy, which makes me wonder if it's more of a scale where memories can slide between the two modes with varying degrees of fidelity. There are memories from childhood that I know I shouldn't remember yet can recall clearly, and others where I barely remember what year they happened. So I have to wonder if some of this seeming losslessness is more fractal-like in nature, where one can just reconstruct from the base encoding and expand it outward to fill in sufficient detail to seem like it's perfectly captured, when it's really merely a reconstruction.
I vaguely remember reading something that traumatic or "very important" memories never go through the usual process of becoming memories. Instead, when you recall them, your brain physically "relives" it so it is never forgotten. Probably an evolutionary trait to make sure we learn as much as we can from the experience. This is also why you remember those "times you almost died" in slow motion. Your brain goes into a high-resolution mode in those cases, which you remember as slow motion, like speeding up a camera and playing it back at normal speed.
Sorry I don’t have any sources, I’m just a casual reader in this space.
If you are taking a truncated SVD, the math says that it is the best representation of that data for a given truncation size, and will even give you a measure of how good that representation is. But picking how good you need often ends up being a kind of annoying and fuzzy heuristic thing. In addition, some data simply has faster-decaying singular values, and so fundamentally compresses better.
I guess the brain probably is dealing (in a hugely non-mathematical way -- it is just an analogy!) with a similar sort of thing. Somehow we pick some memories to keep in great detail -- either because they seem to be very valuable, or because they just seem to compress nicely.
It is a bit funny that one name for this sort of thing is a "singular experience."
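For the curious, that Eckart-Young fact is easy to see in a few lines of numpy. This is just a toy sketch with synthetic data and an arbitrarily chosen truncation size:

```python
# Truncated SVD: the Frobenius reconstruction error of the rank-k approximation
# equals the energy in the discarded singular values.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 50)) @ rng.normal(size=(50, 200))  # synthetic data, rank <= 50

U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 10                                  # truncation size: picking this is the fuzzy heuristic part
A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]    # best rank-k approximation

err = np.linalg.norm(A - A_k)           # actual reconstruction error
bound = np.sqrt(np.sum(s[k:] ** 2))     # energy in the discarded singular values
print(err, bound)                       # equal, up to floating point
```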
though a certain female family member would disagree, claiming to remember conversations word-for-word years later
Surely many people do. Otherwise you wouldn't have all these biographies and non-fiction books packed with conversations people have managed to recall in a level of detail enough to not get sued. I can barely remember a line of conversation from this week, let alone important ones from years ago, so I always assumed most/many people can remember conversations to some reasonable level in a way that I cannot.
I suspect many (most?) conversations in biographies and non-fiction books are not necessarily quoted verbatim. In most cases, the author may at best have had access to diaries or other notes from the time that recorded a summary of what was said, or they may have interviewed people who, years later, summarised what they recall -- more or less accurately -- being discussed.
The author may then present this in the form of quoted speech in order to make it more vivid and compelling for today's reader, but it rarely corresponds to a precise transcript of the original conversation.
I think most people remember the basic concepts and then they fill in the details using what they know about the situation and participants. I have remembered events a certain way that in my mind was very clear. But upon reviewing said events in old video, it turns out I got quite a few details wrong. Sometimes two people will recall the same event very differently. Which is why I think our justice system relies far too heavily on witness testimony.
One of the oddest parts of my dreams is that they often tend to be places from my childhood or young adult life, and that my brain seems to be processing the 3D layout. Like I will walk specifically to school, remembering the route, or through my church; and once I re-visited a giant thrift store from many moons ago and my feet just trod the path right where I knew I wanted to go. It's like watching my mind process these locations into mental maps in dreams. Kinda neat.
Using routes is a key element of memory techniques (the so-called 'memory palaces'), presumably because when we went hunting for food we needed to find our way home, so memories attached to routes are a lot stronger.
Interestingly I was able to retrace the walk two decades later (we had emigrated to another country in the meantime) and while the "vibe" matched, the details were quite different from what I thought I remembered (this is an old town in south Germany where things don't change that quickly so it wasn't redevelopment).
But it was possible, with a bit of head scratching, to walk the route just from memory.
This is more about how what you can remember about an event after five seconds differs from all that you experienced, as opposed to what you can remember a year later. I think most people can give a word-for-word account of an utterance after a few seconds, so this particular experiment doesn't really have any bearing on your relative's claims, which are about recall from long-term memory rather than working memory.
Photographic memory is not a real phenomenon though. But eidetic memory is real, some people can remember almost everything they read. But they don’t remember photographic images.
My perception is that I've only collected "raw" materials, left unprocessed. Not sure if it's compressed and lossy or lossless. There's a quote that says something like "if you can't explain it, you don't really understand it". I really hate this quote because I don't have to explain! The necessary memories will come up, load into my processing unit, and execute.
My accusation is that the conversation memory works the same way as the BBQ party memory. You remember a skeleton. This subject was discussed, and things were said that gave me a feeling of ____. And a few more easily compressed details. The rest is interpolated. Imagine a language model the size of GPT-3 being trained on one particular person's manner of speaking and then given a one-paragraph summary of a conversation to get it started. Barring an audio recording or a transcript, who's to say that these weren't the words that were spoken?
Of course the engineer is tempted to test this by secretly recording a conversation and trying to trip up the perfect rememberer, a year later. But the non-geek life experience accumulated says don't go there.
I should add that as a geek I ought to have a better ability to remember, say, computer code that I've written. But am I the only one who, going back to something I haven't touched for two years, has to re-learn my own code?
>But am I the only one who, going back to something I haven't touched for two years, has to re-learn my own code?
No, that is perfectly normal, and it starts much earlier - weeks, sometimes days, after leaving the code.
It depends on its complexity and level of abstraction.
You mentally build something highly abstract without much emotional or bodily bond.
Your brain doesn't have much incentive to remember it.
Adding to that, there's a lot of sampling bias as well. If a function fits my mental model of it, then I'm unlikely to revisit it. If a function doesn't fit my mental model, then it is very likely that I'll misuse it, increasing the likelihood of a bug, and increasing the likelihood that I re-read the code.
Not to forget, memories are not only unreliable per se, but also change with each act of their remembrance.
For example, by character peculiarities, new experiences, current circumstances, etc.
Often they are made up on a whim, without the remembering person being aware of it.
So in a sense, memories have a past and a history.
As someone suffering from Aphantasia [0] (I don't have mental imagery at all), I've been telling people for the longest time that this is how I relate to the world. I summarize things. Even my mother's face. A post by a Facebook engineer [1] felt like a good way to understand it.
I am an aphantast, but I do not suffer from it.
When I am fully conscious, I have no inner vision, but I have vivid and colorful dreams.
That is, if I remember them outside that twilight zone shortly before fully waking up.
So I have an idea what it probably is like to have inner vision when fully awake.
Although there are some disadvantages, of course.
I admire people that are able to draw and paint based on their inner vision.
Much more important for me was the realization that I can evoke images, scenes, etc. in other people that trigger feelings in them. Which in turn can trigger actions or omissions.
Fear, joy, hate, love, disgust, lust.
Which they can't do to me, at least not just by invoking visual images in my mind through words.
Manipulative, but not manipulatable in this regard.
With time, that came in handy.
By the way, I am friends with a handful of people who suffer from schizophrenia.
They say they envy me a little because in their worst phases they wished they didn't have this movie in their head.
It repeats itself, over and over again.
And aphantasia is a spectrum: I have known people who describe rather dull, colorless inner visions and others who can sustain them only for short periods of time.
On the other hand, I met an artist who seemed to live in his own private vibrating Van Gogh painting.
Judging by his descriptions.
And of course, without DMT.
> Much more important for me was the realization that I can evoke images, scenes, etc. in other people that trigger feelings in them. Which in turn can trigger actions or omissions. Fear, joy, hate, love, disgust, lust. Which they can't do to me, at least not just by invoking visual images in my mind through words. Manipulative, but not manipulatable in this regard. With time, that came in handy.
I've also discovered this but I can only admit it to my closest friends else I'd be labeled a psychopath. There are things that trigger these kind of feelings in me, but it's more about situations than images and never in remembering something.
I do have a feeling that we might be more susceptible to doing really nasty deeds if push comes to shove (Nazi Germany?), so I think it's something we need to be careful about, as we can be manipulated into doing things that other people might find gut-wrenching just thinking about.
You can also reverse it and perhaps claim that doctors can benefit from less visceral reaction to seeing blood/internals. Though of course it is a learned behavior anyone can get better at.
I realized some time ago when I learned of Aphantasia that it is a spectrum. From 0 to 10, 10 being perfect photographic memory and zero being total Aphantasia, I feel like I'm somewhere in the middle. I can recall images, sounds, memes, faces, but in terrible quality, with very little color or focus, more similar to fast paced dreams than photographs.
I've been wondering the same ever since I read that Nikola Tesla invented/designed the AC motor in his mind's eye.
Seems to be along the lines of lucid dreaming, with a vast difference in degree. Sometimes as I'm falling asleep I can see vivid scenes or objects that I can--to minor degrees--play with for a short time before I either fall asleep or wake up, then it's gone.
Also, consider somebody who is an expert already in the problem domain.
Most of us here are programmers and do this on a daily basis. Somebody describes "a GraphQL API driven by a Clojure back end connected to a Postgres database" and to a layperson that looks like either a bunch of nonsense words or maybe a few boxes, clouds, and arrows. But you and I can visualize the individual lines of code, configurations, functions, and infrastructural requirements behind that simple sentence.
Same with an electrical engineer/inventor in their domain.
I wouldn't say that I exactly 'visualize' it. For me it's more of a bunch of formless, even nameless, things and connections that don't have specific shapes or places, only logical structures. The lines of code also flow freely, but it feels more like synthesis from those ideas rather than copy-typing. Eventually the 'picture' or network in my mind gets complicated/unstable and I have to draw it out.
I would take something written about Tesla on such an intimate level with a grain of salt though. He is very very hyped and often elevated to a God-like level.
I've been trying for years to visualize. My SO is an artist and my mother a psychologist, so I've been trying to gets tips and tricks from them. I never managed to even get a hint of color.
I definitely think the ability is there and can be unlocked. Like the way you can sometimes figure out how to fire some random muscle on your body with practice.
While I was in jail I managed to imagine images in vivid color and detail several times, but never on demand. It would happen randomly when I was just lying, bored, thinking about stuff. Each time was one of the most amazing experiences of my life. It was like going from being a caveman looking at scratchings on a wall, to straight into a 3D IMAX movie. The quality was that spectacular, with bright colors and completely 3-dimensional. Since I left jail it hasn't happened again a single time.
I have to wonder if that's how others see inside their minds? If so, I am in awe.
You sound similar to me, girlfriend is an artist with an incredible visual imagination, mother is a therapist, and I was at a 0 on the scale before I met her. My gf and I have had some deep conversations, sometimes assisted by MDMA, and at times to the point of crying in front of her in a state of completely trusting her which uncovered some past trauma, social fears, and other discomforts I needed to work through. Anyway after each of these times it would get a little easier to visualize; simple colors at first, then colorful shapes, now small snippets of images that come in and out. Maybe a 2-3 on the scale. Also my memory has improved, not so much for technical stuff, but just remembering the details of my life which before had huge spans (in years) that I mostly didn't remember.
Anyway this might just be specific to me but something to think about.
I am very interested in the topic, and have been looking into it for ages. I think most people vastly exaggerate their ability to visualize anything. Most people can't really hold a square or a sphere in their mind, rotate it, or change colors. The only people who truly can are really good artists. My point: you may be mis-diagnosing yourself, especially since aphantasia doesn't seem to have clear tests or definitions. How could it, when the obvious test - asking a person to draw what they see - confounds the whole thing with one's drawing skill.
Is it really unique? I can visualize a sphere, rotate it, rotate the "camera", see it in wireframe, apply any kind of texture, reflections, make it bounce, like working with CAD software. I can picture the image through a fishbowl lens, or through telephoto. However, I do not believe, for example, the reflections or the light sources to be realistic. I can "see" the effect of changing the lenses, but I don't think they correspond to reality. I think that's where people exaggerate. The dimensions, light sources and reflections are not based on reality.
I can picture anything that I want. Movie scenes with my friends faces in them. I always thought everybody could do this. If it's somewhat unique, can I use it for something?
This is absolutely exceptional if true. As far as I am concerned you should be a good artist out of the box. The argument that "well, it's about drawing the line well" is mostly bs - it's about being able to visualize proportions, distance between things, etc. Most artists actually don't - they follow rules. A person starts as a square and gets broken into quarters, with each quarter denoting say the middle of the chest, the groin, the knees, etc. Each box is then broken down more (eyes in the middle of the top box, etc). If you can actually visualize things accurately, especially "wiremap" them, you should be able to draw a portrait from memory in 3D. Otherwise you are lying to yourself.
As far as use - Tesla was said to have "run" schematics in his head to see what would break. I believe it. This allowed him to rapidly prototype.
Also, your working memory should be off the charts, which should be indicative of a very high IQ.
I am an atrocious artist who absolutely kicks ass at those mental object rotation tests. I can very easily manipulate objects in my head, but draw a picture? It's an ugly mess.
Could be that I am mis-diagnosing myself. I've never seen a mental image in my mind. I've never been able to conjure one and I've been trying for years before falling asleep to conjure even a sense of color. Nothing. Black.
To be honest, it doesn't feel like such a handicap to my life that I would start submitting myself to clinical trials. If the worst to come out of my mis-diagnosis is this post, I can live with it.
I’ve always thought of myself as being fairly good at visualization.
For example, I can imagine multiple 3D shapes at one time, rotate them, keep track of which direction a face is pointing on each one, etc.
However, I don’t really “see” any image. It’s more like a feeling of seeing it. Now I’m wondering to what extent other people actually see things they imagine…
Now, close your eyes, imagine another thing, and describe it - then compare the description of the imagery from reality with the visualised one. This presupposes people describe things visually even when directly seeing them, and not in other modes (texture, sound, smell, etc.).
Same here. Recently tried to explain this to someone who has vivid imagery, but it was challenging. It seems we do have a wildly different experience of life in this aspect.
Imagine you sit at your desk all day answering emails. Emails come in, responses go out. Except when you step back from the desk, it's just a black void. Information from your eyes? That's just an email saying what grandma looks like. Pain in the leg? Re: URGENT. Nothing exists beyond the emails. The emails are reality. The brain's representation language is the same as its actual language. Why have more than one language?
Compression is a component of general intelligence. A few years ago I was very sceptical of machine learning ever leading to general intelligence. I've since changed my mind. There are a lot of parallels to this work and the concept of "embeddings" in machine learning.
Intelligence requires the ability to generalize. A prerequisite for generalization is the ability to take something high-dimensional and reduce it to a lower-dimensional representation to allow comparison and grouping of concepts.
We're doing this all the time. Take a pen for example: we're able to combine information from sight, touch, and sound. Through some mechanism, our brains reduce the multi-sensory information and create a consistent representation that is able to invoke past memories and knowledge about pens.
Our brains encode the embeddings in a very different way to deep learning neural networks, but the commonality is that both are able to compress data into a _useful_ representation. Note that as a result of this, the quality of the compression is important. Some forms of compression might be very efficient but they also tangle concepts together, resulting in loss of composability. The ideal compression (from an intelligence point of view) is both information efficient and maximally composable.
A nice definition of intelligence I've heard is exactly the ability to form models of the world with predictive power. And a model is essentially a compression of real-world data. Physical laws are a great example of this.
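As a toy illustration of "a law is a compression of data" (made-up noisy measurements, numpy only): instead of storing a thousand data points for a falling object, you can keep a couple of fitted parameters and regenerate the rest.

```python
# Toy sketch: 1000 noisy (time, distance-fallen) samples reduce to the fitted
# parameters of y = 0.5 * g * t^2.
import numpy as np

t = np.linspace(0, 3, 1000)
y = 0.5 * 9.81 * t**2 + np.random.normal(scale=0.05, size=t.shape)  # fake measurements

a, b, c = np.polyfit(t, y, deg=2)   # fit y ~ a*t^2 + b*t + c
print(2 * a)                        # ~9.81: a few numbers now stand in for 1000 points
```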
Can you recommend any philosophy of science (or life) treatises about this?
I long considered myself a Popperian. A few years ago, I decided that I'm a "Predictionist" (a placeholder made up word until I learn better). I'm struggling to figure out what that even means.
I still agree with Popper. Emphatically.
I'm just tired of arguing. I forfeit. I give up. I no longer believe that discourse is helpful, that people are persuadable, that we can share Truth.
Instead, I just want to know the predictive strength of someone's Truth.
For example:
The Earth is flat? Oh? Cool. Please, tell me, how does that Truth help me?
My research involves applying Popper's epistemology to natural language processing.
So I am quite involved in this.
As far as I can tell, almost all of what Popper tried to do with quantification measures of information are exactly what you are talking about.
In particular, Conjectures and Refutations covers this really extensively, so I'd recommend reading or re-reading that, though The Logic of Scientific Discovery covers an early form. David Miller's Critical Rationalism covers it well too, and some of its problems.
I.e:
His notion (shared with positivists like Carnap and others) that science is a set of logical statements. A collection of statements is a theory, a theory entails a set of predictions which is called the information content of the theory (sometimes I(c) or C(I) in his notation).
If I(c) > I(c'), where c' is a competing theory, then the theory is said to have more explanatory power, i.e. it makes more predictions.
This is part of his definition of what makes a good explanation and what David Deutsch calls "hard to vary".
The other main part of the definition is about whether these statements reflect Truth in any way. That is covered by his notion of verisimilitude, or truthlikeness, which is quantified as the degree to which the information content of a theory I(c) can be corroborated.
Both of these are essentially "the predictive strength of someone's Truth".
The problem you and many others have probably encountered is that the information content of an explanation is *intractable*: it's an open set of statements which cannot be fully fleshed out. So we can never have a perfect quantification of whether my theories or your theories are better... there may indeed be statements entailed by flat-earth theory that have yet to be discovered and could indeed be more corroborated and provide better information content than a non-flat-earth theory! Popper revels in this fact and fully embraces it.
Beyond Popper, though, we need to understand more of the dynamics of "predictive strength" - I am finding causality a great source of literature for this, for which I would recommend Judea Pearl and The Book of Why, among other things.
For philosophy of science in particular, there are tons of great articles on the Stanford Encyclopedia of Philosophy about explanation that go into this in depth - in fact the positivists like Carnap wrote amazing things about this which I would recommend.
Slight tweak to this imo: models that can predict which new reframings/samples of current scientific-community-consensus SOTAs/benchmarks/datasets will disprove contemporary consensus is science :)
That's actually one of the flaws of Austrian economics. Mises just postulates that certain of his personality traits apply to all humans and ignores the personality traits of other humans. He is also confusing the informational theoretical world inside his mind with the harsh reality of the real world.
The term "capital depreciation" kind of summarizes all sorts of flaws. Gold is considered an ideal currency, yet it suffers from no capital depreciation at all, while all other forms of physical capital depreciate. Once you have built all the houses, roads, schools, etc. there is nothing left to build. If you happen to have built too many schools, then you are a fool and should convert the school back to gold by demolishing it and selling the scrap - in other words, idle capital that is left to depreciate must be eliminated.
When you think about it, this secretly rewards monopolies and all sorts of other nonsense that capitalism suffers from. For 10 people there are exactly 10 jobs. If a workaholic hoards a job by working 80 hours per week to retire early, then somehow there won't be enough "jobs" for everyone. This gives employers the upper hand during negotiations, as employees start competing with each other instead of employers competing over employees. The current system is also dumb because it tries to represent capital depreciation through an increase in the price level, i.e. 2% inflation targeting.
I disagree with this definition. We have yet to produce a perfect model of the world (aka, a theory of everything). All models produced by "science" thus far are "wrong", at least on some level (ex. Newton's model doesn't cover relativity). I think "Creating models with predictive power is also a precise definition of science." is a fair description.
I think it's fair to say that a "theory of everything" is sort of the great work of any particular field of science. In practice that means refining models, but the model-building is ancillary to the truth-finding, not the other way around. Of course, if the truth wasn't predictive we're all just screwed, but that doesn't mean that whatever is predictive is necessarily the truth. It just means we might all be screwed.
I think that most work in quantum physics negates that claim.
While we are improving our predictive power, we’re still baffled by the underlying nature of reality. We don’t know the “mechanism” by which the quantum world works.
I like to define intelligence as knowing data, but knowing data only creates idiot savants. What is lacking in AI today is artificial comprehension. What we're calling "artificial intelligence" lacks comprehension. Until the concepts handled by AI are composable, forming virtual operating logical mechanisms, and an AI comprehends by trial combinations of concepts (virtual operating logical mechanisms) we are only creating idiot savants incapable of comprehending what they do.
In the incredible story "Funes the Memorious" the eponymous Funes has an absolutely perfect memory, but is functionally mentally handicapped.
He can't even abstract to the existence of "trees" because he can recall and diff all of the details of every tree he's ever seen.
He can't even identify that he's seen a particular tree before, because he can diff how different it looked in a particular configuration of leaves and shadows due to different wind and cloud cover.
Not capable of functioning independently or surviving for any long period unassisted but having a brain and cognition setup that allowed for amazing feats of mental wizardry. If you could have that ability and function normally in society you could do some astounding things.
I thought that the idea of Mentats (human computers from the Dune novel) were kind of ridiculous, but yeah, when you look at savants like Peek, makes you kind of wonder if such a thing would be possible.
Yeah, if you had the appropriate brain surgery as an infant and took highly specialized drugs to maintain or accelerate your mental functioning, you too could be like Kim Peek.
I like that Frank Herbert imagined that it would be possible and went with it before we had any real proof that it could be, and then he turned out to be right.
I still think the approach outlined in the paper (using embeddings to map the physical world) is sound especially for the field of self-driving which is in dire need of generalization, but I've since changed my mind and currently do not believe we can achieve AGI (ever).
While embeddings are a great tool for compressing information, they do not provide inherent mechanisms for manipulating the information stored in order to generalize and infer outcomes in new, unseen situations.
And even if we would start producing embeddings in a way where they would have some basic understanding of the physical world, we could never achieve it to the level of detail necessary - because the physical world is not a discrete function. Otherwise we would be creating a perfect simulation (within a simulation?). And the last time I was playing God was in "Populous".
> I've since changed my mind and currently do not believe we can achieve AGI (ever).
Considering we (as in humans) developed general intelligence, isn't that already in contradiction with your statement? If it happened for us and is "easily" replicated through our DNA, it certainly can be developed again in an artificial medium. But the solution might not have anything to do with what we call machine learning today and sure we might go extinct before (but I didn't have the feeling that's what you were implying).
It is not a contradiction as I meant "achieving" in the context of creating it (through software).
The fact it happened to us is undeniable (from our perspective), but the how/why of it is still one of the many mysteries of the universe - one we will likely never solve.
FWIW this is the same argument once made against human flight. In the late 19th century, there were a lot of debates in the form
> Clearly flight is possible, birds do it
> Sure but how/why is one of the many mysteries of the universe, one we will likely never solve.
"Man won't fly for a million years – to build a flying machine would require the combined and continuous efforts of mathematicians and mechanics for 1-10 million years." - NYT 1903
The real answer to how birds fly is that they're extremely light weight so that wing muscles can lift them. Common pigeons or seagulls only weigh about 2 or 3 pounds. The largest birds of prey top out around 18. Anything heavier is flightless. A 150-pound human isn't getting anywhere on wing muscle power.
The largest pterosaurs are estimated to have had wingspans of more than 9m and to have weighed up to 250kg (550 pounds), and we believe they were able to fly. [1]
But that's not the most relevant point here. The fact that humans did achieve flight, but through a different method than birds, is exactly a supporting argument that we might achieve AGI with a different approach than how our brains do it.
There are countless similar examples. We see a natural phenomenon, we know it's possible and we find a way to replicate the desired effect (not the whole phenomenon) artificially. I haven't heard anything here that it will be any different for intelligence, except that we don't know how yet.
The chain of reasoning that everything observable in nature is replicable by humans would also imply us being able to replicate creation of a living cell from non living material and then endow that organism with consciousness.
Furthermore, it would also imply us being able to replicate the birth of stars, black holes, and "the big bang" itself.
I am not qualified enough to say whether there is anything fundamentally impossible about any of these, but that would basically make the human race "God".
> us being able to replicate creation of a living cell from non living material and then endow that organism with consciousness.
Afaik we are very close to artificially creating living cells. This is one recent example [1]. The consciousness part is similar to AGI.
> Further more it would also imply us being able to replicate birth of stars, black holes, and "the big bang" itself.
Some things might be a logistical challenge rather than one of knowledge. Fusion energy attempts to replicate the way stars produce energy and we already managed to replicate the effect, we are just (many years) shy of maintaining it to produce positive energy.
But you might be right and some things are impossible to replicate. I'm much more inclined to believe we can't replicate the big bang than general intelligence, as mother nature replicates general intelligence millions of times each day. And by now we have started to have a discussion about beliefs rather than knowledge, which is a much healthier way to put it, as we indeed don't know.
> Afaik we are very close to artificially creating living cells. This is one recent example [1].
I beg to differ. It may look impressive on the surface, just like GPT-3 looks impressive on the surface, but it is far from the real thing. It is just another extension of the ladder to the Moon.
The effort described in the article is nowhere near a living cell. It lacks protein building and DNA/RNA mechanisms. They basically describe a group of nanomotors.
I can recommend watching James Tour on this very topic [1] and Stephen Meyer on the related topic of intelligent design [2]. Those two lectures were eye-opening for me in learning more about this field. Note: both of them are self-confessed theist scientists, which to me did not represent a problem (my views are agnostic, and it only made it more interesting, as you rarely get to hear views on these matters different from pop-sci).
> The consciousness part is similar to AGI.
It is not clear what you mean by that. One thing is to build computer code and then have it manifest 'intelligence'. A whole other thing is doing the same with organic matter that cannot be 'programmed', even if we knew how to do it (let alone that there is no evidence 'programming' is responsible for consciousness at all to begin with).
This is also known as the 'hard problem of consciousness', and David Chalmers is considered one of the leading experts in the field [3]. Basically the smartest scientists in the world are clueless about this and do not even know where to begin, in many ways similar to AGI.
> Some things might be a logistical challenge rather than one of knowledge. Fusion energy attempts to replicate the way stars produce energy and we already managed to replicate the effect, we are just (many years) shy of maintaining it to produce positive energy.
I can see why one can have this position where it seems like we are making progress in everything we talked about, but that is the main punchline of the ladder to the Moon analogy. Indeed it is imaginable, and indeed every step makes us closer. But it does not mean we will ever reach it.
I agree with you that the discussion ultimately boils down to direction and strength of one's beliefs.
I’m curious why you think that. Do you think it’s a fundamental problem with the discrete nature of traditional computers? Or a problem with scale and computational limits? If it’s the latter, if a hypothetical computer has unlimited time and memory capacity, why do you think AGI would remain impossible?
Machines are good at computation, which is not equal to reasoning, but rather a subset of reasoning.
And not only they are good at computation, but they are exceptionally good at it - I have no illusion of trying to compete with a machine doing square roots or playing chess. And increasingly harder problems are being expressed as computation problems, with more or less success - most famously probably self-driving.
But at the end of the day it feels like using an increasingly longer ladder to reach the surface of the Moon.
While imaginable, and every time we extend the ladder the Moon does get closer, it is fundamentally impossible.
Ever since Gödel we’ve had a pretty convincing proof that there is nothing that you can do in terms of reasoning that can’t be expressed using computation. And since Turing we’ve got a framework that shows there’s nothing computable that you can’t compute using a universal computer.
So unless there’s something mystical beyond the realm of mathematics to ‘reasoning’ it can’t be a superset of computing.
If a finite amount of matter in a brain with a finite amount of energy can do it, then a universal computing machine with a finite amount of storage and a finite amount of time can do it.
There are actually a lot of well-defined things beyond the power of a Turing machine (for example a Turing machine plus a halting oracle that only works on Turing machines without a halting oracle), but in terms of finite amounts of electrons doing normal low-energy electronic stuff you are quite likely correct. Humanity may go beyond computability if, as some papers have suggested, quantum gravity requires solving uncomputable problems.
Even if our brains reason based on quirks of quantum mechanics (seems unlikely given the scale at which neurons operate), what stops us from creating non-biological machines that interact with QM in the same way to produce artificial reasoning?
I am not saying that anything more than a really big computer is necessary for reasoning, only that one day physics knowledge may reach beyond the Turing machine (quantum computing does not).
Do you believe human brains contain a halting oracle? Or the moral equivalent of one - something that enables our brains to accomplish some non computable reasoning task?
It's semantics at this point but we did not create ourselves, it was a complex process that took billions of years to create each one of us. Something being conceivable isn't the same as it being practically possible. I can imagine what you propose, but the same goes for traveling to distant stars or a time machine for going to the future. All perfectly possible in theory.
Intelligence is an abstract concept, it depends on what exactly one means by that. I have watched a rockets take off from earth. I have never seen a self-aware machine.
Thanks for your perspective. We’re still in disagreement but I wouldn’t bet on either side of the AGI debate with any significant conviction.
Embeddings are very good at a few things: combining concepts (addition), untangling commonalities (subtraction) and determining similarity between concepts (distance).
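A toy sketch of those three operations with hand-made vectors (real embeddings are learned and much higher-dimensional, so treat the numbers as purely illustrative):

```python
# Illustrative only: tiny hand-crafted "embeddings" to show addition,
# subtraction, and distance (cosine similarity) on concept vectors.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

king  = np.array([0.9, 0.8, 0.1])   # made-up dimensions: roughly royalty, maleness, femaleness
queen = np.array([0.9, 0.1, 0.8])
man   = np.array([0.1, 0.9, 0.1])
woman = np.array([0.1, 0.1, 0.9])

combined = king - man + woman        # subtract one concept, add another
print(cosine(combined, queen))       # highest similarity of the four
print(cosine(combined, king), cosine(combined, man))
```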
> While embeddings are a great tool for compressing information, they do not provide inherent mechanisms for manipulating the information stored
What are the manipulations you’re referring to? I would love to learn more. From my understanding, embeddings actually provide great generalisation. If you have a well conditioned embedding space then you can interpolate into previously unseen parts of that space and still get sensible results. That is generalisation to me. Many current ML methods do _not_ result in a fully meaningful embedding space but my hunch is that we will get there with future insights and advances.
> We’re still in disagreement but I wouldn’t bet on either side of the AGI debate with any significant conviction.
That is probably a superior position to hold. I am agnostic by nature, and interestingly this is one of the rare topics I've taken a hard position on. It could be a result of the years spent in the field but also some kind of bias.
> What are the manipulations you’re referring to?
Need to take a step back and mention that in the field of AI there is a great debate between symbolic and non-symbolic approaches. (and after decades spent with AI under symbolic approaches domination we are now in the golden age of non-symbolic AI; with symbolic starting to have a comeback. this podcast can be a good starting point to learn more https://lexfridman.com/gary-marcus/ - although I disagree with GM on many things - and this tweet for learning about symbolic making a comeback https://twitter.com/hardmaru/status/1470847417193209856)
Basically embeddings are "non-symbolic AI" (which is great and this is where their huge potential stems from), but the very way they are generated and then later utilized is completely "symbolic". Which means the limits of embeddings are defined by the limits of the (in this case human-written) symbols used to define them. Hope that makes sense.
I think AGI will remain out of reach. Even a simpler thing like level 5 self-driving, which is only like 0.3 AGI or something, will remain forever out of reach no matter how much compute we throw at it (though I also think that if we ever reach 0.3 AGI we will also reach 100%).
The reason is that the mundane world keeps surprising us everywhere we look and constantly keeps creating more questions than answers. Just look at the questions the field of quantum mechanics is trying to tackle, but also every other field of research science - astronomy, genetics, biology, anthropology, even mathematics... Now imagine trying to keep up with all that - by writing code.
This also ties in to the cybernetic concept of the law of requisite variety, where adaptable entities need to be able to compress their sense-data about their environment into an internal model that corresponds in complexity to their need to act - this necessarily involves compression as the totality of reality is effectively infinite and can't fit between your ears.
There's also the Hutter Prize that ties data compression directly to intelligence through Kolmogorov complexity.
Information and cybernetic theories cut pretty close to a general theory of intelligence in my opinion!
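A crude way to see the compression/structure link (nothing to do with the actual Hutter Prize setup, just zlib on toy data): data with learnable regularities shrinks a lot, random bytes barely at all.

```python
# Structured data compresses far better than noise, because a compressor can
# exploit (i.e. "model") its regularities.
import os
import zlib

structured = b"the cat sat on the mat. " * 1000    # highly regular
random_data = os.urandom(len(structured))          # no structure to exploit

print(len(structured), len(zlib.compress(structured)))    # tiny compared to the input
print(len(random_data), len(zlib.compress(random_data)))  # roughly as large as the input
```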
Already plugged this book elsewhere in the thread, you might be interested in "The Mind is Flat". One chapter of the book explores the concept you're describing. Our brain creates the illusion of a "full picture" when often our imagination and internal representation is quite sparse. I think that's one of the key impressive qualities of our brains and general intelligence. We only do the minimum necessary imagination and computation. As we explore a particular concept or scene, our brains augment the scene with more details. In other words, our mind is making it up as we go along.
Can you expand on this? Can you give an example of a kind of image it might work well with? I’ve always assumed the apparent detail of mental images was a kind of illusion, a bit like the illusion of detail outside the centre of the visual field.
I found out about the separation when I realized I can read Chinese perfectly fine but can't write it. When I'm writing, or trying to write it, I employ the technique you described. Though more often than not, I'd have tiny parts missing here and there.
I have always thought that the best measure of intelligence is compression of information. If you can create a smaller, abstract model that is still accurate despite a loss in details, then you are intelligent.
Interesting counterargument from AI researcher François Chollet (creator of Keras and one of the main contributors to TensorFlow): https://www.youtube.com/watch?v=-V-vOXLyKGw
>>> Intelligence requires the ability to generalize
Counterexample: I saw a TED talk video [1] by Dr. Grandin (autistic) telling that she is unable to generalize a "church" (starting at 02:50). I would say that she is extremely intelligent though
The human brain also forgets, something that may be a feature instead of a bug. Also, beyond compression, brains are simulation machines: imagining new scenarios. Curious to understand whether ML provides anything analogous to simulation that isn't rote interpolation.
I don't think it's true. I can imagine a lot of aspects of systems around me I cannot possibly experience in any way, except maybe them leading to some outcome that I might experience as well. I sometimes do verify this experimentally, but that comes later.
GP said something about compression being a component of intelligence, the parent said brains also simulate, then I said yeah, I agree, and that I believe the content of the simulation is the experience itself, whereas people often think there are two things, themselves and the world. I don't believe that at all. There is only one thing happening: the experience you are having now.
Ah ok, that doesn't work for me because I want to call dreams and hallucinations experiences as well. And those are disconnected from some idea of reality.
The primary aspect is first person perspective, not congruence with the external world.
I am quite a novice in ML topics, but isn’t this concept of simultaneously training a generator and validator sort of this?
I don’t know the exact term but I think of deep fake generators with an accompanying deep fake recognizer working in tandem bettering each other constantly?
>In fact, she was not very good at memorizing anything at all, according to the study published in Neurocase.[1] Hyperthymestic individuals appear to have poorer than average memory for arbitrary information.
Absolutely. Generative methods are all the rage now. Those methods work on learning information-rich representation spaces. You could argue it's still "interpolation" but instead of interpolating in data-space per se you are interpolating in representation-space.
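By "interpolating in representation-space" I mean something like the sketch below; encode/decode are placeholders for a trained generative model, not any specific library:

```python
# Sketch of latent-space interpolation. decoder() is a stand-in for a trained
# generative model's decoder; z_a/z_b would come from encoding two real samples.
import numpy as np

def interpolate(z1, z2, steps=8):
    """Linear interpolation between two latent vectors."""
    return [(1 - t) * z1 + t * z2 for t in np.linspace(0.0, 1.0, steps)]

z_a = np.random.randn(128)   # placeholder for encoder(sample_a)
z_b = np.random.randn(128)   # placeholder for encoder(sample_b)

for z in interpolate(z_a, z_b):
    # decoder(z) would yield the in-between samples; with a well-conditioned
    # representation space these look like plausible blends, not pixel crossfades.
    pass
```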
I saw a presentation once (It was not recorded, so I can't link it) that said the difference between real intelligence and artificial intelligence is the ability to quickly learn.
As an example, he said imagine if he invented a word. Poditon. And he told us that a poditon is any object that can fit on a podium. Instantly, you know whether or not any object can be classified as a poditon. A laptop is a poditon, but a car is not.
We are not at the stage where we can just tell a program "Any object that can fit on a podium is a poditon" and then ask "Is X a poditon?" and get a correct answer. And we probably won't be there for another couple decades.
So, I tried this out with GPT-3 in the OpenAI Playground. (The model called text-davinci-002.) My prompt looked like this:
Definition: A blorple is anything that is red and more or less round.
Question: Is a tennis ball a blorple?
Answer: No, because although tennis balls are round they aren't red.
Question: Is a cherry a blorple?
Answer: Yes, because cherries are red and approximately round.
Definition: A poditon is anything that can fit on top of a podium.
Question: Is a laptop computer a poditon?
Answer:
GPT-3 says: "Yes, because laptop computers are small enough to fit on top of a podium."
Is a normal-sized automobile a poditon? "No, an automobile is too large to fit on top of a podium."
Is the sun a poditon? "No, because the sun is too large to fit on top of a podium."
Is a human being a poditon? "Yes, because human beings are small enough to fit on top of a podium."
Is a house a poditon? "No, because a house is too large to fit on top of a podium."
While generating those answers it also spontaneously answered the question for tennis balls (yes) and books (yes).
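If anyone wants to reproduce this outside the Playground, something along these lines should work with the older (pre-1.0) openai Python client; the parameters are my guess at reasonable defaults, not what the Playground uses:

```python
# Sketch only: assumes the legacy Completions endpoint and the pre-1.0 openai
# Python client, with access to text-davinci-002.
import openai

openai.api_key = "sk-..."  # your key

PROMPT = """Definition: A blorple is anything that is red and more or less round.
Question: Is a tennis ball a blorple?
Answer: No, because although tennis balls are round they aren't red.
Question: Is a cherry a blorple?
Answer: Yes, because cherries are red and approximately round.

Definition: A poditon is anything that can fit on top of a podium.
Question: Is a laptop computer a poditon?
Answer:"""

response = openai.Completion.create(
    model="text-davinci-002",
    prompt=PROMPT,
    max_tokens=60,
    temperature=0,        # deterministic-ish answers
    stop=["Question:"],   # stop before it invents more questions
)
print(response["choices"][0]["text"].strip())
```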
> We are not at the stage where we can just tell a program "Any object that can fit on a podium is a poditon" and then ask "Is X a poditon?" and get a correct answer. And we probably won't be there for another couple decades.
If that presenter actually said that, they need to take a look at "Few shot learning in language models" (just Google the term and start reading the papers).
> A prerequisite for generalization is the ability to take something high-dimensional and reduce it to a lower-dimensional representation to allow comparison and grouping of concepts.
I've been thinking that it might actually be the other way around - intelligence is taking lower-dimensional data and being able to infer higher-level representations in terms of context, meanings, and other abstractions. I.e. understanding when a stop sign isn't a stop sign.
It's actually the compression that forces it to learn higher level concepts.
In your stop sign example, say we are trying to teach a visual model the difference between toy stop signs and real stop signs.
To train it you feed it a 3D model of the world and the actions a person takes in response (i.e., ignoring toy stop signs but stopping for real ones). Once the embedding is well trained (with lots of data), if you then run it through something like UMAP to reduce the number of dimensions in the embedding from hundreds to 2 or 3, you'll see it has "discovered" the concept of "scale" - all the small toy stop signs will be clustered together and the real ones clustered elsewhere.
That generalisation forced by compression is where the abstraction of "scale" comes from.
(Of course in real life you'd use a more complex model than just an embedding for this, but in principle this is the idea).
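Concretely, the inspection step might look roughly like this (file names, embedding dimensions, and labels are made up for illustration; assumes the umap-learn package):

```python
# Rough sketch of the "inspect the embedding" step: project learned embeddings
# down to 2D and check whether toy vs. real stop signs separate into clusters.
import numpy as np
import umap                      # pip install umap-learn
import matplotlib.pyplot as plt

embeddings = np.load("stop_sign_embeddings.npy")   # hypothetical file, shape (n, 256)
labels = np.load("stop_sign_labels.npy")           # hypothetical: 0 = toy, 1 = real

coords = umap.UMAP(n_components=2, random_state=42).fit_transform(embeddings)

plt.scatter(coords[:, 0], coords[:, 1], c=labels, s=5)
plt.title("If the model learned 'scale', toy and real signs form separate clusters")
plt.show()
```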
How do theories such as "The 100th Monkey", as well as information transferred via DNA to offspring, translate to ML|AI at all?
For example, couldn't a sufficiently developed AI modify some code/libraries it utilizes/learns from/creates, to ensure any new spawns of said AI/ML/Bot has the learned previous behaviors?
I doubt 100th Monkey will ever hit AI.
So that's an interesting aspect of the limits to AI 'evolving'.
I think compression is a bad word or description. Another definition of intelligence is sometimes to differentiate essential from superficial information. Of course that often aligns with the application of compression of information.
>able to compress data into a _useful_ representation. Note that as a result of this, the quality of the compression is important. Some forms of compression might be very efficient but they also tangle concepts together, resulting in loss of composability
---
I wonder if various factors inform how/what compression is used on a memory...
For example, a memory of putting an object back where it belongs/got it from vs. the memory of a violent attack: the latter passes through the lens of emotion (trauma), and thus the two memories will be stored differently.
It's interesting in that I have been wanting to post an Ask HN on memory and dreams...
Now with this post, and your comment, I will post that.
---
The idea is that the surrounding meta-information of a memory is important.
Lenses of senses that colour a memory are many, and individualistic.
i.e.
A person who is a psychopath has an emotional block on the lens that they would see their actions through (remorse, guilt, empathy, etc.) - thus they may not recall or RE-MIND themselves of an action/situation.
A memory that is laid with a sensuous experience, such as sex with someone you love/lust deeply may last a lifetime.
Certain things that one does/says can also lead to a lifetime of regret; a cringe-worthy action/comment from decades ago can still haunt your thoughts.
---
I think the mystique between ML and biological memories is a really interesting space. An ML|AI based system will never achieve the 100th monkey or DNA|biological transfer of information, but an approximation/facsimile based on evolved|updated libraries/files/code maintained exclusively by the AI entity will/does exist.
Speculating here: if the brain really uses embeddings similar (in concept) to neural network embeddings, the mechanism could explain a lot of the peculiarities of the brain. Embeddings are naturally entangled, so are memories. For example, a specific smell can evoke a previous memory.
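The "a specific smell can evoke a previous memory" intuition maps fairly directly onto nearest-neighbour retrieval in a shared vector space. Here is a toy sketch; the vectors and memory labels are made up, and this is only an analogy, not a claim about how the brain actually does it.

    # Sketch of "a smell evokes a memory" in embedding terms: if cues and
    # memories live in one shared vector space, a partial, noisy cue can
    # retrieve the nearest stored episode. Vectors here are invented.
    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    rng = np.random.default_rng(1)
    memories = {
        "grandmother's kitchen": rng.normal(size=64),
        "first day of school":   rng.normal(size=64),
    }

    # A smell cue is an incomplete, noisy version of one stored episode.
    cue = memories["grandmother's kitchen"] + 0.8 * rng.normal(size=64)

    recalled = max(memories, key=lambda k: cosine(cue, memories[k]))
    print(recalled)  # almost certainly "grandmother's kitchen"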
Could you please stop posting unsubstantive and/or flamebait comments to HN? You've been doing it a ton, unfortunately, and we have to ban that sort of account. It's not what this site is for, and it destroys what it is for.
What we want here is thoughtful, curious conversation, not people bashing each other's comments or inflicting snark on each other or ideological talking points.
In my experience this plays out on multiple timescales. When you get older you start to have entire decades of life boiled down to the factual knowledge you gained plus a handful of episodic memories.
It's a good reminder to write shit down and take lots of mundane pictures. You don't realize until it's too late though.
A friend of mine from some time back chatted about this once. His take was that as you get older, your "mental models" grow and are able to cover larger parts of your day/week/month and your mind simply keeps the important parts but lets the rest fade.
When you're younger, those models are less complete and larger parts of your waking moments are needed to build the foundations of these models, so you feel like time is slower since so much more of your time is kept "fresh".
I'm probably butchering his take on it, but I blame my own mental models for compressing away the finer details!
> When you're younger, those models are less complete and larger parts of your waking moments are needed to build the foundations of these models, so you feel like time is slower since so much more of your time is kept "fresh".
As a corollary, if you want to keep on feeling young and feeling time pass slowly, you need to keep on incorporating "new" experiences into your life that extend (or change) your mental models.
Actually this generation will have a very different relationship with the past. Never before were there so many high-resolution traces of your daily life.
There was a great article on this recently. The author made the point that if there was a day, week, or month where there were no backups or photos then you might put less value on that time of your life in retrospect. Conversely there might be a timespan where you took too many photos and might feel like there was more value to be had in that time.
Interesting. I forgot who said that in your best moments you don't have time for anything else. Indeed if you don't take pics it might just be because it was deeply interesting and not worth taking your smartphone out of your pocket.
Indeed. We take photos and videos of our son pretty much every day. He would need a good chunk of his life to review them all, if he wants to, once he grows up and moves into his own house.
> It turned out that either visual stimulus—the grating or moving dots—resulted in the same patterns of neural activity in the visual cortex and parietal cortex. The parietal cortex is a part of the brain used in memory processing and storage.
> These two distinct visual memories carrying the same relevant information seemed to have been recoded into a shared abstract memory format. As a result, the pattern of brain activity trained to recall motion direction was indistinguishable from that trained to recall the grating orientation.
Isn't the alternative explanation that our tooling for inspecting the brain at work abstracts too much detail away for us to be able to tell the difference?
Exactly. Our tooling doesn't even allow us to make clear inferences about neuronal activity yet - all we can image with fMRI as of today is haemodynamics.
The work proposes that because responses to different visual stimuli show the same haemodynamic spatiotemporal response in memory areas, the actual memories must share a common representation.
This is debatable since we know that localized spatiotemporal responses reflect the variability in the vascular tree and blood flow and volume, regardless of the experiment at hand [see 1 for a discussion]. To dare to claim that effects could be neuronal one needs to run a bunch of extra control experiments (vascular reactivity mapping / breathhold hypercapnic challenges, resting-state imaging as a control dataset), none of which were conducted in the proposed work.
This is known to MR physicists, but hasn't yet clicked within departments of cognitive neuroscience.
Right, it's quite obvious that the memory is not being stored bit-for-bit in exactly the same way, because if you ask the person what they saw after the experiment, they will be able to recall the difference between "lines" and "dots".
But the paper is explicitly looking at the representation in working memory, so two obvious possibilities are: one, that the "orientation" and "dotness vs. lineness" attributes are being decoupled and stored separately in working memory (different "registers", if you will); or two, that the "dotness/lineness" is getting stored somewhere else (not working memory, some other memory system) because it's not "behaviorally relevant" (i.e. relevant to the task that the participant is attending to while creating the working memory). I'd guess at the first, because my impression was that essentially everything that makes it into episodic memory starts in working memory, but I'm not a neuroscientist.
I think the OP is getting way ahead of itself with "The findings suggest that participants weren’t actually remembering the grating or a complex cloud of moving dots at all.". The paper is making a much more modest claim that "direction" is recorded in the same underlying way, specifically during a task where you're being asked to recall direction. It's completely possible that this intermediate/common representation would not be generated if you're just looking at the pattern and not performing a task related to direction.
I couldn't find the full paper on SciHub, just the abstract linked in the OP: https://pubmed.ncbi.nlm.nih.gov/35395195/. I'd hope the full paper talks about all this in more detail.
Indeed, it's like saying "the CPU used 16 Wh of energy, executed 1 billion MOV instructions and 2 billion ADD instructions for both these two tasks, thus the algorithm it ran must be identical".
That's possible. But I think the reason this is interesting is that you are seeing the same kind of representation in the brain for two seemingly different phenomena, motion and orientation.
It's intuitive to see how you could represent motion by an orientation (we do this with vectors in math), but it's interesting to actually see it happen.
This sounds much like the old "chess positions" memory test studies - in which chess masters were found to be vastly better than novices at remembering chess positions taken from actual chess games. But just as bad as the novices at remembering random (non-game-like) arrangements of the playing pieces on a chess board.
Plausibly, their years of experience had given the chess masters a far better compression dictionary - for situations within the scope of that experience.
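The "compression dictionary" framing can be illustrated loosely with an ordinary compressor: data that matches the patterns the compressor knows shrinks a lot, while patternless data of the same length barely shrinks at all. This is only an analogy to the chess result, but it's a cheap one to run (zlib and os are in the Python standard library):

    # Structured, pattern-rich input vs. random input of the same length.
    import os
    import zlib

    game_like = b"e4 e5 Nf3 Nc6 Bb5 a6 " * 50      # repeated chess-like structure
    random_like = os.urandom(len(game_like))       # same length, no structure

    print(len(zlib.compress(game_like)))    # small: patterns compress well
    print(len(zlib.compress(random_like)))  # roughly the original size

The masters' advantage only shows up on positions their "dictionary" covers, just as the compressor gains nothing on noise.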
I read somewhere that déjà vu is when your two hemispheres get out of sync for a split moment and record the same input one after the other. But I don’t know if that’s true for sure. Or maybe they’re just rearranging the matrix.
Another theory I read was that some signal from your hippocampus (memory storage) fires, so that the rest of your brain believes erroneously that the current sensory input is coming from memory.
I believe this is the case. I have heard the same thing and it matches my experience every time I get déjà vu. I have this strong sense that what is happening has happened before, but I am unable to relate it to anything nor recall what should happen next.
It's not an indiscriminate lossy compression though, it's a summary your brain finely tuned for a specific audience: yourself.
What's cool in this whole intelligence process is we get to refine the algorithm of what exactly it is we want to keep in the summary.
In "discarding features that aren't relevant" mentioned in the article, we subconsciously pick what is and what isn't relevant.
That's why I think we sometimes have such vivid memories of some childhood scenes: something new happened, our algorithm at that time didn't know what was "relevant", so out of safety it decides to store everything.
The researchers used a magnetic resonance imaging technique to get their data:
> "It turned out that either visual stimulus—the grating or moving dots—resulted in the same patterns of neural activity in the visual cortex and parietal cortex. The parietal cortex is a part of the brain used in memory processing and storage. These two distinct visual memories carrying the same relevant information seemed to have been recoded into a shared abstract memory format. As a result, the pattern of brain activity trained to recall motion direction was indistinguishable from that trained to recall the grating orientation."
Alternative hypothesis: the technique used wasn't sensitive enough to distinguish between how the brain handled the different information types.
My memory made a lot more sense to me when I learned it was a giant associative array, with multiple keys to look things up with. When I forget something I try various other "keys" to find it again, and that usually works.
For example, if I forget someone's name, I'll try their last name, or their spouse's name, guessing names that sound like their name, trying common names, various syllables, other memories associated with them, etc.
If I misplaced something, I'll try to reconstruct what I was doing the last time I remember having the item. When I find the item, that is the key that brings up the memory of putting it there.
A consequence of this is my memories are not in chronological order (not at all like a movie). I can clearly remember events but have no information about what order they are in or when they happened, unless there is some anchor in the memory to tell me (like where I was living at the time).
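A minimal sketch of that "multiple keys into the same memory" idea, in data-structure terms: one stored episode indexed under several cues, so a failed lookup by one cue can be retried with another. All the names and cues below are invented for illustration.

    memories = {}   # cue -> set of memory ids
    episodes = {}   # memory id -> stored summary

    def store(memory_id, summary, cues):
        episodes[memory_id] = summary
        for cue in cues:
            memories.setdefault(cue, set()).add(memory_id)

    def recall(*cues):
        """Try each cue in turn; return every episode any of them hits."""
        hits = set()
        for cue in cues:
            hits |= memories.get(cue, set())
        return [episodes[m] for m in hits]

    store("ep1", "met Alice and Bob at the lake house",
          cues=["alice", "bob", "lake house", "summer 2008"])

    print(recall("carol", "lake house"))   # the second cue finds the episode

Note that nothing in the store is chronological: ordering only exists if one of the stored cues happens to encode it.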
> It turned out that either visual stimulus—the grating or moving dots—resulted in the same patterns of neural activity in the visual cortex and parietal cortex. The parietal cortex is a part of the brain used in memory processing and storage.
>These two distinct visual memories carrying the same relevant information seemed to have been recoded into a shared abstract memory format. As a result, the pattern of brain activity trained to recall motion direction was indistinguishable from that trained to recall the grating orientation.
>This result indicated that only the task-relevant features of the visual stimuli had been extracted and recoded into a shared memory format. But Curtis and Kwak wondered whether there might be more to this finding.
——
That is outrageously bad logic and is basically assuming your conclusion. This is not good science.
The subject is given two tasks requiring working memory. The researchers observe activity in the parietal and visual cortices via fMRI, and find the neural activity between the two tasks is indistinguishable. And conclude
> ... distinct visual stimuli (oriented gratings and moving dots) are flexibly recoded into the same WM format in visual and parietal cortices when that representation is useful for memory-guided behavior.
Seems a pretty big leap to me.
I'm not a neuroscientist, and fMRI is amazing. But I think there's more handwaving about how 'thoughts' and 'memories' are 'encoded' as if the brain were a piece of electronics we fully understood.
There's no magic -- everything we think has to happen at some physical level, but I think there is a generation of neuroscientists who are fooling themselves by projecting a reductionist mental(?!) model of how the brain works that is as yet unjustified, and interpreting all of their results in the light of that model.
I read it as saying they actually got topographical line-like structures in the MRI, similar to the well-established result that there are topographically arranged visual neurons that essentially light up like pixels in response to a scene.
So if that’s right then they are actually measuring a spatial encoding. Anyway, it’s just the abstract so I could be completely wrong.
“The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far.“
This is somewhat like making an inference from a best estimate, using it to develop a plan, and then disregarding the estimate while implementing the plan. It's why the design of your plan is important to get right: the reasoning behind it is about to be thrown away. There is even a certain trauma or frustration in having to go backwards, unless you're prepared for it. Have you ever pulled your hair out at being questioned all the time by passers-by: why are you doing that? Why don't you do it this way? (Usually best translated as: why aren't you doing it my way?) By someone who has no conception of the system that produced the implementation plan? Because I am! Grrr! Or you core-dump everything on them and you get: sorry I even asked. Or you go along with it, only to find later there was a good reason you were doing it the original way, and now there's a lock on the crit path.
This is disregarding the times you’re the one who is wrong.
Which also hints at why logos is hard. Same with debugging. The sanctity of the system that produces the outcomes. Constantly having to remember details. What is happening? Why is it happening? How do you know? How can it be otherwise? Non technical people seem to be able to get away with the first idea that comes to mind, unexamined.
Frameworks, shortcuts, assumptions are developed only at some point to fail you and shoot you right back to first principles. Or you never leave them and the unconcerned dance circles around you. I heard you’ve been having trouble with your tps reports?
Lua indexes from 1 not 0! Are you kidding me!!!? ;_; I went through 5 Adams before I figured that out.
“Professor Henry Jones : Oh, yes. But I found the clues that will safely take us through them in the Chronicles of St. Anselm.
Indiana Jones : [pleased] Well, what are they?
[short pause as Henry tries to recall]
Indiana Jones : Can't you remember?
Professor Henry Jones : I wrote them down in my diary so that I wouldn't have to remember.
Indiana Jones : [angry] Half the German Army's on our tail and you want me to go to Berlin? Into the lion's den?“
To extend further, is that why it's "don't touch my stapler", "get out of my chair"?
This is nothing new; I have read several books and works on neurology, and this is best described as "a simplified representation of the environment". Thanks to signal noise and neuroplasticity, over time the weakest connection points between "remembered" stimuli deteriorate and all that is left is an even more simplified version of a "memory". I am surprised that they had not heard of it yet.
That doesn't sound like quite the same thing. This finding seems to suggest that the memories are compressed from the get-go. Where you are describing why memories get more compressed over time, I think.
The compression already begins with the receptors; maybe I should have started there. Each stimulus/pattern gets more simplified with each neuron layer, e.g. if a region of receptors fires at the same time, fire that one neuron; if not at the same time, inhibit that neuron; if nothing happens, do nothing. It's impossible to "capture" stimuli without compression with neurons in the first place. Information is being "reduced" or encoded, if you will, along the signal path into the brain, and then over time when recalling this information.
It doesn't compress jack fuck all. There is no black board sketch. Specific qualia have a higher cortical resonance, due to overarching reinforcement over time, and they are picked out.
Calling that compression is like calling a string search for the word 'it' compression. Of course, it's not even a string search; all the little unnoticed things still produce some kind of response, and thus a change in the brain structure, memory - might as well call it noise at some point due to the lack of neurons that give a fuck.
It's idiotic.
The more a single stream of information is focused on, the greater degree of resonance that may occur with less endowed qualia (slow down, notice more shit due to neuro-satiation).
I do often share this sentiment, even if I’m not in agreement with your tone. But I’m curious —
> Specific qualia have a higher cortical resonance, due to overarching reinforcement over time, and they are picked out.
> The more a single stream of information is focused on, the greater degree of resonance that may occur with less endowed qualia (slow down, notice more shit due to neuro-satiation).
How did you come upon these conclusions, or this framing? I’m surprised, because what you’re saying meshes well with some details of a model for the neurophysiological basis of conscious experience I’ve been working on… and I don’t think I’ve ever seen anyone else point out these exact connections before.
For all I know this could be general knowledge in some community; or something believed widely at one time, now discredited. I’m operating in a vacuum and have been putting things together from what to me appear to be first principles. I might be reconstructing common knowledge, or just unknowingly restating things I’ve unconsciously absorbed in the past… The potential for which bothers me now that I’m seeing your comment.
Where’s your perspective coming from? :) Pointers to your sources would be much appreciated.
The weird (scary?) point will be when we figure out how to subtly present adversarial information to the brain that will be coded in a way that collides with some target information to induce false recognition/ memories.
I have seen research where false memories were induced into people by photoshopping childhood images of those people into events that did not happen to them — and worse, in 16% of cases just by showing adverts of things that could not happen such as meeting Bugs Bunny at Disney World (wrong franchise): http://people.uncw.edu/tothj/PSY510/Loftus-Memory%20for%20Th...
Fascinating - thanks for the article. It's strange to think that however much we think that we're completely rational and can trust our own memories, we're more like malleable rationalizing machines.
So I remember reading somewhere, probably on HN, that we don't remember real facts but instead we remember our last recall of a particular memory. I've hijacked some unpleasant memories that way. I'll add some colors, a round ball bouncing, all kinds of stuff that'll alter the memory. It doesn't make it totally disappear but it kinda smooths it out.
I definitely construct scenes from a few noted details plus general context. Like what colour is my neighbour’s front door? Not sure, even though I pass it every day.
However if I mentally retrace my steps within a short timespan, it seems that I recall details that I would generally not remember. For instance if I leave my house and think, “Did I brush my teeth?”, I can usually confirm/disconfirm by picturing something very specific like where I placed the toothbrush afterwards.
That makes a lot of sense. One big result from a lot of the subliminal stimuli research scientists do is that nothing that doesn't enter your consciousness and get combined with your other sensory input streams gets preserved by the brain for more than a second or so. As best we can tell, conscious awareness has a far narrower bandwidth than your visual cortex, so of course it's dropping details.
The most impressive part here is the "decompression" imo. Computers are already being used to do stuff that's more or less similar (creating apps, 3D models, pictures, videos from code) but the speed at which a human brain does it is incredible.
It can be pretty inaccurate, though, adding extra objects/words/feelings/circumstances that literally were not there :D
> To take a closer look, they used a sophisticated model that allowed them to project the three-dimensional patterns of brain activity into a more-informative, two-dimensional representation of visual space. And, indeed, their analysis of the data revealed a line-like pattern,...
So are they reading their minds? Is that possible? What does it look like?
This reminds me of this recent book on high-dimensional analysis with low-dimensional models: https://book-wright-ma.github.io/. It looks like our brain is great at finding sparsity in information and compressing it accordingly.
As someone who has problems remembering dates or names of events, I always assumed my brain had poor summary ability. There are other aspects my mental compression likes to make fuzzy: the clothes people are wearing, hair styles. But memory for locations, down to the room, seems relatively lossless.
> The new study, from Clayton Curtis and Yuna Kwak, New York University, New York, builds upon a known fundamental aspect of working memory. Many years ago, it was determined that the human brain tends to recode visual information. For instance, if passed a 10-digit phone number on a card, the visual information gets recoded and stored in the brain as the sounds of the numbers being read aloud.
I'd be cautious over-generalizing that result, because I think it's also been found that different people do this in different ways, and it may be one of the things that distinguishes speed-readers from other readers.
I know when I read text, my brain sounds it out. It's gotten very fast at it, so I can read pretty quickly, but that sounding-out engages auditory parts of my brain that make it hard to read and listen to someone at the same time. Other people I've met simply do not have that limitation, and their description of the qualia of how they read doesn't mention a sounding-out step at all.
Isn't this easy to visualize? Think of driving down the highway. There'll be certain features that you remember in more detail than others. Trees, for example, will generally just be trees with the exception of a few "interesting" ones.
It seems the human brain is compressing memories like an autoencoder. But how does it learn? We are not reproducing the input again from the encoded memories and backpropagating the error. Or is that what we call dreaming?
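For reference, the autoencoder being alluded to looks roughly like this. A purely illustrative sketch (uses PyTorch; the sizes and the random "sensory input" are arbitrary assumptions): compress the input to a small code, then train by reconstructing the input and backpropagating the reconstruction error.

    # Minimal autoencoder sketch: 64-d "experience" -> 8-d code -> reconstruction.
    import torch
    import torch.nn as nn

    encoder = nn.Sequential(nn.Linear(64, 8), nn.ReLU())
    decoder = nn.Sequential(nn.Linear(8, 64))
    params = list(encoder.parameters()) + list(decoder.parameters())
    opt = torch.optim.Adam(params, lr=1e-3)
    loss_fn = nn.MSELoss()

    data = torch.randn(256, 64)   # stand-in for sensory input

    for _ in range(200):
        recon = decoder(encoder(data))
        loss = loss_fn(recon, data)   # reconstruction error
        opt.zero_grad()
        loss.backward()               # the backprop step the comment asks about
        opt.step()

    # After training, encoder(data) is the lossy 8-d "memory"; whether the
    # brain does anything like this reconstruction step (dreaming?) is
    # exactly the open question raised above.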
This seems analogous to the weights in a neural network. In training, essential information about the training set is stored in weights and the rest is discarded. You can’t recover a training sample from a trained network.
Most useful thing I've ever learned about memory: every time you recall a memory, you change it. Memory is not a fixed or static 'historical record'; ultimately, it's unreliable.
Is this entirely true? I remember a lot of my work and stuff. If the work is a few months old it definitely is compressed but two weeks work is still fresh. I also remember every bit of my effort.
Compression is already captured very well by neural networks. The value of using those features (or knowledge, as we say) outside the purview of the training data (i.i.d.) is dismal. Might symbolic AI help?
Except many people have eidetic recall and memories spanning their entire life, so this doesn't hold water - yet another garbage study about the brain that ignores the edge cases.
There are high-res versions of important events. The locus coeruleus-noradrenaline system acts in concert with the amygdala to induce protein synthesis in the hippocampus to produce vivid episodic memories. Such a memory gets rewritten every time it's accessed, but it doesn't become abstracted away like the memory of mundane events.
Semantic memory is what we call our repertoire of facts and gists and otherwise abstract knowledge. You can lose your episodic memories while keeping your semantic memories and vice versa. Which is pretty cool.
Systems consolidation is the term for the hypothesized process where episodic memories are compressed and shipped out to the cerebral cortex for semantic storage.
You can use propranolol, a beta-blocker, to prevent noradrenaline from tagging a given episodic memory as "high priority" and you can sort of force it through the consolidation process this way. And yes, you can use it to treat PTSD.
Great information & a good summary, thank you! If I’m looking to get more in-depth with the known mechanisms of locus coeruleus and episodic memory, would you be able to recommend any sources of yours?
To try and tickle your interest, here’s why I’m looking into it: I have no episodic memory. I have aphantasia. I have hypotheses for explanations that I’m looking to correlate with different pieces of research, and locus coeruleus interactions is one area I have yet to look sufficiently deeply into.
There's a difference between low-res and high-res; that's what I'm getting at. I think it's interesting to know a little about how it might work.
There is something new and interesting about the topic to many people. And the paper itself represents a small blip of progress, though it must of course be hyped up to lure in eyeballs.
I think this is why it's hard sometimes to argue in support of something you believe, even if you're right.
At one point, all of the relevant facts and figures were loaded into your working memory, and with that information you arrived at a conclusion. Your brain, however, no longer needs those facts and figures; you've gotten what you needed from them, and they can be kicked out of working memory. What you store there is the conclusion. If it comes up again, you've got your decision, but not all of the information about how you arrived there.
So when your decision is challenged, you are not well equipped to defend it, because you no longer retain why you arrived at that decision, just the conclusion itself.
It's immensely easier to trust that you arrived at the right conclusion and the person who is in disagreement is missing something, than it is to reload all of the facts and figures back into your brain and re-determine your conclusion all over again. Instead, you can dig in, and resort to shortcuts and logical tricks (that you can pull out without needing to study) to defend what you've previously concluded (possibly correctly, but without the relevant information).
If this finding ends up being generally an approximation of how our brains work, it could explain a lot about what's happening to global conversations, particularly around the Internet and on social media specifically. It also suggests a possible solution; make the data quickly available. Make it as seamless as possible to re-load those facts and figures into your working memory, and make it as unpleasant as possible to rely on shortcuts and logical tricks when arguing a point.
> At one point, all of the relevant facts and figures were loaded into your working memory, and with that information you arrived at a conclusion.
I often say "X was explained to me once and it sounded reasonable, but I don't remember the details anymore."
Sometimes remembering the reasons themselves for X off the top of your head may not be important, but knowing that there are reasons (that you can look up) is.
What the answer to something is may not be as important to remember as remembering that an answer exists.
There can be surprising insights yielded from such an exercise. For example, if I think about what separates breads from cakes and muffins, I am forced to deal with the way that a typical "banana bread" (baked with lots of sugar and without yeast) is really a bread-shaped muffin more than a banana-flavored bread. This might seem overly semantic, but it does reflect differences in how it is baked and what it means nutritionally.
The examples that you're structuring your attempted definitions around (banana bread) come from your intuition. In the ultimate limit your definition would be a complete list of your intuitions.
This is more about the fact that we recognize bread, and definition plays no role in the process of recognition. Even if we define what bread is, that won't play a role in our recognition of anything other than maybe-this-is-bread-plus-I'm-being-asked-to-judge-if-it-is-or-not.
It is literally a definition: it defines the boundary between what is and isn't bread.
There is a lot of context that is needed to get to a positive identification (maybe the word you meant) of bread, but that is true of many definitions present in dictionaries, etc. today.
At risk of really devolving this thread, I’m pretty sure that bodybuilders generally agree that bread is counter-productive in the pursuit of definition :)
Definitions do not have to be computable, even in principle. For example, "a Turing machine that halts" is well-defined although there is no algorithm for classifying things into that bin.
That makes you a bread-oracle O, but doesn't define bread.
Since there are some inputs x0 where O(x0) = False, some x1 where O(x1) = True, and the laws of physics are continuous (yes, even in quantum mechanics), Buridan's Principle implies that you are incapable of deciding the breadness of an arbitrary input in bounded time.
I agree that I cannot decide the breadness of arbitrary inputs in bounded time, although I contend that does not stop me from claiming to have defined bread, on the grounds that the set of Turing machines that halt is well-defined but also has the same difficulty you're describing.
A definition doesn't change: The prime numbers or Turing Machines are the same set regardless of who Putin invades next or what law Biden decides to veto.
But the set of inputs that an oracle implicitly defines, could change if the oracle changes. And you could change your mind or die tomorrow.
So you would need a very large number of definitions of bread, indexed by (time, person). Any one of them could be a valid definition - it's theoretically possible to make you look at 1000 pictures of bread so your brain is encouraged to make a bread-detector neuron, and then scan your brain and calculate its response on any input - but you don't know which one is correct to use for any purpose.
i.e. If I want to start a bakery, should I use your current bread-oracle to define "marketable bread", your bread-oracle as of 5 years ago, should I take a statistical ensemble of brain scans from millions of people, or should I use my own?
It seems like just having a function that returns true on some inputs and false on others doesn't tell you much, whereas traditional mathematical definitions have strict relations to other things.
I don't think this is true? Suppose I define "bread" as "that which has a net positive charge" [1]. Can I not put the bread candidate in an electric field in flat spacetime and measure (the direction of) its acceleration in a bounded time? I suppose I might be depending on its mass being finite, but the observable universe supports that assumption.
[1] I don't think this is a very useful definition of bread.
Remarkably, you are getting downvoted for stating exactly the conclusion of pretty much all philosophical discussion on the matter since the mid-20th century.
Notably, the public reacted similarly then as HN does now, rejecting the notion that meaning is only constructed and, furthermore, hopelessly solipsistic.
It is impossible to share definitions of natural-language words, at least pending advanced brain scanning technology. That's a limitation of physical reality, not a philosophical flaw.
I'm implying that natural-language definitions are physical objects, in your brain, made up of brain stuff, and that you can't write them down in ways that are much briefer than a full description of their physical manifestation, although you can roughly approximate them in something like a dictionary.
I know this is a joke but it seems unnecessary. Most people actually do use evidence and logic to arrive at their opinions. The problem is some people are presented with incorrect or fabricated evidence. Some people draw incorrect conclusions, or maybe some of the evidence is above their head so they ignore that when it's vital to proper understanding. Some people aren't particularly good at logical thinking, or never progressed past introductory levels.
This is all why you can show identical evidence to a group of people and get multiple, sometimes very different, opinions.
"Most people actually do use evidence and logic to arrive at their opinions."
They do not. The brain is a machine of lies designed to keep you alive, rather than to arrive at some pure truth. The vast majority of your brain power is subconscious. Your brain is extremely good at arriving at what it needs to know, not at knowing or truthfulness in general.
It takes an incredible effort in critical thinking (which does not come naturally) to unravel the layers of misdirection and crap your brain has produced in order to come to a kind of objective truth. It's such a headache-inducing process that few will undertake it. Even more so when the outcome of critical thinking is typically uncomfortable.
Perhaps more unsettling is that even the very concept of you is a lie. Not your body, obviously. Your inner self, your identity if you will. You think you're some kind of well defined, consistent character. Carved in stone. One could perhaps summarize you in 10 bullet points and this idea of you is pretty stable over time. That's how you know it's you.
In reality, the brain has established this concept of you because it's in your best interest. Every little piece of input, thought or memory that directly contradicts it (which is constantly) is carefully dismissed whilst the confirmation of the false belief is amplified. Not because it is correct, because it is preferential.
I'm happy to leave you in this confused state on a random Tuesday. You can now think that this guy is full of shit, which proves my point of your brain filtering information that is not in your best interest. Or, you can agree. The outcome is the same. I'm right. Or, rather, my brain thinks it is. Which is what brains do. It's a defensive organ.
I have a feeling if OP had read some of the papers surrounding Daniel Kahneman (and the works Kahneman cited) he wouldn't be so sure about mankind's rationality.
It's like the vast majority of experiments on the subject ends up with "and then they proceeded to use their intuition and who they like more to make their decision".
Also, I think it was "Classical Rhetoric for the Modern Student" that also said that logical arguments are the weakest kind of rhetorical arguments since basically anything else is more likely to convince people.
Interesting thought. Perhaps that is also why people sometimes have a hard time changing their mind when confronted with new information: a certain number of bits of information have led you to your belief, and even if some of those change or turn out to be false, you can't access those bits anymore individually, but only the resulting belief.
Perhaps, the more those beliefs are reinforced, the less likely you are to access their constituents. Sounds a lot like inductive bias, but somehow different from ML.
> why people sometimes have a hard time changing their mind when confronted with new information
Something else happens with me, it's like my brain says "this does not fit in with what I understand, discard it". At a conscious level I don't hear what I've just been told. I have to be told it again, and sometimes more than twice before it finally works its way in. It's a liability for me and a frustration for others and it's just plain peculiar.
I don't think this is too uncommon. I sometimes go through such a phase, also in reading, and what helps me get back on track is to do things really intently for a while. And I mean even basic things, being really aware of what I'm doing and thinking in that moment.
When you don't pay attention to what is currently happening, it's usually because your mind goes on tangents. I'd recommend becoming aware of those tangential thought processes. Mindfulness meditation may help a bit.
"Would I not need to be a barrel of memory to also remember all my reasons? It is hard enough to remember just my opinions themselves!" -Nietzsche in Thus Spoke Zarathustra
I don't think the conversation on the social media is based around data. Most data points that people have are inaccurate (if not false), taken out of context, or used with an incorrect mental model. Once someone states something on social media, it has usually been taken on a viewpoint: at that point data is generally viewed with a confirmation bias type approach.
I am wondering if there is a way to teach everyone to separate facts from values. The facts are the most important part that should be maintained separately (you can do this with notes). Then we need to recognize that different individuals will apply different values and focus on transmitting facts in discussions and let everyone apply their own value system.
Which is also why I think using facts to convince others is a Sisyphean endeavor. It is far more rational to learn rhetoric when you have to argue. Learn to wield fallacies like a weapon.
Of course, this relates back to good-faith, bad-faith engagement. Wielding rhetoric like this constantly deters people from engaging in good-faith, so you also have to develop a heuristic to determine whether or not the individual challenging your assertions is worth engaging in good-faith in the first place.
I've found that 100/100 people just get offended and/or pissed and retreat to their amygdala if you point out a fallacy in their logic. It certainly doesn't help that many people pointing out logical fallacies are in fact wrong (and fallacious) themselves (the "you're using a slippery slope fallacy" for example is fallaciously used all over the place).
I'm becoming increasingly convinced that good faith engagement is essentially impossible. The only reason I engage at all anymore is for the third party that might be an honest seeker who may stumble upon the thread at some point in the future.
>I've found that 100/100 people just get offended and/or pissed and retreat to their amygdala if you point out a fallacy in their logic.
And I am sure I've been guilty of this before, many many times. Being challenged is not a comfortable position to be in. I have since learned to weaken my position to give myself and others some leeway when one of us is wrong.
>I'm becoming increasingly convinced that good faith engagement is essentially impossible.
It is certainly getting more difficult. I think it is still useful to engage with individuals in your chosen social circle honestly and in good-faith, otherwise why are they in your circle in the first place?
Favorited this comment for when my brain remembers "people argue online because of how our memory works", but not exactly how I arrived at that conclusion.
It's extremely difficult to maintain a database of __all__ the citations for __anything__ you ever adjudicated (reached a decision on).
Making things more easily findable and a database of debunked lies might be better.
Also great would be training (for anyone) on how to spot 'magic tricks' in debates / information presentation. E.G. how things might be cut down, remixed, or staged to create something that at a glance is convincing, but with closer examination could just be gaslighting.
Is there a problem? The so-called global conversation concern seems to be simply that some people have differing feelings and their feelings push them to want others to share in the same feelings. To "solve" for those feelings of some implies that their feelings are of greater importance than the feelings of others, but that seems pretty wishy-washy.
Another potential upside of a brain to computer interface (Neuralink), the ability to store every memory you have ever had (while the device was installed) in full resolution.
Assuming of course you maintain a server rack at home with copious amounts of hard drives.
People will still argue that self-hosting is too hard so you might as well just accept that Evil Corp is gonna be the central store of all memories (with a great proprietary format!). Better not think of anything that violates the terms of service.
The ability to experience a memory as precisely as you want, including the option of a full mental transplant, like loading a save file for a video game. See, hear, touch, smell, taste, and think the exact same thoughts as you did 15 years ago. The playback mechanisms will have some caveats, as it may not strictly be possible to playback perfectly, as you are a different person with a different brain and body than say 15 years ago. You could relive something in the first person perspective, or perhaps just observe yourself from a third person perspective.
To a lesser degree, just being able to hear the dialogue in your brain at the time of a memory would be monumental. Then you can get into the business of using tools built around this, such as searching your memories, computing statistical analysis (maybe you can find out why you haven't been able to commit to an exercise habit for the past 5 years?), and so on.
I have aphantasia, so my experience of memories is generally closer to factual recall than sense experiences; additionally I don't have an inner monologue. Which is sort of why I asked: memory is not necessarily a record of our sense experiences. Keeping an arbitrarily precise record of our sense experiences would be quite cool and useful, but that would necessarily be a different physical process than memory, and any "memory" generated from that data would only be an interpretation of what that sensory experience might have been.
When I mean memory I mean every possible electrical signal in the brain, including sensory input. Maybe it won't be possible to "see what your eyes saw 10 years ago" directly into the brain, but perhaps you could render it on a monitor?
When it comes to not having an inner monologue, that complicates the example, but I think it's still possible to work with actual memory. What I suggested was a tool to search your memory by tapping into the words from the inner monologue, but if you don't have that available, you can still search the signals of the brain, it would just be less comprehensible. Say you're trying to quit smoking, you could pattern match the brain signals that are present when you have a craving by checking historical data, and pipe that feedback into a controlled release nicotine patch designed to slowly taper you down over a few months.
Edit: While that particular use case doesn't sound exciting (why not just use a regular patch?), I don't think it's because the possibilities aren't exciting but more so I'm just not the best at imagining what the use cases would be specifically.
I had that thought this morning, knowing I have to present at a design review today!
I think the boring solution is to take written notes when making decisions. As an engineer, I find that architecture documents are very powerful and always worth while.
Announcements like this seems so out in front of what we actually understand. It's not like we can take someone's brain and read memories from it, right?
Parents report that student brains compress memories of the just-ended school day into "fine" or "nothing" depending on the specific interrogative used as a prompt.