> In tests, the man was able to achieve writing speeds of 90 characters per minute (about 18 words per minute), with approximately 94 percent accuracy (and up to 99 percent accuracy with autocorrect enabled).
I'd be interested in knowing how this metric changes over time as the user gains more experience with the BCI device. The article mentions that researchers recorded his neural activity while he was thinking about writing letters. Would the man eventually find that the system is more accurate or faster when he instead learns how to think "the thought that generates the letter A in my BCI device"? Fascinating stuff all around.
Long ago, the accuracy would decline as scar tissue formed around implanted electrodes. Not sure if that's changed in recent years as techniques improved.
How long ago is long ago? 7 years ago I took a course on the then-current state of the art for neural interfacing, and this was nowhere near a solved problem then. There was research going into emulating sea cucumbers, so the electrode could be stiff enough to penetrate the brain but then soften to avoid the buildup of scar tissue. I think that research is still ongoing.
This makes me wonder if "thinking about writing letters" is really an accurate description of what's happening. Was the subject merely thinking about writing letters, or was he actually trying to write them, such that if he weren't paralyzed his muscles would be moving to perform that task?
The subject was instructed to actually try and write the letters; the AI they trained on the electrode outputs attempted to return pen stroke velocities.
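A minimal sketch of what that decode stage could look like, assuming binned spike counts and a simple linear readout (the channel counts, weights, and data below are all invented; the actual trained decoder was more sophisticated):

    import numpy as np

    # Hypothetical setup: 192 electrode channels, spike counts in 10 ms bins.
    rng = np.random.default_rng(0)
    n_channels, n_bins = 192, 300              # ~3 s of neural activity
    spikes = rng.poisson(2.0, (n_bins, n_channels)).astype(float)

    # W stands in for a trained readout mapping neural features to pen velocity.
    W = rng.normal(0.0, 0.01, (n_channels, 2))
    velocities = spikes @ W                    # (n_bins, 2): vx, vy per bin

    # Integrating the decoded velocities recovers the imagined pen trajectory.
    trajectory = np.cumsum(velocities, axis=0)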
You need to establish the initial feedback loop somehow and imagining writing is a good way to do it. But once you have it I’d suspect you could get faster doing what you’re describing.
I agree, and I'm a bit excited/worried that brain interfaces may turn out to be kinda like smart handheld devices: years of failures until the essential "recipe" is discovered (like the iPhone), followed by a tsunami of innovation, functionality, and power - some realized, some not.
Do we have adequate wisdom to wield the powers we are granting ourselves?
Conversely, T5 is 65 years old and just adapting to a completely new way of communicating at retirement age. Comparing that to the neuroplasticity of a far younger subject, I can see a high chance of this outperforming mobile typing. 10 fingers, I'm not so sure.
with a big heap of "maybe". Signal could be a limiting factor, but also, come on, more than 99% correct? That's above the level of a non-professional human transcriber in most situations. I'm not sure I get more than 99% correct when typing; I've made at least six errors typing this sentence alone (though they're quick to fix).
The real improvement, it seems, would be to speed and latency. Looking at the diagram, the sampling is over the course of 3 seconds, which is butt-slow. A good NN would be able to compress patterns in the data stream and blat out more than one letter, or contextually learn letters in the word, or learn whole words, etc. But that is not, it appears, supported by the model.
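To make the "contextually learn letters" idea concrete, here's a toy sketch of combining per-letter decoder evidence with a character-level language model, roughly what an autocorrect/beam-search layer would do (every number here is made up):

    # Log-probabilities from the neural decoder for the next character (invented).
    decoder_logp = {"h": -0.4, "b": -0.9}
    # Character bigram prior: P(next char | previous char), also invented.
    bigram_logp = {("t", "h"): -0.2, ("t", "b"): -3.0}

    prev = "t"
    best = max(decoder_logp, key=lambda c: decoder_logp[c] + bigram_logp[(prev, c)])
    print(best)  # 'h' -- context rescues an ambiguous neural signal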
Interesting that the user has to think of specific letters to spell out a word. I guess the 26 distinct English letters are much easier to parse and separate than the untold thousands of words, especially when those words could be used in different contexts.
I bet the next step wouldn't be to even parse words to make a sentence. It seems to me the next low hanging fruit would be to enable this machine to parse common ideas. I wonder how complex it would be to translate full sentences like "Good Morning", "I gotta take a dump", or "I'm hungry". It doesn't seem like it would be that much of a leap, since the user already has to imagine the idea of different letters. Admittedly I have no idea how different those concepts are, or how they would express themselves in the brain to be interpreted by the machine.
Note that this doesn’t work by detecting his “imagined” letters. It’s detecting him trying to move his paralyzed hand through the motion of drawing the letters on paper.
By measuring the motor cortex activity it seems to be looking at something closer to an “output” of the brain rather than an internal representation.
So detecting imagined letters or words or ideas seems like a different problem than this achievement.
A paralyzed man wrote a book by blinking when someone pointed at the right letter (sort of like how the guy stranded on Mars in book/movie The Martian sent messages to earth). This story was later made into the movie The Diving Bell and the Butterfly.
https://en.wikipedia.org/wiki/The_Diving_Bell_and_the_Butter...
That next step is a leap. I view it as the difference between parsing and processing spoken English and parsing and processing spoken Mandarin Chinese. The letters and numbers are 36 symbols to understand, plus capitalization and punctuation. Understanding words means mapping out the brain pathways for each word.
There is actually a path for this that's been done before, in a way. Dragon Naturally Speaking was evolved this way.
As I understand it, that evolution took decades.
In 1952 Bell Labs came up with Audrey (Automatic Digit Recognition). It was voice specific, and could only recognize the digits 0-9. This is roughly where the BCI linked in the OP is now.
In 1962 IBM revealed Shoebox at the World's Fair. Shoebox could understand 16 English words. It would listen to the words and complete an instruction, for example adding up numbers and providing the result.
Harpy came in 1971. Funded by DARPA and developed through a collaboration between CMU, Stanford and IBM, Harpy could work with ordinary speech and pick out individual words, but it only had a vocabulary of around 1,000 words.
In 1974, Kurzweil forms Kurzweil Computer Products (KCP) for development of pattern recognition technology.
In 1976, KCP introduces the Kurzweil Reading Machine, combining three technological firsts.
In 1982 Drs. Jim and Janet Baker launched Dragon Systems and prototyped a voice recognition system built around mathematical models. The Bakers were mathematicians, and the system they came up with was based on a hidden Markov model, using statistics to predict words, phrases and sentences.
In 1983, Kurzweil Music Systems launches a keyboard synthesizer that accurately reproduces the sounds of acoustic instruments.
In 1985, Kurzweil Applied Intelligence introduces the first speech-to-text computer program.
In 1990, Dragon Dictate was launched as the first general purpose large vocabulary speech to text dictation system. This was a groundbreaking product for Dragon, but it required users to pause between individual words.
In 1994, KurzweilVoice for Windows 1.0 is launched, bringing discrete speech command technology to the personal computer environment.
In 1995, Kurzweil Technologies is founded.
By 1997, the problem of having to pause between words had been overcome and Dragon Naturally Speaking v1 was launched, 45 years after Audrey.
In 1997, the Continuous Speech Natural Language Command and Control software is launched as Kurzweil Voice Commands; The Medical Learning Company is formed.
In 2000, Kurzweil forms FAT KAT, Inc. to develop artificial intelligence that can make decisions about buying and selling on the stock market.
Then in 2001 KTI introduced "Ramona," the virtual reality rock star.
Yes, the last two have little (maybe even nothing) to do with speech recognition, but I found them interesting, so I thought you might too.
I think the next step would be predictive text. Have 3+ symbols that correspond to a screen that is in the subject's eye line. Basically just leverage the current tech to streamline this.
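A hedged illustration of that: if the BCI only has to distinguish three imagined gestures (call them NEXT / SELECT / DELETE, names invented here), an on-screen predictor can do the heavy lifting:

    # Candidate completions, as if produced by a language model.
    PREDICTIONS = ["good morning", "good night", "goodbye"]

    def run(commands):
        i, text = 0, ""
        for cmd in commands:
            if cmd == "NEXT":          # cycle the on-screen highlight
                i = (i + 1) % len(PREDICTIONS)
            elif cmd == "SELECT":      # commit the highlighted prediction
                text += PREDICTIONS[i] + " "
            elif cmd == "DELETE":      # undo one character
                text = text[:-1]
        return text

    print(run(["NEXT", "SELECT"]))     # -> "good night "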
Hmm, I wonder what would be the fastest method, whether teaching it the idea of "good morning" or the words "good" and "morning".
I would imagine you'd eventually end up with a system where less common words or complex, nuanced ideas would need to be spelled out, while commonly used words or sentences could be recognized as single thoughts.
I've spoken to Abe Caplan, one of the BrainGate researchers, a few times online on Clubhouse. https://clubhousedb.com/user/abecaplan https://www.researchgate.net/scientific-contributions/Abraha... Off the top of my head, he said it did not need high resolution to work well (it did not need to be very accurate, it does not have to be a single method of working, and there are many places it can be linked and still work; this refers to the signal processing on the BCI). Companies like Neuralink often repackage these research projects with slick marketing as if they were original, but it is simply rebuilding this project with another interface (the researchers used clunkier probes while Neuralink wants to do an implant, though that was before they filed a patent for an implant as well).
This says it requires an implant, but I'm not sure if that's true. His contributions are much older, though, and they might not be the same as the current BrainGate research. They were also aimed at helping disabled people control prosthetics, and at signal processing and calibration to the user. They had achieved text input before implants were required, so I don't see why an implant is strictly necessary; its benefit is being more convenient than setting up probes or a wearable. https://www.frontiersin.org/articles/10.3389/fnins.2012.0007...
I'm curious how his imagined writing compares to the handwriting he had before he was paralyzed. Was it always that messy or are the BCI controls difficult to use? For example, his comma seems exaggerated, as though he had to imagine grand gestures to get the SW to recognize it as a valid character. But on the other hand, his "m" and "n" look fairly normal. It's possible that's just what his handwriting looks like.
Actual handwriting comes with feedback: kinesthetic, tactile, and visual at least. Here he writes "blindly". I don't know how much that contributes to the mess, but I'm sure a noticeable amount of it can be explained by the lack of feedback.
Very cool. The opportunity for people with physical impairments to keep communicating, and to convey that they still experience the world, will help them lead happier lives.
How do you measure accuracy? Is there another way the man can communicate?
edit: The researchers compared the BCI output to a prompt that T5 was supposed to restate. I was thinking that T5 was communicating without prompts in the experiment. This isn't my idea of translation accuracy, but you've got to have some baseline.
No. You’re trying to measure whether the machine is accurately translating his thoughts into characters. He may make errors in spelling or memory. Success is whether the machine spits out the characters he intended, even if they weren’t the ones he was supposed to send.
Five years ago, the Avegant Glyph seemed reasonably small for an HMD (well, if you could get it without the earphones). I feel like we've regressed from that point.
Yea, I actually consider the smaller FOV a feature. Doesn't necessarily have to be 45 degrees, but I'm not interested in covering my entire field of view. I want a monitor for work, not for games and movies. Immersion doesn't matter, but the comfort of not needing to cup my eyes sure does.
The whole point of VR is immersion. It's literally called Virtual Reality.
What you seem to want is just a screen strapped to your head with some lenses to make it look further. Why do you want that exactly? Portability? Space saving?
Yeah I never said I want VR. I want head mounted displays to free me from bulky & expensive monitors that require lots of desk space and aren't available when travelling with a laptop. And if you can see around the HMD and reach for the coffee cup on the desk without having to look through cameras as you can with the glyph, I consider it a good design. Unfortunately the whole market seems to have developed around multimedia consumption and immersion.
The title is clickbait. This isn't "reading thoughts"; it's reading motor movements, which come from a different part of the brain than cognition. Most people reading the title will assume thoughts = cognition.
I wonder how something like this would function with chorded typing? A keyboard like this [1], with only one key per finger, would I imagine be relatively easy for a brain implant to register (easier than handwriting, even, since you're not limited to finger muscles: you could attach every easily controllable muscle in the body to a button on the 'keyboard', and whether a finger is clicking down is a binary value, rather than inferring which letter your hand is writing). And a lot faster than handwriting.
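For illustration, a chord is just a bit per finger, so decoding is a table lookup; the mapping below is invented:

    # 5 fingers -> 5-bit chord code: 31 non-empty combinations, enough for a-z.
    CHORDS = {
        0b00001: "a", 0b00010: "e", 0b00011: "t",
        0b00100: "o", 0b00101: "n", 0b00110: "s",
        # ... remaining codes assigned to the rest of the alphabet
    }

    def decode_chord(finger_states):
        """finger_states: five booleans, thumb to pinky, True = 'pressed'."""
        code = 0
        for i, down in enumerate(finger_states):
            code |= int(down) << i
        return CHORDS.get(code, "?")

    print(decode_chord([True, True, False, False, False]))  # -> 't'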
Total dystopian thought, but I wonder if this could ever be used to extract info from people unwillingly, as in some interrogation scenario. I mean, you have a lot more control over what you say than what you think.
Most likely the brain implant works against/with a person's consciousness. This "information extraction" device you talk about would need to work against/with the subconscious, otherwise you can just think "WALL WALL WALL WALL WALL" and the device would only be able to extract that.
Not to mention that this device was probably trained on the person a lot before they could reach that accuracy. An "information extraction" device would have to be trained on its victim first, but why would they play along with the training?
Maybe I'm wrong, but I think most people have control over what they consciously think, at least it works like that for me.
If you read the article it's clear that the researchers aren't pulling out the information from the guy's thoughts, they're reading the nerve signals from him "imagining" he's writing a letter, and using ML to map that to pen strokes which get mapped to words.
So if James Bond gets captured, all he has to do is to not imagine writing the information, he can think it all he wants.
Perhaps someday. It seems the way this experiment worked was by "imagining writing the words by hand", which would indicate that this kind of transmission was dependent on an explicit act of will.
This is freaking awesome. The mind-controlled drone pilots we have today mean there's an exciting near-future for mind-machine interfaces. It will be fantastic not just for people who are paralyzed, but also perhaps we will have better prostheses, and maybe in the metaverse we will have additional appendages to do more tasks.
This was my first thought. Writing is incredibly slow. I can barely even operate a pen anymore. I don't see any reason why this mechanism couldn't be used on any thought patterns. If the subtle motions of typing aren't high-fidelity enough to be differentiated, I still wouldn't have chosen normal letter patterns; I'd design a new motion alphabet that is much easier and faster to "write" by thought.
The article alludes to this:
> the researchers say that alphabetical letters are very different from one another in shape, so the AI can decode the user's intention more rapidly as the characters are drawn, compared to other BCI systems that don't make use of dozens of different inputs in the same way
The fact that it works so well on these complex motions means it can probably work better and faster if they use an alphabet with simpler--but still distinct--motions. Probably lots of lessons to be learned from shorthand and other rapid transcription techniques.
Losing the ability to communicate scares the hell out of everyone. This is amazing progress. And it'll have plenty of applications even for able-bodied people.
I don't think it works like that. Letters are shapes, but keys are just a relative position. The software is reading gestures; specific keypress motions seem like much less data to work with.
The next truly massive tech revolution will have to be something even we tech peeps reject. And I think bodily embedded stuff would do the trick.
It could be amazingly effective though if this is where we’re at already. Imagine the speed and enjoyment increase for anything from typing to gaming to driving a car. You’d get completely left behind if you rejected it.
> In tests, the man was able to achieve writing speeds of 90 characters per minute (about 18 words per minute), with approximately 94 percent accuracy (and up to 99 percent accuracy with autocorrect enabled).
Don't get me wrong I'm sure there will be advances. But this current tech is based off reading nerve data meant to be movement data - the user needs to mentally trace each letter.
So I don't see this form of the tech at least being able to compete with qwerty let alone stenography.
Actually to that point, stenography would allow people to input data (it must be language specific and error tolerant but most BCI is as well) at ~5x the average typing speed on qwerty but that hasn't proliferated.
EDIT: On second thought I could see it matching physical movement, maybe _slightly_ outperforming it by a few % by skipping a few physical limitations. I think this should be essentially identical to any other physical motion based communication.
>this current tech is based off reading nerve data meant to be movement data - the user needs to mentally trace each letter.
I just thought about tracing out a letter as if I were writing it, and it took me a second or so per letter, around the speed I actually write, I'd guesstimate, and I can't write anywhere near 90 characters in a minute, probably because my brain has adapted to sync with my hand speed. I'm curious what's actually meant by "tracing" for movement signals, because I'm either slow at this or it means something a bit different. I can easily type 90 characters a minute, but in a lot of cases it's rote memorized patterns for words I'm thinking of in sequence (I'm not really thinking of individual letters in words, just words as a known pattern of keystrokes), at least I think that's how I think.
Question to anyone that knows - I know that actively "imagining" movements/activities is neurologically _very_ similar to actually performing them, which is why it works here for BCI. Do these "imagined" movements develop muscle memory?
Anyway, assuming they do, I'm not sure if it'd be a real advantage over physically swyping with your finger (for those that can obviously) - it seems like they'd be roughly equivalent?
Actually that'll be my second question - how is this system affected by things like tremors?
I'd suspect they originate from your brain in which case your "mental movements" should have the exact same quirks and limitations as physical movements.
The end game of course is not needing to use the movement system to interpret information.
From the research that I recall from a decade ago when I was much more into BCI, I'm not sure there's a significant difference. And just like with a swipe-keyboard, accuracy will be hit or miss but be partially dependent on trained feedback mechanisms to hone towards a set of patterns.
Those that train on a BCI from an early age will "type" significantly faster and more naturally (as if at the speed of thought) than those that do not. There's a natural limit to idea-formation => symbol-formation => symbol-expression. Those that have trained on keyboards can let this fly from their fingers with only a slight delay; those that speak at something like "auction speed" are mostly executing verbal macros: i.e., it's a single thought/action, highly trained so that it can manifest at high speed.
I can retype the entirety of the text above with just a few actions: cmd-A, cmd-C, ->, cmd-V. Performing a novel action, however, moves at a different speed entirely.
A friend of mine was a nurse on a mental health ward in the UK. There was a long term patient there who no-one could understand, until my friend, who was Mauritian, started working there. The patient was saying 'mes dents, mes dents'. (my false teeth, my false teeth..)
Interestingly there are people like myself who don't have an inner monologue (some people supposedly even have a constant dialogue with themselves) at all. The only time I experience something like an inner voice is when reading. My normal thoughts are more abstract I guess, I'm just sort of aware of what I want to do, how I feel etc. It's hard to describe.
I’m so deeply fascinated by that because it’s not how I experience consciousness. Very cool. I think I’m far in the opposite direction: I have a voice that sounds like me, but I don’t control and it asks me questions and gives opinions. It’s like having a copilot.
If I start trying, it completely goes away, replaced by my monologue. I'm sure a lot of people know that feeling where you very intentionally drive your own internal voice. I can do that.
I think it's related to my ADHD. Reading is very difficult because as I read, my voice just completely wanders off, CONSTANTLY. "Oh hey, 'gargantuan' that's a great word. Reminds me of a video game monster... Oh did you just read 2 whole pages without absorbing a single bit of it?"
To add a bit more commentary: this is the one time I consider my ADHD to be a terrible disability. I cannot read. I just can't. Grad school was HELL when I had to read. But my ADHD and associated strong independent inner monologue is immensely powerful when I'm trying to solve problems myself, such as doing software design.
Yeah, my inner monologue sometimes feels like I'm driving it, and sometimes I am definitely, definitely not driving it. I don't know if it's "my voice" or not, it's just... the thing in my mind that outputs (silent) language.
And yeah, I can have conversations with the voice. Yeah, it can help to focus moving my thinking forwards on some problem-solving thing. But, alas, often when I'm doing that it just starts saying obviously-silly things, as if it's a Markov model. Sentences that might sound structurally reasonable but that are obviously not what I meant to say/think, and obviously not true.
But it almost never interferes with reading. Reading is too compelling. Even if my thoughts go off in some other direction while reading and I do the "just read a page but absorbed nothing" thing, there's no voice involved then, it feels like a different process.
Is that just an ADHD thing though, or part of human nature? Reading is difficult in the beginning, yet after training your brain to sit and process by reading an hour in the morning, you can change things. Very fast.
As someone with ADHD, I have those same experiences, just not verbally. I can verbalize them if I have to, but they are wordless.
The only time I experience the sort of word-based hijacking you're describing is with what I think of as "internal argument". I'll think of a previous or possible future discussion and structure my reply in words. Although since words are slow, often the words will sort of collapse and I'll shift to sort of a mental outline mode, where it's more a feel of structure with occasional words or phrases cropping up.
I also have a very constant inner voice, sometimes multiple. Maybe it's an ADHD thing, although with my ADHD I can read fine (though sometimes I need to get up and walk around, absorbing what I've read and considering the implications).
I've always been a bit perplexed by the claim of not having an inner monologue, but after reading the replies to your comment I wonder if I'm closer to not having an inner monologue than having one. I thought it was about not being able to summon an inner voice at all. Regardless, it's fascinating to read people's descriptions and see that it lies on a spectrum.
I definitely experience it when reading & writing carefully and sometimes in deep thought with conscious effort, but never minute to minute and definitely not compulsively. I'd describe my minute to minute like you, more abstract and intuitive. Yet at the same time I'm very introspective, it just doesn't happen with an inner monologue.
edit: after reading the study, I think I fall into the not having an internal monologue. Fascinating.
In one of Feynman's books, he discusses an experiment that he did with people keeping time in their heads (counting 30 seconds) while simultaneously reading a passage of a book. Some people had absolutely no issue with doing this, but many people found it impossible.
When he asked people how they kept track of how much time had passed, he found that people who pictured a number in their heads were able to keep time and read without issue, while those who "spoke" the time to themselves found it impossible to read.
May I ask you if your inner voice has an accent, can you recognize a tone to it? It was very weird when I realized that other people can imagine their thoughts having a physical voice.
My inner voice by default sounds like how my own voice sounds to me, but I can make it sound like anyone I know, even with accents, though I can't speak with those accents because I don't know how to move my mouth to precisely make the sounds.
Yeah, mine is my voice. When I was thinking about whether or not to reply, and what I would reply with, it was just my voice - as it sounds in my skull, not as it sounds in a recording - talking through the details as if I were speaking out loud to myself.
I’m pretty skeptical of people that make this claim. It’s just surprising to me that a base level feature would work so differently. I’d expect some variation in models (like that old Feynman video about counting), but if you can speak and use language it’s hard for me to accept literally no internal voice is going on.
I’ve always kinda suspected people making this claim are lacking introspection to such an extreme extent that they don’t even recognize the inner voice that’s omnipresent.
> I’m pretty skeptical of people that make this claim.
Your essential point for justifying this skepticism is that you cannot imagine people are this different. In my experience people are always a little bit more different than you can imagine. After all, there are people that tirelessly work to charitable ends on one end and people that run death camps on the other.
> people making this claim are lacking introspection to such an extreme extent
It's perfectly acceptable to you to imagine that you are (essentially) fundamentally better or more complete than them, but not that perhaps they are your equals and merely experience life differently. I think it could be valuable to figure out why one is easy to you while the other is hard.
I have an inner monologue when reading (or otherwise interacting with language), but mostly not when actually doing stuff. There must be some stuff you do where you don't have an inner monologue? I think it's a matter of degrees, like when programming my inner monologue consists mostly of variable names, but not like a procedural "I'll do this, then this, then it will do this".
If you're having a hard time grasping that, try doing or thinking about things while doing a mantra. I think you'll find that you're still able to "think" while the only thing your inner dialogue is saying is some kind of mantra. (It may take some practice)
You can also try speed reading apps which force you to absorb information without the time for an inner monologue.
I find this helps with introspection as you can observe ideas without the (direct) bias of language. Being able to recognize and observe the thing that's making your inner monologue happen is a useful skill, I think. I can't really imagine being bound to language like you're describing, and it often takes me a while to put more complicated ideas into words.
Language's categories are never going to be accurate. Is a whale a fish or a mammal? Well, technically a mammal, but if you want to put someone in charge of them it's probably better if it's the department of fisheries than whoever's in charge of buffalo. One of them has boats. The word is just a word, a pointer at a vague collection of things with similar properties. Being able to think about and work with the things directly, without the distraction of language, is very important to me.
Man, this is a really weird thread. I don’t vocalize my thoughts internally either unless I need to formalize and remember them. My guess is that the people who can’t imagine not having an inner monologue just don’t take control of that process, since I can’t imagine they’re unable to think at all without mentally vocalizing things.
And now it sounds like a lot of meditation is training to be able to think the way you or I do, haha.
Is there any useful, productive research out there about this stuff? The only time I’ve come across any convincing or scientifically rigorous psychology was when Feynman did some for fun in his spare time and wrote about it
1. Don't accidentally get caught in the fallacy of "how I experience existence must be how everyone does."
2. Is it possibly just a semantic distinction at that point? If you are completely consciously unaware of an internal voice that speaks your language, does it matter if it's there or not?
I’m with you on 2, but it doesn’t seem like just me - it seems like vast majority except the occasional person claiming it’s different for them.
It’s not a semantic distinction to me, since the mechanism underlying it would then be the same and it’d just be their recognition of it that varied which is way less surprising.
Well here is one other random sample who has to turn the inner voice on when necessary :)
Instead of a voice I have a constant song playing in my head when I am not focused. The song changes multiple times per week but if I am on idle I have a song.
When I speak the song turns off, but no voice comes on, unless I consciously prepare my words.
Now when typing this comment, I have a voice (which is my own voice) say the words I am about to type milliseconds before I type them.
I also have a song playing in my head most of the time (even while reading, but not writing, like you). I'm a hobbyist musician; maybe that's part of it.
But I also have an inner voice. I think the music stops when the voice starts, not sure.
For the last week I've had "Right Down the Line" by Gerry Rafferty (70s pop song) playing.
This repetitive inner soundtrack thing can be really annoying sometimes, hampering or even killing concentration on "real thoughts".
Any pro tips on how to turn that off? Best approach I found is bulldozing it over with a really powerful but not too beautiful/memorable song. The famous "rickrolling" piece seems to work OK for this. (I.e. not by actually hearing it, just by intentionally "playing it internally".)
I truly believe this can be quite different between people. Personally I don't have "an" inner voice, but a quorum of three, all of which together form "my" thoughts.
If I were to hazard a guess, I'd say it's possible that the region of the brain processing language has developed differently than in your average human, leading to fewer connections from language (Broca's region) directly to the auditory region.
This hypothesis would explain hearing the inner monologue when reading, as reading actually transcribes visual data directly to their phonetic counterparts.
Is it that much different from aphantasia? My sister cannot "see" anything in her mind at all, whereas for me mental images are so strong that I sometimes stop seeing the world in front of my eyes in favor of the one in my head. That's a pretty radical difference in a "base level" feature.
I think it’s different - there was an old post by an early FB employee who lost the ability to visualize images after a head injury and had to adapt as a result.
Thinking of images is also different than thinking of words (since all of us speak the language).
I’m not saying it’s impossible, but I suspect it’s more likely a lack of introspection - I’d need to be persuaded empirically somehow and don’t know how to test it.
> Thinking of images is also different than thinking of words (since all of us speak the language).
No more so than we all see with our eyes. Personally, while I do have some kind of inner voice, my thoughts tend much more heavily to the visual. And memories too. If I need to recall a phone number or spelling, then I'll imagine it written.
Thanks - something for me to think about. These HN threads do have a history of changing my mind (or at least softening my position) on topics where the true answer can be harder to know.
My introspection is quite good on this; I've been doing meditation on and off for decades. Sorry, but I don't have an internal narrator. Instead, the meditative interruptions that come with words are generally imagined discussions with other people or things to write about.
Like you, I had a hard time believing people were different in this. The whole idea of an internal narrator seemed absurd to me. Why would anybody need a narrator for themselves? They're right there! But enough people claim that this is their real life that I'm willing to believe it, however tedious and exhausting that sounds to me.
Imagined discussion is what I'm talking about. It's not narration like "I'm picking up the coffee mug now, I'm clicking the button now" - it's silent speech with oneself. Often it's trying to predict what will happen or thinking about things with language. It's not that every action must be stated by some narrator, but that a narrator exists to discuss things with oneself.
Without language and semantic meaning tied to ideas, what does 'thinking' mean at all?
My point is more that there are always thoughts (typically in the form of words, but sometimes images) flowing through your mind all of the time. Meditation and 'mindfulness' is focused on recognizing them as they happen and getting control of that kind of thing (at least enough to reduce thought loops, rumination, unwanted emotional response, etc.).
For me there are significant periods without words or images. I also almost never "discuss things with oneself". I understand that people do that, but for many years I just thought it was metaphor occasionally made real in film and books. The sort of storytelling convention that is made fun of here: https://www.youtube.com/watch?v=CahNAauFgys
I get that you have a hard time understanding thinking without words because that's your main experience. But please understand that it's different for other people.
And not just people. Animals can be very thoughtful. Watch documentaries, for example, of animal cognition and problem-solving. From crows to chimps, an awful lot of thinking happens, just not in words.
I have also wondered if the two are related. I have aphantasia, and also the majority of inner monologue I experience is when sounding out words during reading. It's pretty much quiet all the time in my head (which I guess is not everyone's experience?) and much of the thinking seemingly happens at the conceptual level.
Do you not consider introspection a base level feature? I think the point of these aphantasia-related discussions is that people make wrong assumptions about what is base-level.
Does your inner monologue have an accent? Can you recognize a definite tone to it? I heard someone mention "I loved your accent, so I'll imagine you narrating my thoughts from now on", and the idea of your thoughts being pronounced in your mind with accents sounded completely alien to me.
I can apply any character voice I can imagine to the inner voice. The default that I "hear" most of the time isn't even my voice - the timbre is a bit lower and more neutral, and it lacks my distinctive vocal affectations. But if I want to hear it as an Irish woman, or whatever else, I just do. Perhaps it's relevant that I always did character voices and accents out loud as well, since I was a kid wanting to be an actor.
For many cognitive processes, I don't see a clear survival value to conscious awareness of that process, so I don't expect that awareness to be a reliable feature. The survival relevant result of that cognition can still come through.
I also think that an internal voice that doesn't get conscious awareness is likely to become a process that doesn't present as voice. So it's not like someone can just pay more attention and hear something, because it stopped talking a long time ago.
It is not surprising. That you can use verbal language does not mean something inside must use it continuously. In fact, it makes sense that it is used only when relevant. That you can move your hand does not mean you continuously use them. If an «internal voice is going on», you are somehow letting it. This is especially valid for people with heightened introspection (owing to the higher control that internal assessment gives).
Language is still a learned skill. It is quite normal to assume someone not raised in civilization, who doesn't speak any human language, does not have an inner monologue expressed in words. While we do all our thinking in terms of words (that's how we express ideas), it doesn't necessarily follow. I do have quite a loud "copilot", but I can see how it's a configurable behavior.
Meditation is exactly the practice of letting your inner monologue chatter until it dies away and you're fully tethered to your sensations and surroundings. So if one can learn to meditate, in theory one can meditate all the time (therefore not have inner chatter/monologue)
The more interesting question is the usefulness of inner dialogue in itself. A way to rehearse/articulate thoughts to be communicated to someone else? A roleplay with yourself to prepare for a future encounter? Thinking doesn't necessarily need the 'echo' of hearing a voice. That's separate; that's more intriguing to me.
Meditation is recognizing the omnipresent voice and trying to quiet it down. It’s partly why I suspect those that think they don’t have an internal monologue just aren’t recognizing it.
I think you can get better at quieting the voice or letting thoughts pass, but I don’t think you can really turn it off for longer than a few moments. Gurus that claim they have and have “reached enlightenment” just seem to be lying either to themselves or everyone else (or both).
Maybe it's similar to how an artificial neural network can converge to different local optima for a particular problem, depending on its initial parameters and training method. Our brains might just find different ways of representing thoughts, be it through words, images, sounds or even just abstract concepts. If none of them are strictly better than all the others, then there's no selective pressure for the brain to prioritize development in one specific direction over others.
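A minimal numeric illustration of that analogy: the same non-convex "loss" pulls gradient descent to different minima depending only on the starting point (the function and learning rate below are invented for the demo):

    def loss(x):                 # double-well: minima near x = -1.0 and x = +1.0
        return x**4 - 2 * x**2 + 0.3 * x

    def grad(x):
        return 4 * x**3 - 4 * x + 0.3

    for x0 in (-2.0, 2.0):
        x = x0
        for _ in range(200):     # plain gradient descent
            x -= 0.01 * grad(x)
        print(f"start {x0:+.1f} -> settles near {x:+.2f}")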
Sure, but there are 4 billion years of selective pressure behind us that make us a lot more alike than different. Maybe this runs at a higher level in the 'brain software stack' that has more variation, but it seems like it'd be a more common lower level type of thing.
Ultimately this is just a hunch though about what I suspect is more likely, I can obviously be wrong.
Everyone's brains operate the same from a basic view, but have very different details. We all think in different ways, we all experience life differently. We just apply similarly understood terms that make it seem like it's all the same. Who knows how varied our actual consciousness is.
I'm on the opposite side of that spectrum. I have such a strong internal dialogue that I talk to myself vocally when doing stuff. Not always, but very often.
In my experience, it's not significantly more peaceful than how other people experience life: I can still ruminate, it just doesn't involve an internal monologue or dialogue.
I’m curious about this too. I definitely experience my endless inner monologue as “compulsive thinking.” Even when I have nothing interesting to think about my mind keeps chattering on endlessly and I find my thoughts in that state tend to be more anxious in nature.
Maybe you're like me? I have a lot of thoughts, I just don't use language internally to express them. It makes it hard sometimes to articulate what I'm thinking to another human being, but thinking through language, and especially in SPOKEN language, seems so excruciatingly slow and ineffective that I'm quite glad I don't have to do it in order to think.
For sure. For years I thought "inner monologue" was just a metaphor for the interplay of thoughts. I was really surprised that many people literally have an internal monologue going all the time. That seems so exhausting to me!
Feels almost like the setup for a horror story. It turns out everyone's inner monologue is saying absolutely terrifying things, but then we just immediately forget. Schizophrenia turns out to just be the ability to remember and notice the things we're all saying.
And with that thought, I wonder if this sort of technology might be really useful for people with intrusive thoughts or schizophrenia, etc. Being able to objectively measure how well any given medication or therapy is working feels like a win to me.
My entire life whenever I’m having a face to face with someone, anyone at all, I get this urge to kiss them. No freaking clue why. Don’t worry, I manage it effortlessly.
Same with my urge to break the tension and jump onto train tracks.
Same with just intensely inappropriate thoughts during formal events. Like wanting to stand up and scream the F word or something.
I used to worry about this a lot, particularly when brainstorming. I found that I would start with a breadth-first search for ideas, but then as soon as one was moderately appealing, I went into depth-first mode and not only did I stop the breadth first search, but I also forgot much of the initial set of ideas.
I created a tool for myself to avoid this pattern, which is effectively a kind of interactive map-reduce system.
Also, I really like insights like this. Is there a dedicated place where people discuss these kind of meta-cognitive topics?
Good point, and we are! BCI is a rapidly growing field with plenty of academic and now industry groups working on all aspects of implantable devices, decode algorithms, etc.
I was just thinking about how easy it will be, once this technology is perfected, to interrogate high value prisoners. Just pop one of these in their head, and tell them not to think about the top secret or incriminating stuff.
I'm not going to pay for the article, but note there's a big difference between sticking this device on a guy and him having to learn how to use it deliberately, versus sticking something on your head and reading your internal monologue. This is almost certainly something the man had to do with great deliberation and effort, not something that was magically reading his mind.
I'm not convinced a device to read your internal monologue from the outside is even possible, or if it is, it may be very, very large. A device that sees you're conducting one, perhaps, but reading out the contents externally? I'm not sure it could gather enough information and training data to ever decode it. (That is, my point is more information theoretic than technological.)
There are some people who, it seems, can “lie” to themselves. They’re so convinced of their own lies they end up truly believing it. Can you imagine a world where people train themselves to be delusional for the purpose of avoiding self-incrimination during this kind of “brain” interrogation?
LOL. I think the system actually interprets you wanting to move your hand as if you're writing, like tracing each letter. It's not listening to your thoughts and transcribing. Actual thoughts are very non linear, I think. Transcribe that and it would be sort of like a Trump speech I guess. Lol
If the police are able to implant a device into your brain to get information out of you, I think you're screwed either way as the good 'ol rubber hose method is likely on the table - at least until people commonly elect to do the surgery or the surgery somehow gets to legendary levels of safety in the dystopian future.
Nah. Generic brain reading is not possible in the foreseeable future. Everyone's brain works differently. You have to train a machine learning model to recognize specific types of inputs. You likely cannot do pure thought reading.
If like the sibling said they develop on-skin electrodes or micro-needle patches I 100% agree.
Currently it requires an invasive and risky brain surgery which AFAIK would be a definite no-go. At least in the US, this would likely be considered extremely inhumane unless it could be done without such significant risk of death or brain damage - given the nature of the surgery I find that unlikely.
In the cases where an invasive brain surgery would be permitted, I imagine torture would already be an option. Keep in mind torture doesn't have nearly the same risk of death and permanent brain damage as an invasive brain surgery.
Ignoring the ethical and legal problems with torture, the main problem that is not usually portrayed in films is that people say whatever the interrogator wants to stop the pain.
interrogator> We have been torturing you for 10 hours. If you spell "John" we will stop torturing you and torture John instead.
interrogated> [The pain is too much. We have a deal. Sorry John.] JOHN
A better interrogator can be more subtle, like
interrogator> We have some evidence [1] that John is the one that put the bomb, and that you are innocent, but my boss is not sure. If you confirm this info we will send you immediately to your cell.
[1] It's a lie! Also, John is innocent in case you are wondering.
Oh, that's interesting - you'd at least need to make them think the thoughts for extraction. This first version of the tech requires the user to mentally imagine moving muscles to trace letters, so yeah, it's not likely to be an option... yet.
I wonder what happens to these BCI results when the user is on psychedelics a la MK Ultra
EDIT: Wait, your point was slightly different than I first read, but that makes perfect sense!
It wouldn't require any training at all. It's reading motor inputs.
You're assuming that everyone's motor cortex/nervous system is wired exactly the same way. Given what we know about the variability of the human body, I wouldn't expect this to hold for the entire population.
This too will be weaponized. How? Capture a North Korean official, put one of these in his brain and siphon off all the juicy intel.
Capture a narco trafficker. ditto.
Capture a terrorist. ditto.
Have we made the world better?
It seems that whenever we develop ANY device for improved information processing, it disrupts the world we live in and displaces the uniquely human style of information processing, rendering humans less necessary. It is paradoxical. We make our lives 'easier' and human talent is lost.
This doesn't work like this. Braingate (and all brain implants of this nature) require very active effort/cooperation in order to produce output. Here's what it feels like:
- Imagine, very hard, the act of moving your arm to write a message.
- Do everything that you would do to speak a word, except the actual act of articulating a word. Get as close as you can while stopping the actual muscles involved from firing.
These are not mind-readers. They hook into your normal brain circuitry. It mostly works the same way that you can type out a message on your keyboard without thinking about the keyboard: it's a brain HCI, `/dev/input`, not kernel-space.
This bypasses the desired capability, that of getting info from the hard disk.
If you connect input in a feedback loop with an output and train on a particular output, of course you’re going to get the output you’re training for. It’s just not going to bear much resemblance to the data that’s on the hard disk without cooperation from the host system.
It depends on how much control the host system has over the output (can consciously drown out the signal). Suppose you just show YES and NO in big flashing letters while asking your subject the questions you already know the answers to and measure the output when training the device. Then it boils down to whether it can pick up what you "really think" over what you "try to think" better than the current generation of polygraphs (which is very bad at its job). So these technical specifics would decide where exactly it falls within the range from "comically unreliable" to "dystopian nightmare".
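As a hedged sketch of that calibration protocol (synthetic data and invented feature sizes; real neural signals would be far noisier and much easier to deliberately swamp):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    # Features recorded while the subject answers questions whose answers
    # the interrogator already knows.
    X_yes = rng.normal(+0.5, 1.0, (100, 64))
    X_no = rng.normal(-0.5, 1.0, (100, 64))
    X = np.vstack([X_yes, X_no])
    y = np.array([1] * 100 + [0] * 100)

    clf = LogisticRegression().fit(X, y)
    print(clf.score(X, y))  # near-perfect on clean synthetic data, unlike real life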
I think you are overestimating the power this kind of stuff has.
Quite a big leap from being able to identify a letter someone is thinking of REALLY HARD with the express purpose of making the software recognize it, to randomly implanting something in someone's brain and being able to browse through their thoughts.
https://news.ycombinator.com/item?id=27134049
https://news.ycombinator.com/item?id=27157369