Artificial intelligence gave a paralyzed woman her voice back (ucsf.edu)
215 points by gehwartzen on Aug 24, 2023 | 73 comments



> Rather than train the AI to recognize whole words, the researchers created a system that decodes words from smaller components called phonemes. These are the sub-units of speech that form spoken words in the same way that letters form written words. “Hello,” for example, contains four phonemes: “HH,” “AH,” “L” and “OW.”

This is nifty. Also I'm oddly comforted by the fact that this system doesn't "read thoughts". It just taps in slightly upstream of actual speech, at the relevant speech/motor production regions of the brain. So no immediate concern for thought hacking...
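For the curious, here's a toy sketch of what phoneme-level decoding can look like. Everything here is hypothetical (the two-word lexicon, the classifier output); the actual system is a much larger neural decoder, but the ARPAbet symbols match the article's "HH AH L OW" example:

    # Toy phoneme-level decoding: collapse frame-by-frame phoneme
    # predictions (CTC-style) and look the sequence up in a lexicon.
    from itertools import groupby

    LEXICON = {  # hypothetical pronunciation dictionary
        ("HH", "AH", "L", "OW"): "hello",
        ("W", "ER", "L", "D"): "world",
    }

    def decode(frames, blank="_"):
        # Collapse repeats, then drop blanks: HH HH _ AH ... -> HH AH L OW
        phonemes = tuple(p for p, _ in groupby(frames) if p != blank)
        return LEXICON.get(phonemes, "<unk>")

    # Stand-in for the per-frame argmax output of a neural classifier
    frames = ["HH", "HH", "_", "AH", "AH", "L", "_", "OW", "OW"]
    print(decode(frames))  # -> hello

The appeal of the phoneme approach is the small, closed label set: ~39 ARPAbet phonemes cover English, versus an open-ended vocabulary of whole words.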

Separately, this makes me wonder what such a system would look like for deaf people (with signing ability) who have lost the ability to move their arms. I imagine, optimistically, that one could attach the electrodes to a slightly different area of the motor cortex and then once again train an AI to decode intent into signs (and speech). So basically the same system?


I think not all deaf people have the same mapping from words and their precise phonemes to the typical expected muscle movements. If that mapping differs from person to person, this system would not be that useful on the speech-interpretation side. On the other hand, we've had pretty good gesture recognition for a while. I bet it's possible to decode individual signs right now, but sign language also has a different grammar from typical spoken English, and a lot of meaning is context-based, so it might be tricky in that way; it's more of a translation problem.


Oh yeah, definitely. I meant more specifically: might it be possible to capture the electrical signals (much like the current system does) from the parts of the motor cortex that create the series of muscle movements that form a 'sign', then create a 2D projection/display of those muscle movements, and then, downstream, apply a gesture recognition solution as you mention (a big downstream challenge).

It sounds like a lot. It was just a thought experiment about how such spinal blocks/paralysis would affect deaf people and how they'd be able to continue communicating with their deaf partners. Definitely niche, but nonetheless interesting, and I think possible using the same general approach as the OP article.

But yeah, translating gestures to speech is a distinct and incredibly challenging problem in its own right, as you allude to. Perhaps in the future they can tap into signing/speaking translators' brains and have AI learn those mappings in a fuzzy way.


> I think not all deaf people have the same mapping from words and their precise phonemes to the typical expected muscle movements

In fluent sign language, there is something analogous to phonemes. In linguistics these days, they're just called phonemes, and considered equivalent to spoken language phonemes. They're a fixed class of shapes and locations. They combine in certain ways that make up morphemes, which then make up words. It does work very similarly, perhaps identically, to spoken language.

The distribution of handshapes and the way they interact resembles spoken language. For example, it's somewhat hard to say "strengths", and people often produce a slurred "strengfs". The way it slurs together is rather predictable. It's very hard to say "klftggt", so it just doesn't occur in natural language. The same goes for sign languages and hard-to-sign combinations.

Phonemes have an exact realization, but they also exist relative to each other; the distance and direction between them is important. This is probably part of why an American can fairly easily understand New Zealand English, despite nearly all of the vowels being different. Another analogy: in tonal languages, if there's a low flat tone, then three rising tones, then a low flat tone, that final low tone may be quite a bit higher in absolute pitch than the first low tone, but it will still be interpreted as a low tone, because it is judged relative to the preceding rising tones. Vowel qualities besides tone work the same way. And so do hand gestures.
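A toy sketch of that relative judgment (numbers invented purely for illustration):

    # A tone is judged against the running pitch context, not in absolute
    # terms: the same 180 Hz can read as "low" or "high".
    def classify_tone(pitch_hz, context_hz):
        return "low" if pitch_hz < context_hz else "high"

    print(classify_tone(180, context_hz=220))  # "low" after rising tones
    print(classify_tone(180, context_hz=150))  # "high" after low flat tones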

There is a lot of variation by dialect/region/community in sign languages, more than in a language like English. This makes things more complicated, but it shouldn't be insurmountable. And of course, not all deaf people have a sign language as their native language; they would struggle just as people who learn any other language later in life do.


> I think not all deaf people have the same mapping from words and their precise phonemes to the typical expected muscle movements. If that mapping differs from person to person, this system would not be that useful on the speech-interpretation side.

...unless it could be trained on the individual?


> This is nifty. Also I'm oddly comforted by the fact that this system doesn't "read thoughts".

If you're like most people, you "talk to yourself" internally all the time, so this doesn't seem that far from reading your thoughts.


Actually, I think it's been found recently that the idea that everyone has an "inner voice" is a presumption by those who do. Apparently only ~25% of the population actually has an inner monologue.

The topic got revitalised again recently, resulting in much commotion from people on either side of the fence: shock that someone might never hear their own voice or talk to themselves inside their head, and, from those who have only known silence, shock that someone might have a voice talking to them inside their head.

I think there are a couple of studies that back this up, but a lot of it is anecdotal, as people describe their side of the fence.

I have an inner voice myself, but I have thoughts and sensations that my inner voice does not verbalize, and some that it does.

The human brain is so freaking cool.


But as far as I know, no motor signals are sent to my mouth when I talk to myself (i.e. internal monologue), so this wouldn't read my thoughts. I'm not sure what you're saying.



In fact, this has been going on for quite a while, but I expect that LLMs will blow the field wide open:

https://en.wikipedia.org/wiki/Silent_speech_interface

An earlier one from MIT (2018): https://news.mit.edu/2018/computer-system-transcribes-words-...

And https://fortune.com/2023/03/09/ai-generate-images-human-thou...


They might not be acted upon, but they could still be sent, up to a "gatekeeper" brain segment.


Is it really that common? I certainly don't.


Do You Have an Inner Voice? Not Everyone Does https://science.howstuffworks.com/life/inside-the-mind/human...


That would be interesting. I would think if you are conscious you have to have an inner voice.


Nope. Thoughts, yes, but you don't by any means need to think in words or audio all the time.


The notion of 'thinking in words' reminds me of how LLMs work, but it certainly isn't how I experience thinking.



If we're talking about actual vocalizations then I do occasionally but it's just stuff like "whoops!" or "god damn it!" even if there's no one around.


An ML system that skipped the brain and just read the physical movements and converted them to voice would go a long way. (I understand there are camera- and glove-based apps that can do this, but I'm not sure what the accuracy is like.)


A similar study came out in Nature on the same day, from a team at Stanford:

paper: https://www.nature.com/articles/s41586-023-06377-x

one press writeup: https://spectrum.ieee.org/brain-implant-speech


The woman from the title is from Regina, Saskatchewan, Canada, and the CBC did a feature on her story. Her husband is pictured at her side in a Saskatchewan Roughriders tee shirt and Toronto Blue Jays ball cap, having dressed with his Canuckness set to 11:

https://www.cbc.ca/news/health/paralysis-brain-speech-1.6943...

My hope is that she'll be able to cheer on their teams.


Interesting thought experiment… does the right to remain silent prevent a USB device from being placed on your head?


The 5th amendment protects you from being a witness against yourself [0]. So to me it seems pretty clear that in the US this would not be allowed.

But then again, it seems as though being forced to reveal your password is not necessarily a violation of the 5th amendment [1]. I can't quite understand why the Supreme Court hasn't made a decision on this one yet. There are a lot of conflicting decisions now.

0: `nor shall be compelled in any criminal case to be a witness against himself`

1: https://www.aclu.org/news/privacy-technology/police-should-n...


The thought process behind being compelled to provide a password is that you're providing it to an (indiscriminate) computer, which is currently treated similarly to being summoned: opposing it would be contempt of court or obstruction of justice.

Forcing someone to do something without payment and to their own detriment should run afoul of the 13th and 5th amendments, respectively. But if it was already reasonably and obviously known that an encrypted drive contained CSAM or national security secrets, and you have already been duly convicted of that, then the 13th would not apply ("except as a punishment for crime whereof the party shall have been duly convicted"), and you could be coerced into the "labor" of decryption. Double jeopardy would seem to apply, though, so you couldn't be further charged for anything found once it's decrypted.

I suggest making your passwords themselves incriminating, just to throw in another constitutional hiccup.

Not that any of this would matter in practice, but it is quite a legal thought experiment.


But the third party doctrine lets the prosecution have all that sweet incriminating data once it's on someone else's server.


Ah but isn't the third party doctrine predicated on the data being voluntarily given to a third party? "a person has no legitimate expectation of privacy in information he voluntarily turns over to third parties."

See Smith vs Maryland: https://www.theatlantic.com/technology/archive/2013/12/what-...


I'm not sure that right will protect you. The right applies to "something you know" not "something you have". You have a brain.

A good analogy is smartphone passwords. Authorities can't make you share your password (something you know), but they can make you unlock your phone with a fingerprint (something you have).


You are forgetting "something you are"


I'm not familiar with any court rulings on that


Well, time to get more familiar because I need you to prove my point for me.


That is an interesting thought experiment. IANAL, but I'm pretty sure putting something on your head to essentially coerce information would be a violation of the right to remain silent. I don't know whether it would fall in the fingerprint vs. spoken-password space in terms of being subject to a search warrant.

Fortunately, this tech is currently very person-specific and has to be trained on each individual. So to thwart it you'd just have to think "applesauce" over and over.


Yes, just like it prevents the prosecution from bringing in a psychic that will speak your thoughts to the court.


In France it's common for the court to ask a psychoanalyst or a psychologist (not a psychic) to tell what the defendant was "really" thinking and feeling, in order to decide whether the defendant was responsible for his actions or not.

There is no crime or offense when the defendant was in a state of insanity at the time of offense, or when he was constrained by a force he could not resist.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9387428/


This is also possible in America (not "common", but possible, and no one bats an eye; it happens multiple times a year). It's called an insanity defense, and it requires expert testimony from a psychologist.


Yes, just like it prevents the prosecution from bringing in a cop who will testify against you in court.

Yes, just like it prevents the prosecution from bringing in a hair expert who will testify that your hair was found at the crime scene.

Yes, just like it prevents the prosecution from bringing in a blood spatter expert to testify that only you could have made this specific spatter.

As you can see, truth or efficacy isn't a prerequisite for being admitted by the court.


There's testimony about you, and there's your testimony.

One is self-incriminating, the other isn't.


Currently, they can't compel you to give up your fingerprint, but can force a password. So I think thoughts would count as a biologic rather than a generated phrase in that aspect.


I think you have it backwards? In most jurisdictions I think biometrics can be forced but divulging knowledge cannot.


Wasn't it trained to her brain specifically?


"The BRAVO3 team recreated Ann’s voice using language learning AI, and footage of Ann’s laugh-inducing wedding speech from 2005."

Gonna admit that this idea brought tears to my eyes; it's just the icing on the cake, really. And such a heartfelt decision.

Unfortunately, in practice the voice still sounds quite robotic. Perhaps they could ask ElevenLabs for help?


I noticed that she selects characters by using her glasses as a pointing device and moving her head. Surely they could use an eye tracking device like Tobii instead?


Maybe there is some medical reason not to in her case. But as a healthy user of head tracking for gaming, I would rather have head tracking, so I can move my eyes without interacting with the screen.

Maybe eye tracking also doesn't work well with multiple people watching?


Also, instead of the button interface, she should check out Dasher text input:

http://www.inference.org.uk/dasher/


Not quite sure what I missed here; how is this AI rather than machine learning?


AI has been synonymous with ML for years now.


ML is part of the larger field of Artificial Intelligence, which includes all techniques for making a computer do anything we consider part of intelligence.


[flagged]


To me, you need help then


Did you miss the point? They are trying to cover up the bad of AI with some little feel good story.


This kind of brain-reading certainly seems to be in the same general species as lie detection.

So such a device must eventually exist: an AI that takes a thousand data points off your brain and tells you lie or not.

What would we do if we had a ~100% accurate lie detector? How would that go down, socially?


It is not in the same species. These systems have to be trained on every individual brain; they are not reading some objective signal that is the same for everyone.

> For weeks, Ann worked with the team to train the system’s artificial intelligence algorithms to recognize her unique brain signals for speech. This involved repeating different phrases from a 1,024-word conversational vocabulary over and over again until the computer recognized the brain activity patterns associated with all the basic sounds of speech.

To use this type of system for lie detection, if such a thing is even possible, you'd have to get each subject to give you thousands of example statements with truth/lie labels. This obviously defeats the purpose, and it also doesn't really seem feasible: does lying for a training exercise produce the same brain patterns as lying to actually cover something up? Probably not.
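To make the per-subject calibration concrete, here's a minimal, entirely hypothetical sketch: a classifier fit on one subject's neural features and phoneme labels. The real decoder is a much larger neural network; 253 channels and 39 phonemes roughly match figures reported for this work, but treat them as placeholders:

    # Hypothetical per-subject calibration: fit a classifier on
    # (neural features, phoneme label) pairs from ONE subject.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_frames, n_channels, n_phonemes = 2000, 253, 39  # placeholder sizes

    X = rng.normal(size=(n_frames, n_channels))     # stand-in for ECoG features
    y = rng.integers(0, n_phonemes, size=n_frames)  # labels from prompted phrases

    clf = LogisticRegression(max_iter=500).fit(X, y)
    # The learned weights are tied to this subject's electrode placement
    # and brain, which is why the model can't transfer to anyone else.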


Your objections are that training is required and that where and how the lie is uttered might matter.

Mere technical hurdles IMO.


I would presume that someone being subject to a lie detector may have different incentives than those running the lie detector, and they may intentionally taint the data.


I think the point is, who would deliberately train a system to detect their own lies?


> who would deliberately train a system to detect their own lies?

Maybe an employee required to do so as part of on-boarding to a corporation or government org?

Or maybe someone in the process of a passport or visa application?

This podcast episode [0] with Sean Carroll and Nita Farahany scared the crap out of me on this topic, it seems inevitable.

This is the time to regulate neuro access prior to it becoming big business, and part of the TSA screening process.

[0] https://www.preposterousuniverse.com/podcast/2023/03/13/229-...


It still seems trivial to game. Make up some "tell" for your lies. Maybe every time you need to lie during the training period you squeeze your muscles, or think of hard math problems. It would be very hard for the AI not to fit to this fake "tell."
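A toy demonstration of that failure mode (all data synthetic): give the classifier one feature the subject deliberately correlates with their own 'lie' labels, and the fit concentrates on it rather than on anything genuine:

    # Synthetic demo of the fake-"tell" attack on a trained lie detector.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 1000
    brain = rng.normal(size=(n, 20))          # genuine features (pure noise here)
    labels = rng.integers(0, 2, size=n)       # truth/lie labels the subject supplies
    tell = labels + 0.1 * rng.normal(size=n)  # e.g. a deliberate muscle clench

    X = np.column_stack([brain, tell])
    w = LogisticRegression().fit(X, labels).coef_[0]
    print(abs(w[-1]), abs(w[:-1]).max())  # the tell's weight dominates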


Those sound like techniques to fool a polygraph.

There are new infrared sensors in development that should see much more complex behavior in the brain than, say, EEG. I'm sure even those could be gamed theoretically, but it will clearly become increasingly difficult as the technology improves.


I was thinking it might be something everybody does in high school. Every morning you spend an hour training your AI shadow. Everybody gets one. So useful, like a cell phone.


If that's the sensitivity, what's the specificity? How well does it translate from the population it's trained on to other populations? In what contexts is the type of lying it detects useful to detect? I would assume using it on someone in a criminal investigation context without their permission would be a 5th and 6th amendment violation (as it would almost entirely subvert the usefulness of legal representation).
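To put numbers on why specificity and base rates matter (all figures invented): even a detector with 95% sensitivity and 95% specificity is mostly wrong when actual lies are rare.

    # Hypothetical base-rate arithmetic: P(lie | detector says "lie")
    sens, spec, base = 0.95, 0.95, 0.01  # invented numbers; 1% of statements are lies
    tp = sens * base                     # true positives, as a fraction of all statements
    fp = (1 - spec) * (1 - base)         # false positives, likewise
    print(tp / (tp + fp))                # ~0.16: most "lie" flags are wrong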


Unless someone figures out a way to do this without surgery, from ten feet away, the answer is that it won't.


Ya I was thinking the same thing. Requiring a brain operation to make it work would be a deal breaker. Need an MRI or something.


That seems like a pretty big leap. This doesn't require language understanding, just a translation between muscle movements and sounds. Lie detection is way more complicated.


It could detect intent to deceive. Or the brain-mode specific to lie-crafting.


There can never be a 100% accurate lie detector, only a 100% accurate "thinks they're telling the truth" detector. Ultimately, human memory is deeply flawed; we're capable of having entirely false memories and swearing on our lives to things that never occurred. A machine that can perfectly read our brains can only get this messy, imperfect mix.

Even in the sense of political intrigue, is it so hard to imagine someone so brainwashed they truly believe the lie they are telling you?


I would only call something a lie if it's conscious and intentional; otherwise it's a mistake.


We don't have lie detectors even for AIs, and we can look at every single bit inside them.


To be fair, (most) humans have enough self awareness to (most of the time) know when they're lying. Theoretically, it's an easier problem.


An unintentional lie is an oxymoron; that's just a mistake.


"Unintentional lie" overlaps with https://en.wikipedia.org/wiki/Self-deception.


No, this is not conceptually related to lie detection. Yes, it uses ML to decode something, but that's where the similarity ends. This is decoding the patterns of brain activity used to generate speech.


Well yes. I'm thinking that if we can decode the brain in this one way then we can probably decode it in other ways too.


It would make trials much more straightforward events and allow us to deliver justice at scale, since we would now have a digital engine of truth.


And then we demand that all our police and lawmakers and public servants get hooked up to the machine and then the next day it's illegal tech.


> since we would now have a digital engine of truth.

What are we living in, a dystopian sci-fi universe?



