Could someone explain to a dummy like me: what are the actual technological bottlenecks for brain-machine interfaces?
Like... setting aside all the things that, given enough time/effort, we'll reach some useful maximum. I'm told that with ML (eg: image classification) we'll eventually train the systems enough that they'll do a pretty amazing job.
So is it converting analog brain signals to digital? Is the rate of data transmission even relevant here? What would happen if we had enough "brain data" from a single person to saturate a 10 gigabit network? Do we have the software to do anything meaningful with that?
Right now the state of the art options are either surgically implanted electrode arrays (invasive and very limited in what they can detect), electroencephalogram type helmets (non-invasive but only get very vague signals), or fMRI type imaging (more precise but still only gross detail, and requires an enormous complex machine).
There's no obvious way forward with any of these that produces what you or I might consider a true brain machine interface. We don't have the tech AND we don't understand the brain enough.
Fortunately you don't need a brain-reading device to produce something useful, just like you don't need a teraflop computer to go to the moon. I've written recently about an EEG helmet that can be used by profoundly disabled folks to navigate a UI, type, and so on, and that doesn't require a precise signal at all. So I think what you'll find is that while the Musks of the world are chasing a sci-fi dream of what they think the technology ought to be, most of the utility will come from using what it's actually capable of in a smart and compassionate way.
> Fortunately you don't need a brain-reading device to produce something useful
If we're ever to achieve any measure of "immortality", BCI is probably the only way.
We could clone monoclonal, brainless humans with universal HLA haplotypes for spare parts, assuming we could get past the religious/ick factor. Beyond just organ harvesting, a full-body (i.e., head) transplant could rejuvenate the immune system and potentially reverse many of the effects of aging, assuming we could get past central tolerance. (Maybe not an issue with immunosuppressants or monoclonal lines without B/T cells, but that sucks and I think it can be optimized.)
This would be the best next step in increasing human lifespan dramatically. It wouldn't save you from physical accident-induced death or irreversible brain atrophy, though.
Completely sci-fi conjecture:
If we could build clusters of machines capable of running the same distributed work that the human brain does in real time, and if we could get enough signal out of the brain through much more advanced (and invasive) instruments, we might be able to model a subset of a person's memories and run them at real time.
If instead of just copying signal, perhaps we could leave a human hooked up and "amplify" their capacity for thinking by supplementing their brain with a computer. We might be able to copy memories into the new computerized system while the person is still alive and thinking. If the way consciousness works is amenable to such a process, we might be able to perform a one way "move" operation. Essentially digitizing a person as a destructive operation, and killing the body when the process is complete.
Of course we're not anywhere close to anything like this. It's all hypothetical and not at all close to the science.
Cloning humans, though: entirely within our capacity. We'd see incredible medical benefits in doing it, including substantial lifespan increases if we routinely replaced aging parts with ones sourced from clones. People are just too religious.
(Monoclonal brainless humans would lack a consciousness. They're not much different from plants in that respect. Entirely ethical to use for parts and experiments.)
A full-fidelity brain scan (done using a giant X-ray free electron laser) will probably vaporize your brain in the process of scanning it. Not vaporizing your brain could imply missed data.
If you haven't already, you need to read as much Greg Egan as you can. His short stories "Learning To Be Me" and "Closer" are as disturbing as they are wonderful, but it's the novel "Zendegi" where Egan really starts to explore the challenges of BCI and actual integration or digitization of a human mind.
I'm not sure this is strictly a religious matter. It could be rooted rather deeply in our perception of humanity, of the human "self", and of the reciprocity we extend to others: how would you make sure that a "grown" clone is not "someone" already?
Regarding your second point, I would strongly caution against considering even the most egregious offenders for forced organ harvesting. There are currently serious allegations regarding China's black market organ trade (briefly, that political prisoners, dissidents, and minority populations are quietly executed and harvested to supply China's thriving organ market). As medical advances in transplanting improve and expand the ways we can repair the human body, this will only become more of a problem in the parts of the world where government-designated "undesirables" can be quietly disappeared.
It is simpler, cleaner, and less prone to malfeasance and corruption to limit organ harvesting to registered consenting individuals and lab grown tissue, where a chain of custody for the tissue can be established.
If you are growing single organs such as a liver, I would assume it to be more cost effective to simply grow the individual organ. If you are growing more complex structures and organ systems such as entire limbs or, for instance, a large portion of circulatory system (I have no idea how that would work surgically speaking, but we're already so far out in the weeds in this thread) I could see it making sense to grow the supporting structures in tandem.
The big challenge for non-invasive BCI is that most interesting information content in the brain (e.g. speech) is encoded in high frequency firing activity, and the skull acts as a low-pass filter.
There are no high frequency signals in the brain. It's all relatively slow electro-chemical reactions.
There are however a huge number of slow signals running in parallel. In total these would add up to a huge bandwidth if constrained on a single channel.
To put it another way, the skull is almost entirely resistive. And you can't make a low-pass filter with just resistors.
I'm using high frequency in a domain-specific way here: neuroscientists consider gamma (30-100 Hz) to be high (temporal) frequency for action potential activity. But I wasn't just referring to temporal filtering; the skull acts as a low-pass spatial filter, too [1].
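To make the temporal half of that concrete, here's a toy sketch (synthetic data, nothing from a real recording) of what a low-pass filter does to a mixed alpha + gamma signal; the 30 Hz cutoff is just an illustrative stand-in for the attenuation you get through skull and scalp, not a measured value:

```python
# Illustrative only: low-pass filtering wipes out the "gamma-like" content.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250                                   # sample rate (Hz), typical consumer EEG
t = np.arange(0, 2, 1 / fs)

alpha = np.sin(2 * np.pi * 10 * t)         # 10 Hz "alpha-like" component
gamma = 0.5 * np.sin(2 * np.pi * 60 * t)   # 60 Hz "gamma-like" component
signal = alpha + gamma

b, a = butter(4, 30 / (fs / 2), btype="low")   # 4th-order low-pass at 30 Hz
filtered = filtfilt(b, a, signal)

def band_power(x, lo, hi):
    """Total spectral power between lo and hi Hz."""
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    return psd[(freqs >= lo) & (freqs <= hi)].sum()

print("gamma power, raw:     ", band_power(signal, 40, 80))
print("gamma power, filtered:", band_power(filtered, 40, 80))
```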
Extracting a control signal with few degrees of freedom from an EEG signal is doable right now. A reasonably motivated undergrad with a few hundred bucks of OpenBCI gear should be able to get "MindPong" up and running pretty quickly. But it'll be a little janky: the signal quality won't be great, especially if you're out in the real world, trying to do normal stuff at the same time. The content of that signal is also limited, in part due to the skull and scalp that sit between your electrodes and the brain. This also makes it difficult, but surprisingly not impossible, to perturb the brain with electric current (tDCS, tACS), magnets (TMS), or other techniques.
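As a rough illustration of what a low-degree-of-freedom control signal looks like in code, here's a minimal sketch (my own, not from any particular OpenBCI example) that thresholds alpha-band power to get a single on/off bit; the threshold and channel choice are placeholders and would need per-user calibration:

```python
# Minimal sketch of a one-bit EEG control signal: threshold on alpha-band
# power (e.g., eyes closed vs. open). Threshold value is illustrative.
import numpy as np
from scipy.signal import welch

fs = 250  # Hz

def alpha_power(window):
    """Mean power in the 8-12 Hz band for one channel of EEG."""
    freqs, psd = welch(window, fs=fs, nperseg=min(len(window), fs))
    band = (freqs >= 8) & (freqs <= 12)
    return psd[band].mean()

def control_bit(window, threshold=5.0):
    """Return 1 ('select') when alpha power exceeds a calibrated threshold."""
    return int(alpha_power(window) > threshold)

# Fake 1-second window of one EEG channel; in practice this would come
# from an OpenBCI (or similar) stream.
window = np.random.randn(fs)
print(control_bit(window))
```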
What about putting something inside the brain itself? We know information is likely much more accessible without the skull/scalp filtering, but we don't completely understand how information is represented in the brain, how to modify those representations, or even really how to get the raw data out: most neural implants have a pretty short lifetime before they're ruined by the immune system, mechanical strain, etc. We're not totally ignorant: there's been some amazing progress decoding motor and speech intentions and, after a 20+ year hiatus, cool new electrode technology (Paradromics, Neuralink, etc.), but there's a lot to be discovered and invented.
To sum up, we have some fairly crude stuff working now, but it's looking for a killer app, especially in humans. Building something more like what you see in the movies requires work on many different fronts, ranging from materials science to build the electrodes to neuro/ML to understand what those electrodes see.
> To sum up, we have some fairly crude stuff working now, but it’s looking for a killer app
I think it's quite a bit earlier than that. If you get motor imagery working, which is the most reliable signal (your brain is electrically very active when visualizing motor tasks), your accuracy rate is still crazy low. And it doesn't generalize across people since our brains are all pretty different. Some people can't make the EEG work.
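For anyone curious what that pipeline usually looks like, the standard recipe is CSP spatial filtering followed by a linear classifier; here's a hedged sketch on synthetic data (assuming MNE-Python and scikit-learn are installed). It also hints at why cross-subject generalization is hard: the spatial filters are fit to each person's own recordings.

```python
# Sketch of the classic motor-imagery pipeline: CSP spatial filters + LDA.
# The data here is random noise, so accuracy hovers near chance.
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

n_epochs, n_channels, n_times = 80, 8, 250
X = np.random.randn(n_epochs, n_channels, n_times)   # band-passed epochs
y = np.tile([0, 1], n_epochs // 2)                    # left vs. right imagery

clf = make_pipeline(CSP(n_components=4, log=True), LinearDiscriminantAnalysis())
print(cross_val_score(clf, X, y, cv=5).mean())
```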
That’s fair. Most of the existing stuff is pretty janky, in that it works for some people in some circumstances sometimes.
I’d say it’s a bit like mid-90s speech recognition: it works well enough to be intriguing, but the hassle and inaccuracy make people favor other alternatives. I’d take an eye tracker over an EEG speller, for example, if I had ALS.
Some of this is due to hard technical problems, like building better electrodes or squeezing more information out of the signals, but I think there’s a lot of low-hanging fruit too. For example, many spellers don’t include language models, which, as someone originally trained in speech recognition, absolutely blows my mind.
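To show what "adding a language model" buys you, here's a toy sketch (all probabilities invented) of fusing a speller's per-letter EEG evidence with a character-level prior via Bayes' rule:

```python
# Toy sketch: posterior(letter) ∝ P(EEG evidence | letter) * P(letter | context).
# The probabilities below are made up for illustration.
import numpy as np

letters = list("abcdefghijklmnopqrstuvwxyz")

def lm_prior(context):
    """Stand-in character LM: after 'q', put most of the mass on 'u'."""
    p = np.full(len(letters), 1.0 / len(letters))
    if context.endswith("q"):
        p[:] = 0.01
        p[letters.index("u")] = 1.0
    return p / p.sum()

def decode(eeg_likelihood, context):
    posterior = eeg_likelihood * lm_prior(context)
    posterior /= posterior.sum()
    return letters[int(np.argmax(posterior))]

# Noisy EEG evidence that slightly prefers 'i' over 'u' after "q"
evidence = np.full(len(letters), 1.0)
evidence[letters.index("i")] = 1.6
evidence[letters.index("u")] = 1.4
print(decode(evidence, "q"))   # the LM prior tips the decision to 'u'
```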
That makes sense. Granted, even if they did use language models, most spellers can only be used for short periods of time because of eye strain. Is that low-hanging fruit or just a small improvement on a still unviable approach? And if you're using motor imagery spellers, you're adding a model to like 2-4 bits of input. The efficiency there is horrible.
I'm with you on "I'd prefer an eye tracker." Modern advances in eye tracking (saccade-based stuff!) are pretty sick, and make the technology usable for long periods of time. You have higher bandwidth input than motor imagery, and don't torture the eye like P300.
I feel like the "killer app" is accessibility. There are so many disabilities (quadriplegia, SMA, amputations, stroke, ALS, etc.) where having a BCI that lets you reliably control a wheelchair would be a godsend.
I always see futurology/inspiration porn articles about impressive demos of this stuff, when is it hitting the market? It doesn't have to be the next iPhone, it just has to let people control their mobility or communicate.
Accessibility gear needs to be robust, but a lot of BCIs, especially the noninvasive ones, tend to be a bit janky. As a result, you can zip around in VR once the experienced lab tech sets you up, but maybe don't have the DOF to control a real wheelchair, or a system that disabled people can set up themselves. Some of this probably just needs good systems engineering, but there are some legit technical challenges too.
I think this will start to change soon: there’s a lot of money flowing into neurotech and hopefully, some of it will end up with people who are more serious than hype-y. (If nothing else, I’d like a job :-))
Maybe it doesn't have to be perfect, as long as there are some safeguards? Like if you had some sort of collision radar or a way for the user to emergency stop (e.g. maybe a blink pattern), it might work well enough to give back some freedom.
I hope it changes too. I have a progressive disability and the lack of innovation/competition in assistive devices is starting to make me nervous as I start to need more.
The brain is more like a CPU implemented in FPGA than a CPU synthesised into silicon.
Signals are only somewhat local - you get increased activity in this or that region and can correlate it with what the brain is doing - like power analysis attacks on crypto chips, where the algorithm has to recover the data from the limited, noisy signal we are able to pick up.
You can't just put an electrode in every single neuron to read its state, just like you can't put an electrode on every single piece of conductive metal on a chip - it would be too tight a fit and everything would stop working.
Like all of WaitButWhy's articles, I think this one [0] helps a person understand from the fundamentals. I linked to part 3 which is the section for Brain-Machine Interfaces in particular, but it behooves you to read from the beginning
Our brain was not built to directly transmit signals nor to accept input wires to read / write signals. I am sure one or the other can eventually be overcome safely. But we are not there yet.
The corpus callosum is pretty much exactly a signal bus between the two hemispheres. Granted, it's a 200-million wire bus, but it basically transmits entire "thoughts" between the hemispheres. I think the dream brain interface would be a tap on the CC.
So many challenges. One is that we can only receive information from certain parts of the brain—those where the neurons are perpendicular to the scalp. As you know, the brain is really convoluted, so this makes for a very messy signal. Another issue is that the raw noise makes it extremely difficult to detect faint signals.
One approach to this is training models whose expectations are based on the stimuli themselves, e.g. looking for neural resonance with the different frequencies present in the stimulus.
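The classic version of this is SSVEP-style detection via canonical correlation against sine/cosine templates at each candidate stimulus frequency; here's a rough sketch (synthetic data, illustrative frequencies):

```python
# Sketch of the "expectation from the stimulus" idea: canonical correlation
# between multichannel EEG and reference waveforms at each candidate frequency.
import numpy as np
from sklearn.cross_decomposition import CCA

fs = 250
t = np.arange(0, 2, 1 / fs)

def references(freq):
    """Sine/cosine templates (plus one harmonic) at a candidate frequency."""
    return np.column_stack([
        np.sin(2 * np.pi * freq * t), np.cos(2 * np.pi * freq * t),
        np.sin(4 * np.pi * freq * t), np.cos(4 * np.pi * freq * t),
    ])

def detect(eeg, candidate_freqs):
    """Return the candidate frequency whose references correlate best."""
    scores = []
    for f in candidate_freqs:
        cca = CCA(n_components=1)
        u, v = cca.fit_transform(eeg, references(f))
        scores.append(abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1]))
    return candidate_freqs[int(np.argmax(scores))]

# Fake 4-channel EEG with a dominant 12 Hz component in one channel
eeg = np.random.randn(len(t), 4) * 0.5
eeg[:, 0] += np.sin(2 * np.pi * 12 * t)
print(detect(eeg, [10, 12, 15]))   # expected: 12
```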
The main problem is not software, AI, etc. It's that we can't get the data reliably; i.e., current sensors are not sensitive enough to pick up brainwaves/activity (especially with interference from muscle, etc.). The gold standard is electrodes implanted in the brain, but that is a bit invasive as an interface.
This is not too far off what experimental systems already do.
My lab has equipment that collects data from 288 electrodes in the brain, each at 16 bits/sample x 30,000 samples/sec. This works out to about 140 Mbps, not counting overhead or other types of data we collect, and it's not abnormally large. If we had a research question that required it, the vendor sells versions with up to 512 channels, and you can gang them together for even more.
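A quick back-of-the-envelope check of those numbers:

```python
# Raw data rate for the setup described above, no overhead.
channels, bits_per_sample, samples_per_sec = 288, 16, 30_000
mbps = channels * bits_per_sample * samples_per_sec / 1e6
print(f"{mbps:.0f} Mbps")                               # ~138 Mbps, i.e. roughly 140 Mbps
print(f"{mbps * 512 / 288:.0f} Mbps at 512 channels")   # ~246 Mbps
```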
However, this probably isn’t the direction BCI is headed. A lot of the signal is redundant. Some of this is because nearby neurons tend to do very similar things, and some of it is because signals propagate pretty well through the brain so electrodes pick up the same signal in different places. As a result, most non-research applications don’t need all of that data and it’s increasingly possible to do some preprocessing right at the brain: folks at Imperial have made all kinds of cool ASICs that extract spikes.
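In software terms, the kind of on-implant preprocessing those ASICs do boils down to something like threshold-crossing spike detection; here's a minimal sketch (fake data, and the 4.5x noise threshold is a common convention rather than anything those specific chips use):

```python
# Minimal sketch of "extract spikes at the electrode": detect threshold
# crossings on a high-pass-filtered trace and keep only the spike times
# instead of the raw 30 kHz stream.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 30_000
raw = np.random.randn(fs)                        # one second of fake wideband data
b, a = butter(2, 300 / (fs / 2), btype="high")   # high-pass at 300 Hz
hp = filtfilt(b, a, raw)

noise = np.median(np.abs(hp)) / 0.6745           # robust noise estimate
threshold = -4.5 * noise
crossings = np.flatnonzero((hp[1:] < threshold) & (hp[:-1] >= threshold))

print(f"{len(crossings)} putative spikes (instead of {fs} raw samples)")
```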
Do you ever feel the impulse to "ctrl-f" when you're at the grocery store?
Imagine the feeling of going to reach for something, and accidentally just envisioning moving your arm. Might be like "stepping into air" when you're almost asleep.
When I used to regularly play a particular lane-based MOBA, any time I'd need to react quickly while driving, I'd instinctively feel the need to burst some variation of QWE depending on which character I'd recently been playing.
Sometimes if I practice difficult scales (those with novel movements for weak fingers e.g. left pinky and ring) on the piano before typing on a keyboard, I'll mistype more frequently.
I'm also concerned about how such devices will affect and be affected by cavitation. Cavitation inside the skull has been found to happen in concussion injuries in explosions. It's also thought that it can happen with the smaller impulses coming from punches and everyday impacts. Are the devices subject to cavitation, particularly the long, thin parts, and what happens to those long thin parts inside the brain when they are subjected to cavitation?
EDIT: This device is only penetrating the scalp, but other devices like Neuralink penetrate into the brain.
This particular device doesn't even penetrate all the way through the scalp, so the skull would likely obliterate these long before they reach the brain.
Motion is a problem for things that are inserted into the brain, but people have been getting DBS and sEEG implants for 30 years, and it’s manageable, though not totally solved.
It's barely even penetrating the scalp. The needles (really better described as spikes, they have a fair amount of draft, about 20°) only puncture the stratum corneum - the outer layer of dead skin cells and lipids.
I would guess they are meant as consumables. The needles are 0.8 mm long and 0.35 mm wide, pyramidal spikes. Definitely too shallow to cause bleeding, but that's still quite intimate contact.
- cost must be reasonable, if expensive - anything over something like $2,000 is going to severely limit your customer pool. A semi-automated manufacturing line will help with this
- for good measure, a crack marketing team
Another consideration is that the BCI industry is moving insanely fast. By the time you set up a production line and start mass-producing, there might be news about a novel, much better process that overshadows your product and destroys sales. This is effectively what happened with VR in 2012-2018 when everything was super expensive; only recently can you recommend an Index or Quest 2 to people without fearing a huge leap in VR quality in the near future.
> anything over something like $2,000 is going to severely limit your customer pool.
Eh, not necessarily. Early BCIs, if actually useful, will likely be targeted at people with disabilities and thus funded through private or socialized health insurance. Once the hardware undergoes commoditization, consumer-grade BCIs at low price points will become inevitable. VR evolved more slowly because there was no middleman like insurance buying hardware for their customer pool (the counterpoint being the Apple Watch, which you have been able to get at a huge discount with some insurers).
(I think this is something different, but possibly of interest to people whose imagination is captured by electronic interfaces with the nervous system)
"A research team led by a 2018 BBRF Young Investigator, Sung Il Park, Ph.D., reports that it has developed and tested a new technology enabling unprecedented exploration of nerve-cell function inside organs of the body outside of the brain.
The new technology, called optoelectronics, uses tiny wireless implantable devices to manipulate the activity of individual nerve cells in the organs of awake, freely moving animals. This makes possible experiments with the power to reveal the specific (and often multiple) functions of different kinds of nerve cells in the body's periphery."
Idk, if I was a kid I would be playing around with this tech. Sadly, as an adult I am just scared of it. Funny how 20 years changes your brain and perception of tech. A younger me would 100% be up in my attic room soldering stuff like this together and hooking it up to a PC somehow.
Focus more on what you want to see: there are great uses for this tech that would make life better. You can be a part of building it, or you can support the people and organizations that are in this for the right reasons.
I don't think that's what's happening here. It appears there are electrodes at each extreme and intersection of each strap. The micro needles are likely part of a single electrode, with the purpose of avoiding hair, reducing electrical impedance by penetrating the outer layers of skin, and reducing motion artifact by anchoring to the skin.
You're looking at one electrode. There's ten of those things (though they want to add more). The point is to make good contact with the scalp through the hair without needing to shave and/or apply contact gel.
I have a Muse 2 [1] and a Flowtime [2]; both are consumer-oriented portable EEGs using Bluetooth to transmit data displayed on their respective mobile apps.
The Muse hardware is seriously inhibited by the same user-exploitative trends we see everywhere, accompanied by meaningless promises. Many researchers used the Muse SDK and API until they discontinued it in 2019 [3] in favor of Muse Direct - another way to force users to feed Muse their data and lock them into a subscription.
Unsurprisingly, this was also abandoned [4] and they claim "We're working on a solution that allows you to collect raw EEG data using the free to download" app, which would still allow Muse complete control over your authentication, use and data. Ugh.
At least the community has reverse-engineered the LSL protocol and built a Python package [5].
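For anyone wanting to try it, reading the stream once the community bridge is running is only a few lines with pylsl (a sketch; stream type "EEG" is the usual convention but depends on which bridge you use):

```python
# Sketch of reading Muse EEG over LSL, assuming the community bridge
# (e.g. muselsl) is already streaming and pylsl is installed.
from pylsl import StreamInlet, resolve_byprop

streams = resolve_byprop("type", "EEG", timeout=10)
if not streams:
    raise RuntimeError("No EEG stream found; is the LSL bridge running?")

inlet = StreamInlet(streams[0])
for _ in range(5):
    sample, timestamp = inlet.pull_sample()
    print(timestamp, sample)
```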
Flowtime has said it offers only a closed API, as they don't have the resources to maintain an open one [6]. I remember somewhere they claimed an open API was on their roadmap, but I searched again today and couldn't find any support for that claim.
BrainBit [7] and NextMind [8] seem to provide more robust developer tools, although I have no experience. Naturally, each of these runs more than double the ~$200 cost of the Muse and Flowtime devices.
I have no formal neurological education, but I strongly believe non-invasive EEGs, combined with novel, gamified training techniques and machine learning, will usher in a new era of digital interactions. Like the transition from tactile smartphone keypads to the eventually-ubiquitous full touchscreen, I expect many design iterations in both hardware and software will produce utterly fascinating products.
I can't wait to get excited again for a product launch.
I've been kicking around getting one of these for years, the deciding factor for me is essentially "what kind of demos can you pull off with these?"
ex. a simple "yes / no" detector would be meh, pointer control would be amazing, some sort of mood detector/brain trainer type apps would be _awesome_ since it'd be producing value for me