Hacker News
Human use of high-bandwidth wireless brain-computer interface (brown.edu)
284 points by bemmu on April 4, 2021 | 210 comments



So back in the 60's, people looked back at the progress over the previous decades, imagined the future, and thought about space. We'd have commercial space flight any day now. The most poignant scene I can think of: in 2001: A Space Odyssey, the character flies on a Pan Am spaceship to the moon, and then goes into a phone booth to make a phone call.

Fast forward and it turns out that we had been near the top of an s-curve when it came to space tech, but near the bottom of the s-curve of computers, and few people back then were imagining (could imagine?) how different the world would be 50 years later with everyone carrying around internet-connected supercomputers in their pocket.

I think we may be in the same situation today, where people imagine the future and think AI revolution and computational everything, but are mostly missing that we're at the bottom of a biotech s-curve that is going to blow "computer" progress out of the water over the next 50-60 years.

My guess is that in 60 years our computer technology will be largely similar to today, just faster and nicer. But in the same way that the mature industrial revolution made high-precision manufacturing possible which made incredible computers possible, our mature computer technology is now enabling incredible progress in biotech. And the explosion of biotech will lead to mind-blowing changes that are difficult to even imagine.

From this article: no more keyboards / mice? No typing, you can "think" to write. What about recording your own thoughts and then playing them back to yourself later? How much further can that tech go? And there is so much more beyond BCI, we are just understanding the basic building blocks in many areas, but making amazing progress.

I'm excited about it.


Imagine if we could connect your brain directly to a computer. Imagine if you could do things like instantly and precisely recall any Wikipedia article, any news story, any mathematical formula. Imagine if arithmetic goes from a skill you learn to a thing your brain does with 100% accuracy.

Now imagine if your need for speech goes away: why bother using it when you can just “text” from your brain directly to mine and I instantly know what you said without me having to “read” anything. Instant communication. Instant connection to anyone. Instant ads beamed directly into your brain by Google and Facebook.

Now take it a step further: your mind is now a part of a collective globally connected network. The boundary of where “you” exist and where the rest of the world exists is erased. You can feel what other people feel. You can see through the eyes of an Oscar winner, a surgeon, a head of state, a porn star. Police body cams become police mind cams: what was the cop thinking when they took any given action? What we currently have as YouTube celebrities and Instagram influencers become Mindgram stars. You can see and perceive as them.

Now take it a step further. Death isn’t death. Like the paradox of rebuilding a ship one plank at a time, your mind stops existing in your body and occupies a collection of other bodies. Artificial intelligence mixed in with real intelligence mixed in with remnant intelligence. We can’t imagine what this feels like but we are marching towards it getting ever closer every year.

Now take it a step further. People want to get away from this hive mind concept. They disconnect. They play games. They make games where all NPCs are now simulated to the point where they believe they are real. They are here for the benefit of the players but even the players can’t tell the difference when they are in the game.

Now take it a step further. Inside the simulation someone introduces Hard Seltzer. The in game year is 2021 and a player just read that some NPC somewhere had just created a brain/computer interface. He rips off his headset and goes to unplug the computer because fuck this game, all the DLC clearly ruined it.


> Imagine if we could connect your brain directly to a computer. Imagine if you could do things like instantly and precisely recall any Wikipedia article, any news story, any mathematical formula. Imagine if arithmetic goes from a skill you learn to a thing your brain does with 100% accuracy.

You're talking about transferring information FROM computer TO brain. We have no idea how to do it.

Transferring information FROM brain TO computer is achievable with modern tech (and that's what this link shows), but not vice versa.


The awesome thing is that our brain is great at learning how to use and make sense of new inputs. The big one is literacy: you never think about it, but you learned to interpret strange little patterns first as sounds that you hear in your head and then as entire concepts like “cat” or “happy”. The same can be said for spoken language or mathematics or musical notation. I don’t doubt that the human brain will have little trouble learning that X pattern of electrical inputs to a group of neurons means “cat” or the sound of A or even an image of a bird. It won’t come instantly and it won’t be identical to the thing it represents without wiring directly into the visual or auditory regions, but it will give us a new sense and a new language.


It’s not really all that good at taking in new inputs, at least if our brains are anything like those of other mammals. There have been experiments where animals had certain senses deprived (an eye sewn shut, for example) early in life, with the eye opened only after their brain had finished developing. Their brains never "learned" to take in the new sensory signals afterwards. At least not to the extent of a regular individual with all their senses since birth.


I’d be more interested in studies on people who have been deaf from birth and received cochlear implants later in life.


Yep, famously done with two kittens: one with agency to move, the other carried along with its paws dragging on the ground - the passive one was effectively blind and did not react to stimuli because it had never created meaning from its own movement.


I'm reminded of those body modifications where people insert small magnets under the skin of a finger.

After a while a new feeling arises...


At least half the value of learning arithmetic is that it shapes one's neural network in some fashion, in a way that makes it better at certain types of thought. Skipping that learning process presumably skips those physical changes as well.


We could train our "soft" neural networks very efficiently with a computer interface. Maybe not as fast as dedicated software neural networks, but the human mind responds very quickly to feedback loops (sometimes destructively).

Which makes me wonder, what will an overtrained brain look like? What kinds of illnesses are we unleashing on the world by attaching an interface like that directly to the brain?


My pet theory is that anxiety is an overtrained brain reaction.


In a lot of recovery circles there's an underlying concept of "getting out of your head" where the methodology that arises in each circle attempts to get a person to leave the circular thoughts in their heads and do/think something else.

I think this is why psilocybin is so effective for depression: it induces a state of plasticity in the brain that gives someone an opportunity to fill in the ruts they had been mentally pacing in.


Transferring information TO BRAIN is sort of the raison d'être of computers.

This comment FROM brain TO computer, FROM computer TO computer, FROM computer TO your brain.

It's awesome.


I think the point is that this brain computer interface speeds up the from-brain-to-computer part of the interface. The from-computer-to-brain transfer is the same old fashioned way.


We have perfectly good analog inputs, I'd rather we start with improving those rather than open the digital 6th sense box.


How much can you improve the human ear? More importantly, how much can you improve the speed with which you perceive with the human ear and actually understand and retain information from it? You can probably double its efficiency. But can you make it take in information with perfect clarity at 10x the rate? 1000x? A direct interface into the brain could hypothetically bypass the ear entirely. And there is precedent for this already: we went from pointing and grunting, to speech, to writing, to digital writing, to the web. Imagine what it might have been like for me to convey this message to you if we lived in a hunter-gatherer society before human speech was a thing. Now flip that forward: what specialized tools could we use to speed up communication more? About the only things we have left are real-time translation devices and AR capable of augmenting what we are looking at with relevant labels and articles. Beyond that we have no place to improve without inventing a radically new way to interface, and in nature you either improve or you die. Nothing stands still and neither will this.


Let's start with some nematodes and work our way up. Would probably need to own a Cyber-Scooby before having some digital inputs implanted.


Yes, that’s true. But I think two-way communication is a goal of a lot of research, and whole orders of magnitude more difficult, though likely not impossible.


But much more speculative. Just because we can imagine something doesn't mean science can/will achieve it. Brain-to-computer communication seems much more straightforward to achieve from a technical perspective, and has enough potential to revolutionize aspects of our lives on its own.


Actually, and fortunately, it is impossible. This sort of brain model was preemptively debunked by Kant in Critique of Pure Reason and other works.


I think it's pretty hard to argue that any philosophical work, regardless of how important or impactful or insightful it is, can "debunk" anything.


> Imagine if we could connect your brain directly to a computer.

Please, no. I’d just get even more frustrated at how slow the damn thing is.


I'd argue we already connected ourselves to computers, and we're just using the safest but slowest adapters available right now.


We could argue about "safest", but it's definitely the compromise between speed, ease of use and safety that I'm most comfortable with.


I'm curious, what would a solution look like that's even safer than a touch interface (mouse, keyboard, screen) or voice?


Well, removing audio and images, leaving only text, would make it safer.

I, and probably many others, wouldn’t have stumbled upon some of the things I have. They thankfully are now only blurry memories to me, even though merely evoking them still is nauseating.

It would also dramatically reduce the impact of much of the bullshit content out there, since words appear to have much less emotional impact than images (and appear to be much less appealing), and thus be safer to society as a whole.

A text-only interface would also be much less useful and much more annoying to use.


> since words appear to have much less emotional impact than images

To some. My imagination is quite good and as a child I consumed vast quantities of print media. A lot of it wasn't appropriate for a child to consume.


Sure. It can already be a lot. Some of my worst nightmares come from my own imagination.

But we’re talking safer, not absolutely, perfectly safest.

Also, how would you compare that to the close-up picture of an Iraqi soldier burned to a crisp by a Hellfire missile, hanging out of the carcass of his scorched vehicle, staring straight into your eyes with his charred, empty eye sockets?

Or suddenly being confronted, at 12 years old, by dozens of neatly aligned child pornography thumbnails after clicking on some unclearly named link when exploring I2P, FreeNet and TOR because "decentralised", "anonymous" and "censorship-resistant" networks seemed "cool"?

There’s a reason crap people’s magazines are full of pictures, paparazzi photos can be worth what they are, and websites with a photo of a smiling human (and a clear CTA) convert better than those without.

¯\_(ツ)_/¯


Same here, as evidenced when one rainy Saturday afternoon the family were all together after lunch, grandparents drinking tea, me cross-legged on the floor reading, when I innocently asked the room “hey, what does cunt mean?”

Grandma turned puce and dad snatched the book from me, wtf are you reading?

James Herbert’s Rats trilogy. Aged 8.

Warped ever since.

Text is plenty.


I couldn't understand why my parents didn't like me calling my brother an orgasm after I had read the term in a book and liked the sound of it :)


There is so much about it that I think would go wrong, be difficult or bad, and we as a species can't even imagine what it would be like. How do you install an ad blocker on an interface like that?


Very good question.

Regarding the ad blocker, one thing I definitely can’t wait for are true AR glasses that could act as a real-life ad blocker.

Being out in the street doesn’t mean I have in any way agreed to being constantly drowned in and have my attention stolen away by all this bloody noise.


The notion of a neurally connected Facebook or Google scares the heck out of me. Apart from the countless petabytes of data they would collect, just imagine a world where the powers that be can actually tap into your thoughts, perhaps even implant ideas that they think you should have. At a certain point, we lose our individuality and become subsumed by the global AI, game over.

But before that distant dystopian point is reached, I do hope we develop ways for paralyzed people to regain sensory control and live normal lives.


> At a certain point, we lose our individuality and become subsumed by the global AI, game over.

In a way, we already have. Each and every one of us is constantly influenced by and influencing untold numbers of people, and most beliefs and knowledge are more or less "standardised".

Most people follow the school -> (college ->) 9 to 5 -> retire consensus, and even those who believe themselves to be outliers actually behave how outliers are expected to behave, all of us furthering the goals set by others, some of whom have even died long ago.

Actual individuality is quite rare and usually expressed at a very small scale.


I think your identification of individuality with a measure of uniqueness is a mistake.

We're not individuals apart from others, the others are presupposed. The I is an abstraction in the sense that it presupposes social terms to understand itself. You need a reference, like culture, to be correctly understood as "alternative" (although 'peripheral' in terms of some specific aspects is more correct).

If you're not in a community at all, you're not going to reproduce.

Actual individuality is merely recognizing the exercised autonomy by an agent. You are still an individual even when you behave according to existing mores you did not create.

(The extra social esteem bestowed to relative difference is a cultural trend and a historical phenomenon. It does not determine our species, only our current conditions and predicaments.)


I fail to see how your arguments contradict my words.

Each ant constantly exercises its autonomy. Would you nonetheless argue that it has any individuality in the way the GP intended the word?


I make two points that "contradict" here:

Primarily, I argue that we're a social species and cannot employ individuality without a culture and community. This contradicts any notion of being artificially programmed by a deus ex machina. On the contrary, we program and reprogram each other reciprocally and continuously. A strong AI shaping of humankind would be parasitic to existing practice, not a new thing. Kind of like how propaganda presupposes a practice of telling the truth.

Second, and perhaps wrongly so, I address the seeming notion of difference being a moral good; a sentiment I interpret in your final sentence. I might very well have misunderstood it.

Being different is not good, in and of itself. Striving to be different seems to be a recipe for unhappiness, because the internal resources cannot explode the context without a major risk of ending up in social vacuity, where your actions are systemically misinterpreted and the individual misinterprets him- or herself on the rebound.

The so-called historical individual in Hegel, Nietzsche's grand most-free-of-all superman, relies on contingent reality to succeed; in other words, the successful "being different" is a harmony of the individual's actions and intentions with existing but as-of-yet unexpressed social trends. Again, individuality presupposes sociality, completely opposing the view that "true individuality" is so rare.

It's unfair to the ant to talk about them in human terms, and I am not an expert in ants. But a human being expressing her taste is an expression of her individuality, even though it's exactly the same thing "everyone else" thinks as well. What matters is that it expresses her as an individual, not that it differs from the majority.


How would individuality be expressed at large scale, anyhow? Funny thought.


By figuring and trying out new ways of being instead of seeking to conform to archetypes.

By trying out new ways out of recurring situations and creating new behaviours instead of either repeating the same habits or adopting someone else’s response to them.

At an individual’s scale, those would be large and have big impacts.

Much more so than the colour of my living room’s wall, my type of car or defining myself by wearing either shirts or T-shirts and trying to impose on everyone else what I consider to be professional or unprofessional.


What's so dystopian about that?


>Imagine if you could do things like instantly and precisely recall any Wikipedia article, any news story, any mathematical formula. Imagine if arithmetic goes from a skill you learn to a thing your brain does with 100% accuracy.

I don't think this will be that big of an advantage. I know the first 20 digits of pi off the top of my head, but if you ask me to do something with this information it will still take some time. Having to look up the first 20 digits online would probably be a small portion of that time.

You can have instant access to information, but processing can still take time. I think the arithmetic part is actually more interesting. Being able to look at a table of numbers and running a statistical analysis on it would be very useful.

At the same time though, I think all of this is way harder than we think. Look at programming via voice. The current best UI for it is still pretty lackluster.


> You can have instant access to information, but processing can still take time.

You likely have processing power in your pocket. With efficient brain-to-computer and computer-to-brain interfaces, the processing part could be taken care of too; just externalized.

We would probably end up being a lot _worse_ at things like arithmetic though, since we'd use external processing for even more things (I do believe we're already losing ground in things like memory and simple processing by having access to a smartphone at all times).


> He rips off his headset and goes to unplug the computer because fuck this game, all the DLC clearly ruined it.

> The most poignant scene I can think of: in 2001 A Space Odyssey, the character flies on a Pan-Am spaceship to the moon, and then goes into a phone booth to make a phone call.

Interesting that you commit the same fallacy as the parent talks about: you talk about all this complexity in biotech but then assume that there's going to be a headset with a computer in order to connect to the simulation, rather than it being directly implanted into one's brain.


I added that for a bit of color :)


eXistenZ (1999 film) was a mindfuck when I saw it.

But yeah, I think we're getting closer and closer to a true hivemind. It would have to suppress the individual personality, otherwise a lot of people will likely go insane.

Of course, it could be that's acceptable losses or they're cut off from the "advanced civilization" and left to live somewhere far from the cities.

That is, of course, if half the planet isn't flooded and turned into desert by then.


Stephen Baxter's book Coalescent shows how a human hive mind could form via a set of circumstances that could be enforced or simply arise due to the environment. His "eusocial evolution" is a pretty damn creepy idea!

https://en.wikipedia.org/wiki/Coalescent


> Now take it a step further. Inside the simulation someone introduces Hard Seltzer. The in game year is 2021 and a player just read that some NPC somewhere had just created a brain/computer interface. He rips off his headset and goes to unplug the computer because fuck this game, all the DLC clearly ruined it.

Lmao. Hard seltzer isn’t that bad


It’s proof we are in a simulation. What else but a random item generator could have come up with it?


Other countries had it for years. It's called alcopop in the UK and chuhai in Japan (except theirs goes up to 9%.)

"Alcopop" always sounded like a fake word to me but they also insist on calling vaccines "jabs".


This reads like an Asimov story! He really did have some very well informed predictions that seem accurate even nowadays.


> Now take it a step further. Death isn’t death. Like the paradox of rebuilding a ship one plank at a time, your mind stops existing in your body and occupies a collection of other bodies.

Douglas Hofstadter talks about this in "I am a Strange Loop" [1], but he argues that our 'soul fragments' as he calls them are a representation of ourselves in others. Depending on how large of a fragment they hold in our brain, we can perceive the world as they do, and think as the other person. They get to experience the world through us, in a sense, given that we 'allow them to'.

It is an interesting idea, and helps reconcile the death of our loved ones.

[1] https://www.amazon.com/Am-Strange-Loop-Douglas-Hofstadter/dp...


Great post, I enjoyed reading it. To take these ideas further, Permutation City, by Greg Egan.

https://en.m.wikipedia.org/wiki/Permutation_City


I read Permutation City recently and found it in line with what we talk about now, which is so impressive for its time (1994). From OP:

> Now take it a step further: your mind is now a part of a collective globally connected network. The boundary of where “you” exist and where the rest of the world exists is erased. You can feel what other people feel.

This reminded me more of a book I've read online [1]. The synopsis doesn't do it justice, but shows the concept:

> All minds are networked together. Everyone collectively votes to punish or promote anyone for the slightest thought. Cater to your orbiters, and they'll vote to give you candy or slaves or even governorship of a metropolis. Challenge or defy your orbiters and ... well, good luck with that. You'll be down-voted until you die.

[1] https://www.wattpad.com/story/84753330-city-of-slaves-sff-co...


> Now take it a step further. People want to get away from this hive mind concept.

Why would they?

All the preceding paragraphs sound like Borg collective, but hey, if it's voluntary, it actually doesn't sound bad. As long as we can keep adtech away.


Yeah... if it's voluntary.

Working is voluntary.

Sure, you want to move to a farm far away from all the madness or you want to sit in a workshop designing robots all day, but you need money, so you "volunteer" to work some job you barely tolerate in or near a city for all of your best years.

The ads are just constant slaps in the face.


You should check out the Nexus series :)


Or watch Forbidden Planet - "Monsters from the Id"


> Now take it a step further: your mind is now a part of a collective globally connected network.

I think you just described the Borg from Star Trek


> we had been near the top of an s-curve when it came to space tech

Near the top of an s-curve in getting-to-space (“heavy lift”) tech, more like.

I’d say the field of actual in-space tech (i.e. technology that takes advantage of low-gravity / low-pressure / low-oxygen environments) is still pretty nascent. We still treat space as “Earth, minus some guarantees” (i.e. something we have to harden for) rather than doing much with the unique benefits of space.

It’ll probably take having a long-term industrial base operating from space to see much change there, though.

Imagine, for example, living on a space station, and having your food cooked using cooking techniques that assume cheaply-available vacuum and/or anoxic environments. :)


Yeah I agree, there are "generations" of technology, and I think that people in the 60's looked at the progress of transportation tech from 1910 to 1960 and thought "at this pace we'll all be zipping around the solar system like it's nothing by the 2000s." It was not easy to form an intuition of why the first generation of space tech was going to hit physics-imposed limits that would "slow that progress."

To be fair we still made lots of progress with space tech after the 60's, and I think via SpaceX and others we are hopefully now starting a new S-curve unlocked by cheaper access to LEO.


60s space tech also hit a lot of limitations of computers of the time. SpaceX's Super Heavy is in some ways a reimagining of N1's first stage, but with control software that makes it viable to deal with engine failures in flight.

But a big problem is also that the progress in space tech wasn't organic. There was no economic incentive; it was driven purely by propaganda, national pride and political goals. Once that fell away, it took half a century for economic use cases to catch up to a point where private investment was viable.


An interesting thing to think about is how people used to think of getting somewhere. Now we think of things coming to us. In a way, we did achieve the "zipping around", it's just that we did it via the internet and wireless communication. Of course, it is not the same, but it is similar.


Space tech might be considerably further along today, had we in the U.S. not limited our R&D after 1969. A reusable shuttle that cost over $1B per flight was interesting innovation in 1981, but it actually represented a dead end for the U.S. rather than the beginning of a new era of exploration.


Once the moon mission was accomplished, we lacked a clear target on which to stay focused. Build cool things, but what for?

Researchers could concoct all sorts of narratives, but it'd lost the spark that held the layman's attention and permitted the spend of political capital.


From what I gather from the era the next goals were pretty clear, even to the general public: "go to Mars" (or in the Soviet case "go to Venus"), and then go to Alpha Centauri.

If the Soviets had won the race to the moon this might even have happened. But instead the Soviets decided to focus on space stations, and the US declared themselves the winner and did largely nothing (rejecting NASA's Space Transportation System proposal, which was also about a space station and a way to get there cheaply).


The space shuttle was a bad system for a number of political reasons. Its per launch cost was comparable to developing new heavy lifts or developing new missions to the outer planets.

Effectively NASA was spending their R&D money on opex and not getting any political capital back for it. Tragically, the program had been set up with the promise of a space truck; if NASA admitted they didn’t deliver a space truck, Congress would have been unlikely to fund a new program. Had the budget been allocated entirely to either technological development or novel exploration, America’s willingness to fund NASA could have been substantially different.


"food cooked using cooking techniques that assume cheaply-available vacuum " - what would those be?

Vacuum isn't something that's hard to get if you need it; all you need is a motor driving a pump, so if any industrial food process or one of the fancy restaurant chefs had a good use for it, they would already be using vacuum in cooking. A kitchen vacuum sealer is <$100 (I'm assuming that would count as "cheaply available"), though it's not particularly useful for most other cooking purposes that come to mind.


> cooking techniques that assume cheaply-available vacuum

That also assumes that the atmosphere you're venting for that "cheap vacuum" is cheap as well.


You don't have to vent. You can just compress into storage.


So to get your cheap vacuum you first have to make a vacuum in a chamber by sucking the air out before opening it to space? You can just skip the open-it-to-space part and do it on Earth!


While I agree that we will have mind blowing biotech improvements in the next 50 to 60 years, I don’t believe it’s physically possible for biotech progress to be as mind-blowing as what happened in computer tech.

In the last 60 years, computers have gone from $160 billion/GFLOP to $0.03/GFLOP; transistors are now smaller and faster than synapses by the same factors that wolves are smaller and faster than hills, and the sort of computing tech that was literally SciFi in the 60s — self-teaching universal translators, voice interfaces, facial recognition — is now fairly standard.
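
For scale, the improvement factor implied by those two dollar figures, as back-of-the-envelope arithmetic (just a sketch of the division, nothing from the article):

    -- Cost-per-GFLOP improvement implied by the figures above:
    improvement :: Double
    improvement = 160e9 / 0.03   -- ~5.3e12, a factor of about five trillion

    main :: IO ()
    main = print improvement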

60 years of biotech? If the next time I wake is after 60 years in a cryonics chamber[0] and I was told every disease was now cured, that every organ could be printed on demand, that adult genetic modification was fine and furries could get whatever gene-modded body they wanted up to and including an elephant-mass fire-breathing flying dragon, and that full brain uploading and/or neural laces a-la The Culture were standard, I would believe it. But if they told me biological immortality was solved (as opposed to mind upload followed by download into a freshly bioprinted body with a blank brain) I’d doubt the year was really only 2081 — not all research can be done in silico, some has to be done in vivo, and immortality would be one of them.

[0] this would be very surprising as I’ve not signed up for it, but for the sake of example


If we have full-on adult genetic modification capable of the, ah, dramatic example you provide, we’ve certainly figured out a way to get around in-vivo test difficulties. For better or worse, any biomedical advance comes up against that problem sooner or later.

Therapies to slow aging have the particular problem that it could intrinsically take decades to show an effect, sure- but that’s simply reason to be a bit more ambitious and aim for therapies to reverse aging, which could be tested rapidly in already-old patients. :)


> But if they told me biological immortality was solved (as opposed to mind upload followed by download into a freshly bioprinted body with a blank brain) I’d doubt the year was really only 2081 — not all research can be done in silico, some has to be done in vivo, and immortality would be one of them.

But why? Biological immortality is already here for many animals, like jellyfish. Honestly, it seems closer to me than uploading.


Because 60 years isn’t enough time to tell if you were completely correct, or if there was something you missed.


Depends on if the tech is something you do to an embryo and have to wait for, or if it's something you can do to someone who is already alive.

If you apply the process to someone who is already 100 and they hit 160 with no further degradation, it would be, at the very least, a pretty good indicator your process had worked.


I thought so too, until I started my PhD back in 2017. The topic was the application of machine learning to neurosignal decoding.

We are absolutely at the top of the s-curve in this field. There has been no real progress in two decades. The AI boom has just made things worse by redirecting all of the grant money away from better sensors (which could get results in certain niche applications) and towards new algorithms, which have hit an accuracy ceiling because the data their human operators feed into them is complete shit. The field is so beyond saving that an algorithm that achieves slightly better performance on a single old dataset due to sheer chance is presented as a breakthrough.

Data rates would be in the tens of bits per minute with error rates of 20% on a good day and no feasible plan to improve either of these woeful specs.
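
To put a 20% error rate in perspective: even with ideal error-correcting codes, the usable fraction of the raw rate is capped by the Shannon capacity of a binary symmetric channel. A quick sketch of that bound:

    -- Capacity of a binary symmetric channel with crossover probability p:
    -- C = 1 - H(p), where H is the binary entropy function.
    binaryEntropy :: Double -> Double
    binaryEntropy p = negate (p * logBase 2 p + q * logBase 2 q)
      where q = 1 - p

    capacity :: Double -> Double
    capacity p = 1 - binaryEntropy p

    main :: IO ()
    main = print (capacity 0.2)   -- ~0.28: a 20% error rate eats ~72% of the raw rate

So those "tens of bits per minute" shrink by almost three quarters again once you pay for error correction.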

I'll be dead before we have a BCI with enough bandwidth that anyone but a dying MND patient would want to get one installed.


I agree with you, and I'd say that we are perhaps at the bottom, or the middle, of the S-curve in software. Despite all the technological progress in hardware, our software is very slow and buggy. We end up increasing complexity in the name of 'productivity' and going up in abstractions, but we end up lacking fine-grained control and performance in modern systems.

I hope that we see a paradigm shift back towards writing robust and performant systems instead of stacking abstractions. Sure, Monads and Transformers are all fun to use, make code concise and are very satisfying when they compose well, but what's the hidden cost, and is it worth it?

As a user who encounters bugs at a disproportionately high rate, I'd say no. The gain from increasing abstraction is not worth the trade-off.


I think this has more to do with management timeline expectations and income valuation than software development. I do understand that they're almost inseparable, as software has to make money. But timelines need to take into account the "craft" of software creation and not just the desk hours, for lack of better terms =/ Short timelines and quick turnarounds don't leave time for refactoring and quality code creation. First passes tend to be the final draft, more often than not.


This issue, however, exists in more than just commercial software. Personal example: OSes feel bloated. I switched from Big Sur to Mojave on my 2018 MacBook Pro and saw significant improvements in responsiveness with virtually no change in usage behavior. Even software whose supposed features are stability and speed feels bloated, e.g. weird bugs in Unity, and Photoshop is sluggish without any significant increase in visible features.


It makes sense though: the cost of hardware per unit of performance has nosedived, whereas the cost of human labor has probably gone up. Why pay devs to write fast code when you can buy faster computers faster and cheaper?

I wonder when the trend reverses. It must, at some point, mustn't it?


> Why pay devs to write fast code when you can buy faster computers faster and cheaper?

Because you can do more in the same time. Speed and responsiveness are features, the issue is, the general population has come to accept that bugs are not only acceptable, but just that, 'bugs' that you can shoo away by restarting the machine/program.

> I wonder when the trend reverses. It must, at some point, mustn't it?

I hope so. As we have seen with Spectre and now AMD's equivalent, speculative execution is risky and very complex to get right. We can't rely on ever-increasing complexity in CPUs and fabrication processes; at some point quantum mechanics will bite back.


Your experience of using Haskell is that it causes more bugs than less abstract languages?


No, on the contrary, my experience with Haskell is that my code is mostly bug-free, but it ends up less performant because you can accidentally create huge thunks in the heap, and it consumes too much mental stamina.
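
The classic instance of that accidental-thunk buildup, as a minimal sketch:

    import Data.List (foldl')

    -- foldl is lazy in its accumulator: it builds millions of nested (+) thunks
    -- on the heap before anything forces them, so memory use grows linearly.
    leaky :: Int
    leaky = foldl (+) 0 [1 .. 10000000]

    -- foldl' forces the accumulator at each step and runs in constant space.
    frugal :: Int
    frugal = foldl' (+) 0 [1 .. 10000000]

    main :: IO ()
    main = print (leaky, frugal)   -- forcing leaky is what makes the heap balloon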

However, there exists an intermediate plane of abstraction over C and under Haskell that is absolutely horrendous and results in all sorts of weird bugs and unpredictable situations.


I've heard people say that Haskell would be better with eager rather than lazy evaluation, because of the mental burden that it causes. IMO that doesn't seem like a hard problem to solve. We can design pure functional languages with eager evaluation.


Haskell would be better if it used polarity and focusing to make both strict and lazy evaluation first-class citizens in the language. A strictly-evaluated counterpart to Haskell is just ML, which we've had since the 1970s.


I've coded in OCaml, which isn't purely immutable, but rather immutable by default. Because it has mutability, that removes the focus on immutable data structures, making it a very different language.


You can make any module strict by using the Strict language extension (-XStrict) in Haskell.
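
For reference, a minimal module-level sketch of that (the extension can also go in a LANGUAGE pragma, as below):

    {-# LANGUAGE Strict #-}
    -- With Strict on, bindings and function arguments in this module are
    -- evaluated eagerly, so accumulators like the one below don't build thunks.
    module Sums where

    sumTo :: Int -> Int
    sumTo n = go 0 1
      where
        go acc i
          | i > n     = acc
          | otherwise = go (acc + i) (i + 1)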


What about Rust?


It solves memory problems and certainly doesn't shoot your foot with destructors, weird moves and so on like C++ does; it lacks a garbage collector, which imho is a big plus (though, as C# and Java show, it is not necessarily a big problem), and I prefer the trait system over C++'s virtual classes/interfaces. I also like that the compiler is very helpful; it introduces some friction while writing, but I suppose that is my lack of expertise.


With the track records we have of power and technological abuses, I'm not sure I'm excited about having a direct interface to my head.

In fact, I would be more excited about IRL laws tuning down what some are doing with the current indirect interfaces to my head, such as fake news, propaganda, advertising and manipulations of all sorts.


Pretty much. I'm less excited. Because I was super excited 50 years ago when we were at the bottom of the S-curve on computers and I had no idea they would eventually be commandeered to rip off my parents. I think of it as a "wider" view of the impacts vs a "narrow" view of the benefits.

For me, the nagging question is what happens when biotech has figured out biological systems to the point that everyone stops aging/dying[1]. Do those 10 billion people, give or take, become "the humans" for the rest of time?

[1] Lots of evidence that there is no "reason" for cell senescence, it's just an evolutionary afterthought (you succeeded in reproducing, now go die) and like other things can be "fixed."


I wish I could find the source, but when I've looked into this in the past I was reasonably convinced that the population would go up a bit, but not catastrophically, with the basic idea being that people die all the time for lots of reasons, only one of which is "aging" (aging is multiple things, blah blah). So people's lifespans would be much longer on average, but not infinitely long. Something more like 300 or 400 years, with a pretty big standard deviation.


Is not dying even a good idea? It might be the best way to create a stable system at society level - crash-only computing works pretty well in practice compared to never-crashing systems like Lisps.

Main problem is we take too long to build a replacement person.


> but near the bottom of the s-curve of computers

I think the most interesting exemplar of this is Star Trek. As everyone knows, Star Trek is based on 19th century naval warfare. The battle scenes are hilarious-- a captain calling out orders at human speeds to his crew that executes them.

It's been obvious for 40 years that computers would do the fighting, but in 1966 it wasn't obvious, so the paradigm was Horatio Hornblower.


Star Trek is not heavy on fighting anyhow, so these battles are plot devices... how would a computer executing and finishing a battle even before humans figure out that something is happening help with the plot...

In the same vein, why would strikes on a ship result in sparks on the bridge...


We were also at the bottom of the s-curve of social malware and adtech, which turned out to be the real outcome of commoditised computing.

The rhetoric of personal creativity and freedom was flattened by something far less interesting and more toxic.

I wouldn't be so keen to rush headlong into a bioware-connected world until that problem is solved.


This indeed is also my main fear. All the nefarious, intrusive, methods of advertising will get a hundred times worse, but now be directly beamed into our brains. Our hypercapitalist economy will only accelerate and exacerbate this.


> What about recording your own thoughts and then playing them back to yourself later?

More realistic is that your thoughts will be listened to by police in real time, plus you could transmit them to other people through Instagram. Amazon will be doing food delivery from app to mouth, and people largely will not be moving much, as food will be wholly automated end to end and any hobbies will be done via VR goggles. There will also be contraceptives controlled by a ministry of reproduction, working or not based on the couple's social score. edit: I forgot - exposure is going to be a real currency.


I guess I don’t see what is supposed to drive the S curve in biotechnology over the next 60 years. Advanced information technology is a necessary but not a sufficient condition for biotech mastery, and the other conditions are just not in place.


Just to name one thing, I think CRISPR is likely to be seen as fundamental a technological building block as the transistor.


Yeah, CRISPR-Cas9 is an incredible breakthrough. I’m just not sure the regulatory environment is geared up for exponential growth, and my assertion is that reverse engineering biology will be a much steeper climb than the history of transistors or rocketry.

I guess my question to you would be, if we were to take a bet on it, what metrics would we use to know who was right? Biotech patents? In the case of rockets and chips we had thrust and transistor density respectively. I’m not sure how easy it would be to quantify progress, though I have a sense of what some of the tangible applications will be when they show up. Maybe mortality rate would be a good measure? Or human population size (a proxy for mortality + food production)?


> What about recording your own thoughts and then playing them back to yourself later?

Wet dream of surveillance tech.

50 years ago, if you had suggested that someday all your social connections would be instantly available to all serious secret services as well as big corps, people would have laughed at you. You'd be the negative guy, against progress. 1984 fans were the paranoid tinfoil-hat guys. Today it's reality. And only a few actually care.

You don't want tech to automatically transcribe your thoughts. That's the only domain today where real privacy still exists.

There is a nice Black Mirror episode about this.


I think the big revolutions are going to be in biology - fast supercomputers allow us to make advances we couldn't have made any other way.

Computers are an enabling technology for basically every other advancement - in fact it's hard to imagine breakthroughs at this point that don't involve computers in some way - even 'just' as a tool for collaboration.


'Computers' are more like, industrial consciousness, though. What happens when we can grow brain cells on demand? (and support their function). Imagine upgrading yourself like you upgrade your computer. More brain. Less sleep. More hands. Armored skin. Photosynthesis.

Imagine you're a shapeshifter. You can copy any aspect of any living organism on Earth, integrate tech directly into your body, and the smartest people are coming up with new useful things to add.


Aaaand it's all banned. Enhancing your performance? Heresy!

That's assuming such research is allowed in the first place. Humans are innately repulsed by biology, by organisms. Imagine an arm with its skin ripped open to the bone, gushing blood everywhere. Imagine your guts hanging out from your belly.

And the ethical/moral rules we built over thousands of years will not allow the majority to sit by and watch some atrocious (from their POV) experiments.


So, humanity becomes the borg?


Also, taking economics to a new level: once we have got past scarce-resource/pollution-based short-term thinking, a volatile, profit-driven economy will no longer be viable, and there will no longer be a need for the profit motive. Supercomputing could easily take up the task of organising and fairly structuring the economy.


> So back in the 60's, people looked back at the progress over the previous decades, imagined the future, and thought about space. We'd have commercial space flight any day now.

We do have commercial space flight. Commercial space flight has exploded over the past few decades. The sky is full of communication satellites, imaging satellites, sensor satellites, even the occasional vanity satellite.

Everything worth doing in space, we're doing.

What we don't have is things that aren't worth doing in space, like Pan Am flights to the moon.


"What about recording your own thoughts and then playing them back to yourself later?"

Oh hell nah, it's enough craziness in real time, I sure as hell don't need to replay that.


> Fast forward and it turns out that we had been near the top of an s-curve when it came to space tech

It’s true that the types of rocket engines we have today are not that much more advanced than the ones we made 50 years ago. But rocket motors have never been mass produced, and we haven’t poured billions of dollars into research (let alone commercial production). We haven’t had the decades of improvement that came with integrated circuits, but we haven’t done the decades worth of R&D either.


That's actually a good prediction. Biotech is an untapped field for moonshot-scale breakthroughs; even the current mRNA vaccines are only a small part of what will be possible in 10-15 years.


I think this is why the grandparent is perhaps making an error in thinking that e.g. space travel or computing are silos unto themselves.

Having worked in wet labs and deep learning labs, I think we've a lot to gain from increasing our ability to simulate experiments in silico and automate biological processes.

A lot of the room for improvement has been carved out by improvements in machine learning.


I agree with you :)

> But in the same way that the mature industrial revolution made high-precision manufacturing possible which made incredible computers possible, our mature computer technology is now enabling incredible progress in biotech.


True, I missed that!


There’s a startup called MindPortal (YC W21) that seems focused on using tech to review thoughts and control things:

https://www.ycombinator.com/companies/mindportal


I agree.

Your comment made me think of Natalie Wood's last movie: "Brainstorm."

https://en.wikipedia.org/wiki/Brainstorm_(1983_film)


> From this article: no more keyboards / mice? No typing, you can "think" to write. What about recording your own thoughts and then playing them back to yourself later? How much further can that tech go? And there is so much more beyond BCI, we are just understanding the basic building blocks in many areas, but making amazing progress.

While this itself is certainly an interesting concept, I'm worried at its consequences when implemented in our hypercapitalist economy: We'll almost certainly, along with this incredible interaction technology, have advertising beamed directly into our consciousness or something similarly intrusive. It's honestly terrifying how much worse intrusive tracking and advertising would get with this technology.


The Mood Organ from Do Androids Dream of Electric Sheep


We have something today that's 3-4x faster than pecking on a smartphone keyboard: voice-to-text.


I highly doubt that; I type faster with predictive text than I speak.


Entirely depends on the individual. With a virtual on-screen keyboard, I can rarely type even one word without error. It's like my fingertips are just too big to hit the keys accurately. Swipe-keying is somewhat better/faster but I'm much better with real physical keys. Speech-to-text used to be pretty bad but with my current phone it's better than typing, for me. The downside is I hate talking to computers.


Have you used SwiftKey? I find it corrects 99% of my errors, to the point where I just press keys in the vicinity of what I want to type and it comes out correct.


I've used talon voice for software development as a replacement for typing a while back due to an arm injury. It ain't faster than typing.


But it's only 90% accurate; that's 10% short of accurate enough.


But it is improving, and will be good enough long before there's a viable brain-computer interface.


I see it the other way:

We’ve had a revolutionary S-curve with computing/artificial reasoning in inventing transistors — but we know we’re still at the bottom of two related S-curves: quantum computing (an exponential speed-up on many problems of interest) and IoT/smart systems, where our automated reasoning is embodied in something. We know somewhere up those curves lies the ability to make new kinds of minds.

I think both of those will prove to be bigger than bio-science... and moreover, bio-science will require them to a) do the experiments and b) find uses for the technologies.

I think human augmentation will turn out to be like spaceflight: humans are near the top of their S-curve already.

Instead, I think biology research won’t come into its own until AGI research does and we have an idea of how to make new biological systems.

Of course, that might kill us all. Horribly.


From the moment I read "full broadband fidelity," I began looking at this press release as a product of dark-pattern science communications rather than an announcement of scientific progress. The news is that the connection is wireless and high-bandwidth. Low-bandwidth wireless communications have already succeeded elsewhere. Innovation could have occurred in the interface device in the brain, the broadcast chip in that device, the physical link layer, the protocol layers above that, the external receiver, et cetera, but there are no details we can glean except that the connection is 'virtually' as good as a physically wired connection. Wherever details are missing, we can assume neither that they were overlooked by the writer, nor that they were deliberately left out. We can assume, however, that they did not contain any information which furthered the author's purpose.


I guarantee that at some point in the future, if we make it far enough, there will be an overwhelming social argument made that everyone should get super integrated brain interface devices implanted, "for the good of everyone." The argument will probably go something like this:

>The brain interface device X smooths out volatile emotions, reducing risk of angry outbursts that result in violence. By not getting a device installed, you are putting everyone at risk to your violent outbursts. Employers and businesses have the right to exclude someone who is at a higher risk of inflicting violence.


Yep, get a [Brain Passport] for [Public Safety] or be denied public services. Actually, this is something that will likely happen. It will first start with violent criminals and then gradually make its way into the general public - with appropriate cherry-picked data and statistics showing its advantages.


I suspect you're right, and I already don't want this technology to exist, or its creation to be pursued.

A brain-computer interface will, IMO, most likely be used to control brains, not computers.


This unfortunately makes a lot of sense in the current context in which we live, but I am optimistic enough to believe that sometime, somewhere in the world, people will join forces to push back against such use of the technology and in favor of a "free-er" society of individuals.

I like to think of the ideas behind the formation of the USA as a similar spirit.


I would be against it, I'd rather live in a forest and hunt wild rabbits and...

> smooths out volatile emotions

I volunteer!

Seriously, that's how one would buy me. Reduce my emotions? Maybe remove them? Plug me in, buddy :D


If such a brain interface allowed X evil actor to actually control people, you wouldn't have arguments, you'd have a direct takeover (or several different evil actors dueling).

But if this interface was simply like a drug or some similar effect, I doubt there'd be enough of a combination of interests to get people on board.


This is reminiscent of the novel Manna [0]

[0]: https://marshallbrain.com/manna1

FWIW I think this is probably many, many decades away from being even possible.


If anything, the argument to get brain interfaces implanted would be that it’s cruel to children not to implant them.

It’d be like withholding a vaccine against a genetic flaw, when the vaccine is cheap and sitting on the shelf ready to use.


This stuff is no doubt promising, not least for disabled people, but this Neuralink-type stuff seems terrifying. Anyone who is excited about having an Internet connection into their brain needs their head examined. Us humans don't exactly have a good track record of avoiding awful unintended consequences when we introduce new tech, no matter the benefits.

It all starts out with an innocent sales pitch, "we're just connecting people" etc etc, but whatever we build ends up reflecting human nature and our social and economic context in all its myriad ways, good and bad.

We don't control technology or even have the foggiest idea how anything we build will pan out. We just make it look like there was a masterplan after the fact when in reality it was a headless blunder.

I'm the sort of person that yearns for computing to be done sitting on a chair looking at a big screen. I don't even like mobile internet devices that much in terms of what they've done to us. Beaming this straight to our brain? I'm out thanks.


I've been thinking about this quite a bit: there seems to have been a shift sometime during the last 50 years where instead of computers and computing being a tool to be mastered and controlled by humans, we've seen computers switch the power dynamic and render us humans the tools.

To me, that clash of "visions" was supremely represented in Apple's "1984" advert.

I'd love for computing to get back to that utopian vision of the bicycle for the mind but if you look around these days it's more of a train-wreck of the mind.


"I'd love for computing to get back to that utopian vision of the bicycle for the mind but if you look around these days it's more of a train-wreck of the mind."

Same here. I'm increasingly not interested in machines or software that I don't control at least to some extent, that I'm not free to modify, that aren't about empowerment, learning and creativity for the user. It's not just based on a personal desire, although it is partly that. It's the only way we can stay free. This isn't just some high-minded hacker ideology; it's literally about liberty.


"50 years where instead of computers and computing being a tool to be mastered and controlled by humans, we've seen computers switch the power dynamic and render us humans the tools."?

What?? Where exactly in the current world do you see this? It doesn't reflect reality at all...


Walled gardens, ad tech, lack of full ownership over your machine, less general purpose computing. I see where the writer of this is coming from for sure.


You may like the Adam Curtis documentary series "All Watched Over by Machines of Loving Grace", which basically posits that same thought.


"Anyone who is excited about having an Internet connection into their brain needs their head examined."

Perhaps you can have this done with a POST request in the future and kill two birds with one stone.


Being careful is always good, but the dangers are overestimated and the benefits underestimated, by most people.

In terms of security, you could solve 80% of the problems you mention with a hardware on/off switch, which would cost next to nothing. A large part of the rest of the problems could be solved by limiting the information flow to a certain format/context: imagine just a 4K screen inside your mind that you can look at if you wish, but you always know that information comes from that screen and not from somewhere else. Even such a dumb usage would be a tremendous upgrade for humans everywhere. How do we know it won't give you any "thoughts"? Just limit the information flow to pixels. The interface won't even know what information you look at on your internal screen; the most mischievous thing it will be able to do is maybe make things black-and-white, or make something flicker in an annoying manner.


Last night someone put on a history documentary about Oliver Cromwell's invasions of Ireland. It showed that the spread of the printing press around that time gave news of the Irish rebellion a far wider reach inside England. Oliver Cromwell channeled that nationalism into his own political gains and caused a wake of destruction through Ireland.

It's exactly the same phenomenon as we see now with social media.

This is the natural cycle of human innovation in communication.

I want these interfaces to happen despite the growing pains we will have.

Besides, it doesn't matter what we want. They will happen either way.

Think about it in terms of larger time scales and the growing pains of new technology seem like a more worthwhile cost.

I know it's dismissive to label real human suffering and death as growing pains, but this stuff is inevitable, the results are predictable, and on a large enough timescale the technology will yield immense fruit.


So the innovation here is not the neural probes (200 electrodes, same as the current state of the art), nor the connection from the neural probes to outside the skull (a physical port, not wireless), but only that there is a wireless dongle that sits on top of the physical port and connects to a server somewhere? Yawn?


I think, yeah "Yawn" from a tech perspective. But for average people, the idea that these folks are hooked up and streaming from their homes, with minimal visible hardware is new. It feels more sci-fi than someone in a research center with a ton of wires on their head.


> In the current study, two devices used together recorded neural signals at 48 megabits per second from 200 electrodes with a battery life of over 36 hours.

So roughly 10 Gbps (48 Mbit/s × 200 electrodes), not bad. But this isn't about the raw tech.

For people doing medical research, it lets them gather data all day instead of just appointments. That's the big change.


I didn't read it as 48 Mbit/s per signal, but rather 48 Mbit/s per device. Did I read it wrong?


I actually read it as 48 Mbit/s for the 2 devices combined, or 24 Mbit/s per device.


Ah, yeah, thanks..

"In the current study, two devices used together..."


I doubt that's physically possible. It's like trying to access your RAM by reading EM fluctuations from outside your computer: sure, you can see when there's a lot of read/write activity, but there's no way you read the memory itself.


It seems like it's closer to measuring which areas of the "die" are busy. Like "the MMU is doing something hard" or "this adder circuit in the ALU is idle", etc.


Call me a cynic, but I don't have a lot of optimism for brain-computer interfaces. I can barely control my own thoughts, let alone understand how they are made or where they originate. We would need an enormous leap in our understanding of the brain, and of our consciousness within it, to make this in any way a viable input method.


IMO, the bigger problem, which nobody talks about, is that if we have brain-computer interfaces, it will be trivial to use them to control our emotions. Once that happens, it seems to me we'll basically stop being human. People are going to want to feel whatever emotion is convenient in that moment.

Don't enjoy your terrible dead-end job? Now you do. Don't enjoy your abusive relationship? Now you do. Don't feel comfortable with societal issues at large? Now you do. Empathy gets in the way of doing your job? No problem.


I don't know, I think that is a big jump and definitely not trivial.

"Reading" neural activity is much different than "writing", and modifying the circuits/neural activity precisely enough to modify emotions.

These devices are typically cortical-surface-level electrode meshes placed over the motor region of the cortex, while emotions are thought to come from various deep brain structures. Not saying it won't happen, but we are much, much further from the latter than the former.


I don't know about that. You're right that emotions seem to come from deeper structures, but these structures are also more primitive. We're able to modify emotions with something as simple as amphetamines, so controlling them with a few well-placed electrodes is maybe not so difficult. Seems to me that as brain interface technology starts progressing, we're going to hit an S-curve of technological progress that will make it advance very rapidly in one or two decades.


It's definitely possible, but I guess what I am saying is that research in this area hasn't really been explored in the context of humans.

In the lab, we use targeted genetic manipulations such as optogenetics [1] or chemogenetics (see DREADDs [2]) to achieve precise circuit manipulations that can (maybe/kinda) change emotional state (see [3] and [4] for manipulation of fear in mice; sorry, they may be paywalled, check sci-hub). But these are impractical in humans at the moment because they require specific genetic backgrounds (a CRISPR-modified mouse expressing a specific artificial DNA sequence in certain types of neurons from birth), viral injections to add other genetic constructs that interact with the from-birth one, and implanting lights or delivering drugs directly to the brain region where the cells are. Precise electrical manipulation is not really done, even in animal labs, because it is not precise or controllable enough for these kinds of things.

Again, I have no doubt that we will get there, maybe in a few decades too. But the techniques are much further from human use than the "reading" technology demonstrated here.

[1] https://en.wikipedia.org/wiki/Optogenetics [2] https://en.wikipedia.org/wiki/Receptor_activated_solely_by_a... [3] https://pubmed.ncbi.nlm.nih.gov/28288126/ [4] https://www.nature.com/articles/npp2015276/



On the other side of this, some people require external emotional regulation because their brains fail to do so for them and take medication for it. So having this as a treatment option isn’t necessarily something we should avoid pursuing for cases where medication isn’t an option for whatever reason.


Yes obviously, just like there is a legit case for brain implants for paraplegic or wheelchair-bound people. However, it's easy to see how things could go way too far and lead to a world of VR addiction and dehumanization.

https://cdn-images-1.medium.com/max/1200/1*gQVf0RpFjaYfS7GJJ...

As always, technology is a tool, and a double-edged sword. It's just hard to predict how it will change society sometimes. IMO brain implants are actually way more dangerous than genetic engineering ever could be. People creating designer babies with blonde hair and a higher IQ is nothing compared to the risk of people no longer being able to feel empathy and sadness in response to problematic situations. Maybe we'll even stop feeling love, because it's just too inconvenient.

Oh and uh, yeah: brain implants could also make it possible to implement the notion of thoughtcrime. I hope, for your own sakes, that your political beliefs and opinions are in line with what the majority has deemed correct.


We already have segments of the population who have to deal with various forms of addiction and dehumanization and this has not stopped us in the development of new medications and technologies.

Should we prevent the development of this technology due to its potential for abuse? Should we develop the technology for its potential to benefit ourselves?

Obviously brain implants that can read and write thoughts come with an extraordinary amount of power and it is both wonderful and terrifying to imagine the potential benefits and dangers that it could provide us.

I think we will do what we have done throughout history: if the technology is possible, someone somewhere will eventually develop it, regardless of our qualms about its potential to destroy people.


But one thing that separates BCI from everything else is the implant itself. Sure, people have addictions, but they also have the choice to stop and focus on the things that are important to them (I know overcoming addiction is way tougher than just writing about having a choice, as I'm still going through it myself, but one can surely get out of it). What differentiates this from earlier technological advancements is that now there is hardware wired directly into your brain, whereas previous technology stayed outside your body. I don't think I could accept that without an impending worry about the implant having direct access to my brain.


This is the problem with computer->brain interfaces (which we don't have), not brain->computer interfaces (like moving mouse pointers via direct brain->computer connection).


If you have a good brain->computer interface you can just use traditional methods in a tight feedback loop to manipulate the brain. We have more than enough methods to reward or punish people. Simply reward them for good thoughts, punish them for bad ones, and I don't see how their behavior wouldn't change.
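A sketch of what that tight loop would look like, purely illustrative; every function here is a stand-in for capabilities that mostly don't exist yet:

    # Operant conditioning using a read-only BCI plus ordinary external levers.
    def conditioning_loop(read_brain_state, classify, reward, punish):
        while True:
            state = read_brain_state()        # brain -> computer: what exists today
            if classify(state) == "approved":
                reward()                      # ordinary channels: money, content, status
            else:
                punish()                      # no write access to the brain required

The point is that the scary scenario doesn't need a "write" interface at all, just decent decoding plus the reward machinery society already has.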


> Once that happens, it seems to me we'll basically stop being human. People are going to want to feel whatever emotion is convenient in that moment.

I would say it is the other way around. Many animals have emotions; it is sophisticated abstract thinking that makes us human. If one could gain full control of the emotional part of one's brain, that would make one truly human.


If we could actually get control of our emotional part, it would be used by the military to create superhuman soldiers devoid of any empathy, augmented with a rage-mode switch that turns off fear and floods them with adrenaline.

Personally, I'd use it to deal with my bipolar.


That's the purpose of basic training already, but we don't want soldiers with no empathy. Soldiers are far too valuable to turn them into rage monsters, we need team players who will actually save each other if they're hurt and won't randomly attack civilians.


IMO we're also pretty close to being able to have robotic infantry drones, which are going to make enhanced human infantry kind of moot anyway. Imagine if you could take something like the Boston Dynamics ATLAS robot, stick a gun on it and paradrop it into enemy territory. It might need to be a bit lighter and more agile, but we're not that far off. It doesn't even necessarily need AI, you could remote-control the thing.


Robots would need to be faster, lighter, more agile, and have longer battery life.


You're ready to be hired by whoever will market these devices.


I was pretty excited when I started learning about BCIs in college. Then I realized that it's not that my hands and eyes are some sort of limited-bandwidth channel, but rather that my brain is not really able to increase its throughput. How many of you code at the speed you type?

While I appreciate that they are game changers to people with accessibility problems, they’re essentially not worth the risk to anyone else.


Well, here's something that might change your mind: there is a device called the Muse headband that actually uses your brain signals to help you meditate. Meditation is a tool of mindfulness, and with some practice it helps with exactly that: controlling and knowing your own thoughts.

So already at this (primitive) iteration of the tech there is a device that connects to your brain and helps you do something, something you would find very useful no less, if your goal is to learn to operate your brain in the best way possible. A lot of people report that it is effective, and it was also calibrated on Tibetan monks who meditate for a living. There is a similar institutional offering with more thorough processes called "40 Years of Zen" (but you have to go to a laboratory).

Now imagine what the next iteration would make possible: for example, how good would a device like this be if it could read signals directly from the brain instead of having to read them through the scalp?
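Roughly, what a headband like that does is closed-loop neurofeedback. Here is a minimal sketch of the idea in Python; this is not Muse's actual algorithm, and the band edges, sample rate, and feedback mapping are all assumptions:

    import numpy as np

    FS = 256  # Hz, a typical consumer-EEG sample rate (assumed)

    def relative_alpha(eeg_window):
        # Share of signal power in the 8-12 Hz alpha band, which tends
        # to rise with relaxed, settled attention.
        windowed = eeg_window * np.hanning(len(eeg_window))
        power = np.abs(np.fft.rfft(windowed)) ** 2
        freqs = np.fft.rfftfreq(len(eeg_window), 1 / FS)
        band = (freqs >= 8) & (freqs <= 12)
        return power[band].sum() / power[1:].sum()  # skip the DC bin

    # Fake one second of "EEG": noise plus a 10 Hz rhythm.
    t = np.arange(FS) / FS
    eeg = np.random.randn(FS) + 2 * np.sin(2 * np.pi * 10 * t)
    print(f"relative alpha: {relative_alpha(eeg):.2f}")
    # A real device maps a score like this to feedback, e.g. calmer sounds.

A device reading directly from cortex would run the same loop with far more signal than a few electrodes fighting through skin and skull.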


This is what I was thinking. As a creative output device, my brain thinks way ahead of the code I'm currently typing. This tech would actually slow down the process on the output side. Where it will thrive is input: feeding data to the brain, with the brain generating just enough of a query. So ultimately the future superhumans will be the people who can generate precise queries much faster.


The steam engine was in use for generations before the theory of thermodynamics was discovered. So in many cases it is possible to engineer a technology without fully understanding the underlying principles that govern its behavior.


In many ways, the current explosion in deep learning and its applications to different fields are similar - we don't quite have a fundamental understanding of why things work, and theory is still catching up to how it is being applied.


How long until programmers are forced to ditch antiquated methods of input like hands and keyboards in favor of streaming thoughts directly to the IDE? Can't wait to be forced by market forces to adopt such an interface and then promptly get ads streamed back, or get brain-hacked.


That attitude will have to go, because the space that attitude occupies is needed for an upgrade. I dread this world. One could glimpse it in the Firefall novels of Peter Watts: "experts" who upgraded themselves into crippled "savants", able to outperform all baselines but incapable of feeling their own fingertips.

One can already feel that pressure with substance abuse: amphetamines to stay awake longer and perform better, hallucinogens to be more creative.

Imagine having to sacrifice ever more parts of yourself to stay relevant. What a horrific freak show we will become.


> Imagine having to sacrifice ever more parts of yourself to stay relevant

The Little Gods by Jamie Wahls explores this very notion, in the context of a parent and her child and how each responds to the expectations of an augmented society.

http://compellingsciencefiction.com/stories/thelittlegods.ht...


That was an interesting read - thank you.


Don't know about you, but the thoughts in my head are such an unordered and incomprehensible mess that it is really hard for me to see any benefit in direct streaming. Slowly typing the thoughts down, rereading them, and evaluating the consequences multiple times is the only way to get any sense out of my head. And I like to think of myself as a relatively good thinker...


All I can say is, I'm glad I'll be dead in a handful of decades.


Suppose that while you're still around, a brain extension enables you to greatly extend your lifespan. Would you agree to the implant? Totally hypothetical, of course; I myself would not have a ready answer. But for paralyzed and nerve-damaged people, it seems to me adopting this technology would be a no-brainer, so to speak.


I think they already covered that in The Matrix


What about other thoughts? Does my employer get access to those?


Only for advertisement and performance reviews. You will have to change who you are to fit into the company, I'm afraid...


> You will have to change who you are to fit into the company, I'm afraid...

As if you don't have to already? The false consensus paradox in corporations is overwhelming.


Just meditate on your breath or play pazaak in your mind to distract the mind reading


I think it'll have to wait until a non-surgical BCI gets decent performance. The study in the article uses implanted electrodes - forced surgery would be a nightmare.


Wow, that's pretty surreal. An actual person with two high-density connectors on their head, together streaming 48 Mbit/s of neural data. Parts of Philip K. Dick's stories are almost real. Though I get that the data coming out is still pretty low-fidelity and crude.


This is huge. From what I've read, a lot of neuroscience is bottlenecked by the difficulty of reading neurons through the skull. Removing that bottleneck will enable whole new types of brain/mind/consciousness research.


This does not read neurons "through the skull".

It wirelessly transmits the data from probes already in the brain. The innovation is that they do not have to be physically tethered to get the data.

> The unit sits on top of a user’s head and connects to an electrode array within the brain’s motor cortex using the same port used by wired systems.


I’m aware, that’s why I said this will remove the bottleneck :)


Neuralink's tech is definitely more advanced, but they haven't gotten it into humans yet and there are still issues with the longevity of the threads inside an actual human brain.


Does it look like they have some good avenues for fixing that?


I can’t wait until I forget to charge my brain reader overnight and can’t access my computer.


You have not received the proper number of ad impressions this week. Network functionality will be restored once your Facebook BCI chip detects that you have fully met the terms of service, which you legally accepted in order to receive this free implant. Until such time, you will not be able to access Facebook's BrainNet. We urge your compliance, so that you may once again virtually chat with friends and family and work remotely with your employer.


"You have been disconnected because payment was declined..."


More like

"We have updated our privacy policy. Please read and accept these terms to continue"


A high-bandwidth wireless link for existing human brain interfaces.

They miniaturized the receiver and slapped a wifi chip on it.

Cool.

They aren't beaming thoughts into brains tho.


BCI could unlock human immortality.

BCI is a hard problem, and the risk-to-reward ratio for current-generation tech is too high except for a few isolated cases: non-invasive devices, which are low-resolution, and disease remediation, which is basically a measure of last resort. Given the poor payoff, the technology doesn't attract much investment.

If we can get out of the gravity well / steep energy slope that prevents us from reaching the pinnacle, we can maybe one day become capable of performing brain copies and uploads, which effectively achieves immortality. This would be the most impactful technology ever developed for humans, should we still be relevant at that point. There's a huge hill to climb in getting there, and it's unlikely we'll see it within our lifetimes, if ever.

AGI, if developed first, would probably see little need in co-opting messy and overly-complicated human machines.

And there's always the chance we destroy ourselves first.


Well, a brain copy is not exactly the same as immortality; it just means your memories and an amalgamation of the neural networks that form your unique personality can be duplicated. The entity that results would be a separate individual.


I'm not sure I would notice or care.


I think what they are saying is that you would not continue to perceive and have experiences through that digital mind; it may as well not even be a duplicate of your mind, as far as you’re concerned.

Unless there’s some yet unknown quality of consciousness that says if your brain system stops, and another brain system just like it arises, that’s enough to have continued perception through that new mind. Is that what you meant?


Does that matter? We still value digital photographs. They're not the original scene, but they have value.


Of course my death matters if I'm trying to be immortal


Next step: add a telnetd server and give it the root password of 123.


So what constitutes high/low bandwidth in this context? The article doesn't give any point of comparison (e.g., against 1 Mbit/s).


How does 48 megabits per second compare to the tasks a computer does?


Next stop: telepathy.


Neuralink ???.



