How much bandwidth does the spinal cord have? (reddit.com)
317 points by jacobedawson on July 12, 2019 | 110 comments



The top-voted answer assumes that a neuron firing or not firing (once per refractory period) can be counted as one bit for the purpose of calculating bandwidth.

But I think the peripheral nervous system uses firing frequency to encode intensity, so I'm not sure you can really equate one firing with one bit.

For example, over 15 refractory periods there could be anywhere between 0 and 15 firings, thereby encoding 16 different possible intensities. That would effectively be only a 4-bit message per 15 periods, not 15 bits per 15 periods.
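
A tiny sketch of the gap between the two assumptions, treating the window as pure rate coding (only the spike count carries intensity):

  import math

  refractory_periods = 15

  # If each refractory period is an independent on/off symbol (the top
  # answer's assumption), the window carries one bit per period.
  binary_bits = refractory_periods                      # 15 bits

  # If only the spike count in the window encodes intensity (rate coding),
  # there are refractory_periods + 1 distinguishable levels.
  rate_coded_bits = math.log2(refractory_periods + 1)   # log2(16) = 4 bits

  print(binary_bits, rate_coded_bits)                   # 15 vs 4.0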


The classic goodput vs throughput debate. E.g. Gigabit Ethernet is most commonly actually physically transmitted at 1.25 gigabit/s because of 8b10b coding. Ethernet requires interpacket gaps, and the packetization itself adds overhead. Then of course there are other protocol overheads underneath you could consider, and the overhead the data you're transmitting carries due to its own encoding.

In the end there is really "bandwidth", which is how much symbol "space" is available, and various levels of "goodput", which is the rate of whatever you're calling useful data.
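
For a concrete feel for the gap between line rate, throughput, and goodput, a back-of-the-envelope sketch for gigabit Ethernet over fiber with full-size frames (all figures are the usual textbook overheads):

  # Rough goodput for 1000BASE-X gigabit Ethernet with full-size frames.
  line_rate_bps = 1.25e9        # 8b/10b symbol rate on the wire
  data_rate_bps = 1.00e9        # after 8b/10b decoding

  payload  = 1500               # max payload bytes per frame
  framing  = 18                 # Ethernet header + FCS
  preamble = 8                  # preamble + start-of-frame delimiter
  ifg      = 12                 # minimum interframe gap (in byte times)

  wire_bytes = payload + framing + preamble + ifg       # 1538
  goodput_bps = data_rate_bps * payload / wire_bytes    # ~975 Mbit/s

  print(f"~{goodput_bps/1e6:.0f} Mbit/s of payload on a 1.25 Gbaud line")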


> Gigabit Ethernet is most commonly actually physically transmitted at 1.25 gigabit/s because of 8b10b coding.

Fiber often uses 8b/10b. Over copper it's much more complicated.

USB 3.0 is an example of simple 8b/10b. 4Gbps data, sent at 5Gbps.

Ethernet over twisted pair, well...

10mbit Ethernet turns each bit into either a 01 or a 10 and sends those on the line.

100mbit Ethernet applies 4B/5B encoding to get a 125 megabaud stream, then uses this to drive an encoding called MLT-3. In MLT-3, the final output cycles through -1, 0, +1, 0 over and over, and each bit decides whether the output advances to the next level in that cycle (1) or stays where it is (0).

Gigabit Ethernet over twisted pair does a complicated encoding to turn batches of 8 bits into 4 separate voltages from -2 to +2 (so 5 levels). It then sends these simultaneously over all four wire pairs, at 125 megabaud on each pair.
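
A toy sketch of that MLT-3 rule, to make the "advances or doesn't advance" step concrete:

  # Toy MLT-3 encoder (as used by 100BASE-TX after 4B/5B): the line level
  # cycles 0, +1, 0, -1, ...; a 1 bit advances to the next level in the
  # cycle, a 0 bit holds the current level.
  def mlt3(bits):
      cycle = [0, +1, 0, -1]
      idx, out = 0, []
      for b in bits:
          if b:                     # 1 -> step to the next level
              idx = (idx + 1) % 4
          out.append(cycle[idx])    # 0 -> repeat the current level
      return out

  print(mlt3([1, 1, 1, 1, 0, 1, 0, 0, 1]))
  # [1, 0, -1, 0, 0, 1, 1, 1, 0]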


baud vs. bps


Other things, like unusable symbols (e.g. the IFG mentioned above), make it more nuanced than that.


Timing is important when frequency encodes intensity.

So firing at the maximum frequency encodes 100% intensity, but over a window you could effectively see 15 pulses at 100%, or 14 pulses at 99.99%, 99.97%, 99.98% ... or really any combination, depending on exactly when each pulse ends.

This means you need to know how accurate the timing information is to figure out the bandwidth, because each pulse effectively encodes a real-valued number, not a bit.

PS: For the most ridiculous upper limit: if a neuron could fire 500+ times a second with a 1 ms refractory period, and each interval encoded a number between 0 and ~10^40 (~2^130) as the number of Planck units of time beyond the minimum, that's ~50,000+ bits per neuron per second.
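
Spelling out that arithmetic (taking the Planck time as ~5.4e-44 s), it lands around 65-70k bits/s, consistent with the "~50,000+" figure:

  import math

  planck_time_s = 5.39e-44      # seconds
  refractory_s  = 1e-3          # 1 ms minimum inter-spike interval
  max_rate_hz   = 500           # spikes per second at that interval

  distinct_timings = refractory_s / planck_time_s       # ~1.9e40
  bits_per_spike   = math.log2(distinct_timings)        # ~134 bits
  bits_per_second  = max_rate_hz * bits_per_spike       # ~67,000 bits/s

  print(f"{bits_per_spike:.0f} bits/spike, {bits_per_second:,.0f} bits/s")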


Good point and nice estimation! Measured bits/spike (e.g. in cricket sensory neurons) tops out at 3.2 bits/spike [1]. Most sensory neurons across all species top out at ~300 bits/second.

So if we assume a similar bit rate for spinal neurons, we get 7.5 GBps, not 16.625 GBps.

[1] : https://www.nature.com/articles/nn1199_947


Though, it probably never reaches even remotely full capacity except in rare events like seizures or when jumping into cold water. Also, there is a fair amount of redundancy in neural information processing due to the noisy computation substrate. I would not be surprised if the channel capacity is something like 100 Mb/s, but transmission rates are less than 10 Mb/s most of the time.


You're fine-tuning one parameter of a rough order-of-magnitude estimate. I think each of the parameters can be debated in the same fashion; your best hope is that the errors cancel out and you end up with a result that is only a few factors off.


So, sort of like 4-QAM then?


Do bundles of nerves (such as a spinal cord) suffer from crosstalk between fibers, limiting their bandwidth like copper cables? This was one of the drivers in telecommunications towards the adoption of fiber optics over copper cables. The 600+ pair copper telco cables could not sustain 600 separate DSL circuits, as the high-frequency signals would interfere with each other. In many FTTH deployments, fiber was a cheaper transport than installing additional multi-pair copper cables to support the required customer demand. Of course, the enhanced bandwidth capability of fiber optics, together with the ability to have many more "circuits" over a single fiber, were other drivers towards the adoption of optical broadband. If there is no crosstalk between bundled nerves, is that only because of the myelin sheath? Obviously any significant interference might have an effect on the calculations of spinal cord bandwidth.


The nerves have a fatty layer called myelin that acts as sheathing/shielding around the nerve fibers to prevent crosstalk. https://en.wikipedia.org/wiki/Myelin


Myelin's function is to speed up signal conduction along the axon, not to prevent crosstalk between axon fibers.


Sometimes, it's called "multiple sclerosis"


And sometimes ALS. There are a few diseases related to sheath deterioration, weakening, shortening, etc. It's pretty important stuff.


Similar question on Quora from 2012 [1], broken down per sense:

  Vision: 10Mb/s => 100Mb/s
  Hearing: 30Mb/s
  Touch: 135Mb/s
  Smell: 100k neurons
  Taste: 100kb/s
  Proprioception: ??
  Balance: ??

  Total: ~10Mb/s=>~1Gb/s

Internal brain bandwidth is also worth mentioning, as this is the last remaining wetware advantage over hardware, due to the three-dimensionally fully connected heterogeneous cortical substrate. I can't seem to find a figure on that though.

https://www.quora.com/How-much-bandwidth-does-each-human-sen...


I really question the hearing one, at least.

Shouldn't this basically just be the bandwidth of a headphone signal which is at the lowest quality where you can hear degradation if it goes any lower?

For me that's something like 3MB per minute for well-compressed Ogg files, or 50KB (about 400 kilobits) per second. Yet that answer's 30Mb/s figure for hearing is roughly 75x higher than that.

I can imagine some small difference, but to say my ear has evolved to transmit 75x more sound data than I can actually perceive sounds off.
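
Redoing that ratio with the figures as stated in this comment and the Quora answer (a sketch, not a measurement):

  ogg_bytes_per_min = 3e6                   # ~3 MB/min, well-compressed Ogg
  ogg_bps = ogg_bytes_per_min * 8 / 60      # ~400 kbit/s

  hearing_claim_bps = 30e6                  # "Hearing: 30Mb/s" from Quora

  print(f"{ogg_bps/1e3:.0f} kbit/s perceived vs "
        f"{hearing_claim_bps/1e6:.0f} Mbit/s claimed "
        f"-> ~{hearing_claim_bps/ogg_bps:.0f}x")    # ~75x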


> Shouldn't this basically just be the bandwidth of a headphone signal

One issue with this approach is that it's very hard to tell the difference between data that was never sent versus data that is there but our developing brains were trained to ignore because it didn't turn out to be useful.

A simple example of this would be language differences, where certain important language features simply aren't noticeable to non-native speakers, despite having the same sensor-hardware.


Well, don't neurons get pruned as you grow up? So you don't necessarily have the same hardware.


I was thinking more about everything leading up to the brain, such as the composition of your retina and optic nerve.


I'm wondering... For the most efficient lossless audio format in existence now, is it still inefficient with respect to what a human can actually hear? For example, can it store frequencies above what a human can hear (20 kHz)? Are there opportunities to make it more efficient, assuming that we only care about humans? Can a FLAC file store a sound that only a dog could hear? If so, maybe the bandwidth is even lower than 50KB/s.

Or going in the opposite direction - do our hearing neurons transmit data that the brain throws away? If so, then the bandwidth goes higher than 50KB/s.


Lossless encodings don't throw anything away that was present in the original source file; if you start with a recording chain that can store 200,000 samples per second from an adequate microphone, you can resolve 100KHz - epsilon frequencies. Typically, though, the source files that pass through lossless encoding have previously been limited to 22KHz.

Lossy encodings, on the other hand, all work on the basis of a model of perceptibility that throws away information deemed less perceptible.
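
To make the sample-rate point above concrete (Nyquist: a sample rate of fs can represent frequencies up to fs/2, so whether a lossless file can hold ultrasonic content depends purely on the sample rate it was stored at):

  for fs_hz in (44_100, 48_000, 96_000, 200_000):
      print(f"{fs_hz} Hz sampling -> up to {fs_hz/2:,.0f} Hz")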


You can hear frequencies much higher than normal by pressing the sound source against the cranium.


Follow up question I googled.

What is the resolution of our Eyes?

Ans: 576 Megapixels [1]..... Holy..... And here we are, trying to rely on a couple of cameras for autopilot.

[1] http://www.clarkvision.com/articles/eye-resolution.html


The resolution of the camera has nothing to do with our ability to understand the road. I can drive reasonably well with just a camera feed of the road, but the computer can't. So the bottleneck is in the image processing capability of the machine, not the quality of the camera.


Correct. You could follow a 1970s Atari green/black pixelated set of lines just as well. And optical illusions such as the Kanizsa Triangle demonstrate that even that much info is more than ample, since your brain only needs the tiniest of hints to work out shape and size in order to calculate direction.

https://www.illusionsindex.org/i/kanizsa-triangle


Your comparison to a simple driving game might be misleading. A two-color game is probably an easier challenge, not a harder one, because the game developer has reduced your input to almost pure information rather than throwing a lot of visual data at you to interpret.

In other words, it isn't that your eyes and brain are so good that you can still drive at an Atari level--it's that the Atari level is optimized for your eyes and brain.


I agree to an extent, but it is a complex comparison to make, so it can easily look that way.

Our brain creates a model of the surroundings based solely on memorized bits of the environment (allowing us to blink without the world going dark, and to track things in our peripheral views) plus prediction, which is something Atari exploits so we can convince ourselves the bars and dot have motion. We even temporarily convince ourselves that the blooping is the sound of impact.

A long way to say that our sensors don't send complete objects like an OOP function would. Instead, they transmit just enough info (hints, maybe) to refresh the mental model. Similar to how we can see 4 dots and mentally translate them into a square lying in some particular direction in 3D space, even without lines or actually being 3D.

The "bit" analogy breaks down when you consider this aspect. The data our brain works with isn't the same as the data a computer works with.


Yes, but keep in mind that this data is immediately and heavily compressed and encoded - the data sent into the brain from the retinal ganglion cells has been estimated to be on the order of tens of Mbit/sec from careful studies of rodent RGC statistics. https://doi.org/10.1016/j.cub.2006.05.056
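
A ballpark version of that estimate; the cell count and per-cell rate here are rough textbook figures, not numbers taken from the linked paper:

  ganglion_cells_per_eye = 1_000_000    # optic-nerve axons, order of magnitude
  bits_per_cell_per_s    = 10           # varies a lot by ganglion-cell type

  retina_to_brain_bps = ganglion_cells_per_eye * bits_per_cell_per_s
  print(f"~{retina_to_brain_bps/1e6:.0f} Mbit/s per eye")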


Not an expert, but this is also my understanding: The nerves are more like a neural network, not just wiring to the brain: https://heartbeat.fritz.ai/the-ancient-secrets-of-computer-v...


It’s not just resolution; the dynamic range (the “f-stops”) of the human visual system is vastly wider than that of even high-quality film and digital cameras, meaning that cameras are much less capable of handling detail across wide ranges of light and dark.


Though post-processing from neural nets seems to close this gap.

https://www.youtube.com/watch?v=bcZFQ3f26pA


> Ans: 576 Megapixels

I’m skeptical. Since the human eye has fewer than 150M sensor cells, and our sensors aren’t as good as 8- or 10-bit-resolution pixels, that answer is overestimating by at least 4x, and possibly a lot more. https://en.m.wikipedia.org/wiki/Photoreceptor_cell
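
A quick sanity check using the usual textbook photoreceptor counts (~120M rods and ~6M cones per eye):

  rods, cones = 120e6, 6e6
  photoreceptors = rods + cones                 # ~126 million sensor cells

  claimed_pixels = 576e6
  print(f"~{claimed_pixels/photoreceptors:.1f}x more claimed pixels than cells")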


He misconveyed the source. The claim was about the detail the eye can perceive in a given field of view, over time and by moving the eyes over it, not the amount of information the eye captures at any one time, which is vastly lower.

It's mostly irrelevant for cars TBH, since crashes are quick events.


That’s a simplification. The events leading up to a crash are important. The spatial understanding of the situation on the road, the behavior of other drivers and other factors all play a role in the seconds before a crash.


Yes, but there's a time-accuracy trade-off when you see things by scanning around. A good, wide camera trades remembered peripheral details for lower-resolution but temporally up-to-date details, which matter more in a crash, I'd expect.


I can tell the difference between two 24-bit colors that are 1 value off in 1 channel, and that's not counting that digital displays can't fully exploit the darkest or brightest ranges of our eyes' sensitivity. Have you ever seen an HDR display showing well-made HDR content? Spatial resolution is not so easy, since we have much higher resolution at the center of our FOV.


Sure, that is true, you can sometimes see the difference between two 8-bits-per-channel colors. But it takes both time and more than one pair of photoreceptive cells in your eyes to see that difference. There’s a tradeoff between color resolution and spatial resolution. Your spatial resolution at the sensitivity of seeing the difference in 8-bit colors is way lower than 150 megapixels.

BTW, 24 bit colors that are carefully done, paying proper attention to color space and gamma, have been specifically designed to be at the JND (just-noticeable difference) with a 1-bit change in value. If you can always and reliably see the difference between any pair of 24 bit colors that are only 1 bit different, then you have abnormally good color vision. It’s statistically unlikely for that to be true, but some people do have it.


> Ans: 576 Megapixels [1]..... Holy..... And here we are, trying to rely on couple of cameras for autopilot.

We only have high resolution at the point of gaze. Everything else is blur, so it's not really comparable with a camera.


Worse: we think we have high resolution outside of that point, but it's actually our brain making up the details it thinks are there. We don't even really have colour outside a narrow field, and all of that is obfuscated by blood vessels dropped in front of core parts of our visual field.


Resolution has very little to do with object detection and obstacle avoidance.

(not saying Musk isn't full of shit)


Something I posted earlier about spinal cords and high fidelity music:

https://news.ycombinator.com/item?id=18750902

Speaking of spines and copyright issues:

In K W Jeter's excellent dark cyberpunk novel "Noir", intellectual property theft is viewed as literally killing people by removing their livelihood, so copyright violators were punished by having their still-living spinal cords stripped out and made into high quality speaker cords in which their consciousness is preserved, usually presented to the copyright owner as a trophy.

"In the cables lacing up Alex Turbiner's stereo system, there was actual human cerebral tissue, the essential parts of the larcenous brains of those who'd thought it would be either fun or profitable to rip off an old, forgotten scribbler like him."

https://marzaat.wordpress.com/2018/01/27/noir/

>There’s a lot to like in the novel.

>My favorite section is the middle section where the origin of the asp-heads is detailed via McNihil’s pursuit of a small time book pirate and the preparation of the resulting trophy. The information economy did, in this future, largely come to place. As a result, intellectual property theft is viewed as literally killing people by removing their livelihood. Therefore, death is a fitting punishment. McNihil, in his point by point review of the origin of asp-heads, notes that even in the 20th Century there was the phrase: “There’s a hardware solution to intellectual property theft. It’s called a .357 magnum.”

>Actually it’s decided that death is too good and too quick for pirates.

>Their consciousness is preserved by having their neural network incorporated in various devices. (Turbiner likes to use stripped down spinal cords for speaker wire.)

>This sounds like a cyberpunk notion but, in other parts of the novel, Jeter takes a swipe at such hacker/information economy/internet cliches as information wanting to be free (McNihil destroys a nest of such net hippies) or the future economy being based on information. Villain Harrisch sneers at the notion stating that information can be distorted but atoms – and the wealth they represent – endure.

>Still, his novel is chock full of the high-tech, low-life that characterizes cyberpunk.

(I'd quote some more, but as a high-tech, low-life net hippie, I'm afraid of having my nest destroyed and getting my spine ripped out!)


I reject the premise that the nervous system has "bandwidth" in a sense comparable to digital communications. Yes, nerves fire in discrete action potentials, but every step of nervous transmission also involves a processing step. Let's not forget: a huge benefit of neural nets is dimensionality reduction, which is at once compression but also the extraction and abstraction of salient information. Does this represent the gain or loss of information? It's a basically meaningless question; the question is how does the system as a whole work, and how well?

Nor is it clear what the endpoint of a communication is. This is another issue. Does information get counted twice if it's used unconsciously by the brainstem and also rises into awareness to be used by the neocortex? The list of questions can go on.

This bandwidth thing is one of the questions I find frustrating, on par with people wondering if a simulated piece of brain has feelings (the answer is NO). Why is left as an exercise for the reader.


Shannon pretty clearly teaches us that bandwidth is a fundamental thing you can't circumvent by this sort of argument. The system might do other stuff on top (though a lot of communication in the spine is latency-critical and at least some of it is point-to-point), but ultimately this doesn't change that bandwidth is how fast you can throw bits, whatever processing you do on them along the way or whatever form they happen to take.

> This bandwidth thing is one of the questions I find frustrating, on par with people wondering if a simulated piece of brain has feelings (the answer is NO). Why is left as an exercise for the reader.

Now this is a hot take.


But doesn't Shannon rely on symbols and alphabets? What if the information is presymbolic?

Smolensky's PDP paper modeled presymbolic processing with the Harmonium, the first restricted Boltzmann machine.


No it doesn't rely on symbols.

You have a piece of copper and circuitry between your house and your ISP. Back in the day, there was no digital processing. It's just copper, amplifiers, filters, and mechanical switches. Why did dial-up internet never exceed (within a factor of 2) 30 kbps? Because that's the bandwidth of the channel. It's not that modem designers missed a more complicated, neuronal, way to encode the information.

It's really amazing that information and bandwidth can be as fundamental as temperature and mass. There's nothing symbolic about it in that lens. Symbols only come in because we know how to do important computations in the digital domain.

Analog television, FM radio, telegraph lines, telephone lines, smoke signals, the human vocal tract, all of them have bandwidths.
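
That ~30 kbps ceiling is basically the Shannon-Hartley limit, C = B * log2(1 + SNR), for a plain analog phone channel. A rough sketch, where the ~3.1 kHz bandwidth and ~35 dB SNR are ballpark assumptions rather than measurements:

  import math

  bandwidth_hz = 3100
  snr_db       = 35
  snr_linear   = 10 ** (snr_db / 10)

  capacity_bps = bandwidth_hz * math.log2(1 + snr_linear)
  print(f"~{capacity_bps/1e3:.0f} kbit/s")   # ~36 kbit/s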


I'm pretty sure the way analog degrees of freedom are mapped onto symbols is very important. In principle, if you have infinite SNR over a limited bandwidth, you can have an infinite rate of data transfer - e.g. if you can have infinitely fine voltage resolution (in reality limited by the thermal noise floor, but you can always increase transmit power). So in that sense the mapping between information and bandwidth depends on SNR.

From the wiki page on QAM: "Arbitrarily high spectral efficiencies can be achieved with QAM by setting a suitable constellation size, limited only by the noise level and linearity of the communications channel." [https://en.wikipedia.org/wiki/Quadrature_amplitude_modulatio...]
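
To make the SNR/constellation trade-off concrete, a small sketch of M-QAM spectral efficiency (the 1 Msymbol/s rate is just a made-up example):

  import math

  # Square M-QAM carries log2(M) bits per symbol, so at a fixed symbol rate
  # the bit rate grows with constellation size -- until channel noise makes
  # neighbouring constellation points indistinguishable (the SNR limit above).
  symbol_rate = 1e6
  for m in (4, 16, 64, 256, 1024):
      print(f"{m:>4}-QAM: {math.log2(m):.0f} bits/symbol "
            f"-> {symbol_rate * math.log2(m) / 1e6:.0f} Mbit/s")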


Unfortunately I used bandwidth informally (how most people use it), to mean something measured in bits per second, not Hertz.

You're absolutely right that SNR plays a role. But I don't see why you need to map to symbols.


Calculating Shannon entropy does rely on the symbolic alphabet and its probabilities of occurrence (see Wikipedia on Shannon entropy); however, I don't know how bandwidth is calculated. What makes you think it is as fundamental as temperature? One thing I know is that thermodynamic entropy has never been fully reconciled with information entropy -- unless someone has a ref otherwise.

Dialup internet did exceed 30kbps -- and that's because the bandwidth of the channel (the copper wire) was not the limiter. That's why DSL works and can in theory reach 1 Gbps (https://en.m.wikipedia.org/wiki/Digital_subscriber_line)

I believe it is the channel plus the receiving mechanism plus the sending mechanism plus the encoding mechanism that determine the bandwidth. I am not asserting this, but that is my understanding.


Dial up internet used an analog transfer domain to encode the information. DSL does not, and is an entirely different technology (and is not 'dial-up').

56kbps modems and the like relied on digital telephone lines. I'm not sure where the 30kbps number comes from though - earlier pre-digital modems did go faster than that. Although you could argue those were 'pre-internet' as well...


Wait, but that's my whole point -- it's the same channel, but higher bandwidth.


If you're defining the 'channel' as 'local copper pair', then, sure?

But the other end of the local copper pair switched and became digital, which changes the channel in my eyes at least. There was then a series of bandwidth increases in the digital realm, giving significantly more efficient and effective use of the channel.


It's not the same channel, because DSL requires equipment fairly close to the subscriber. You can't transmit DSL on telephone utility poles for miles, like you can dialup.



I don't particularly like that essay because the author seems focused on the idea that "your brain is a computer" is a metaphor rather than a theory (see [1] for a more nuanced discussion).

The author correctly points out that past eras developed metaphors to explain how the mind might work based on the technological innovations they were familiar with, but I think there's a lot more nuance in the computational theory of the mind.

Namely that the notion of computation is much more abstract, and potentially more portable across disciplines, than some of the historical examples that the author of that Aeon piece brings up.

Anyway, obviously I don't have any real answers but for whatever reason the brain-as-a-computer theory rings pretty true for me and I've enjoyed reading essays and watching talks about the topic [2] [3].

[1] https://medium.com/the-spike/how-to-find-out-if-your-brain-i...

[2] https://medium.com/the-spike/yes-the-brain-is-a-computer-11f...

[3] https://www.youtube.com/watch?v=lKQ0yaEJjok

^-- This is the first part in an ongoing series of lectures that Joscha Bach has been giving at the Chaos Communication Congress; if you watch it and find it interesting you should check out the subsequent installments.


I followed your links and ended up in this rather excellent essay:

https://medium.com/the-spike/your-cortex-contains-17-billion...

One of the final points the author makes is that the brain might be a neural network made up of as many as 89 million neural networks.

That's a staggering concept.

If true, I wonder how anyone stays sane with that level of entropy in the system!


The "neural network of neural networks" thing is a bit of lavish exaggeration TBH, because a network of networks is just a larger network. The human brain has about 100tn synapses, which I think is a less obscure statistic to marvel about.



From the page you linked:

> The better we could communicate on a mass scale, the more our species began to function like a single organism, with humanity’s collective knowledge tower as its brain and each individual human brain like a nerve or a muscle fiber in its body. With the era of mass communication upon us, the collective human organism—the Human Colossus—rose into existence.

I like thinking about us this way too, as being both individuals but at the same time also forming a sort of organism together.

What originally got me thinking about us this way was something that Sun Tzu wrote in his book The Art of War.

I think this might be the part of that book that made me think of it like this:

> The skillful tactician may be likened to the shuai-jan. Now the shuai-jan is a snake that is found in the ChUng mountains. Strike at its head, and you will be attacked by its tail; strike at its tail, and you will be attacked by its head; strike at its middle, and you will be attacked by head and tail both.

> Asked if an army can be made to imitate the shuai-jan, I should answer, Yes. For the men of Wu and the men of Yueh are enemies; yet if they are crossing a river in the same boat and are caught by a storm, they will come to each other's assistance just as the left hand helps the right.

http://classics.mit.edu/Tzu/artwar.html


> > With the era of mass communication upon us, the collective human organism . . . rose into existence.

> I like thinking about us this way too, as being both individuals but at the same time also forming a sorts of organism together.

You may be interested in Paul Stamets' thoughts on this, essentially, that the invention of the Internet was an evolutionary inevitability.

https://youtu.be/90vhfdj1zic?t=977


That is a very provocative article. Actually, I don't quite understand the a-ha moment of the article.

Where he says... "From this simple exercise, we can begin to build the framework of a metaphor-free theory of intelligent human behaviour"

Um, why?

I don't get why the dollar bill example somehow means that we don't store memories in our neurons?

What am I missing?

Thanks in advance for helping me grok this!



Ah yes, I'm glad it's not just me. Thanks for posting, very helpful.


Great piece. Thanks for posting!


The spinal cord itself is pretty limited in terms of processing power; just reflexes as far as I know. So the comparison to something basic like coax or fiber-optic cable is fair.

The piece talks about transmitting 4k movies over nerve fibers. I take that to mean: if you removed the nerve from the body and used it purely as an OSI layer 1 physical medium. There is no neural net. There is no processing.


If you could perfectly simulate a brain down to every atom, why wouldn't it have feelings? It would be indistinguishable from the real thing...


As a thought experiment that might point you toward why some people say not: if you make an atom-perfect simulation of two 1kg spheres orbiting each other in an empty universe, would that produce any gravitation?


If it was well simulated, yes it would produce simulated gravity. Why should there be any expectation that the simulated reality affect the non simulated reality? If you start down that path then you might conclude that no other person has feelings if it doesn't affect you.


And the simulation would produce simulated feelings, but that doesn't mean there would actually be qualia.


By "actually" are you again inferring the requirement for cross over between realities? If the simulated brain was experiencing something, then that "something" is an experience in the frame of the simulation. Or perhaps your argument is that, because WE dont identify the simulated brain as a person its experience is irrelevant?


By asserting that the simulated brain is "experiencing" something, you're assuming it does have qualia.

We actually have no idea how a bunch of atoms interacting create qualia, or even whether they do. There's no math to tell us that a configuration of atoms makes qualia, or what qualia it makes. We have no way of distinguishing between a conscious being who experiences, and a philosophical zombie with the same behavior but no internal experience.

Therefore we can't know whether a simulation of atoms actually does capture what's necessary for qualia.


If we are going to doubt something that looks and acts like it has qualia, then the same is true for any other human.


Sure, I only really know that I have qualia. It seems reasonable to assume that something which looks like me has qualia like me. But if it's a simulation, it doesn't actually look like me at all.

It does have a sort of abstract mapping to something that looks like me, but since I have no idea what produces qualia, I have no way to know whether something essential is lost in that mapping.


Even though the analogy is not really valid, the answer is probably yes, because simulation (computation) requires energy.


I don’t see how this analogy holds. Gravitation is a physical effect, of course it can’t be produced by software; feelings are not.


I don't understand your rebuttal. Feelings are not a physical effect?


"If"


Your issue makes sense when talking about intra-cortical communication between neurons. It is a reasonable thing to speculate about, given the right definition, for signal transmission in the peripheral nervous system -- i.e. what is the bandwidth of the nerve bundle that transmits the signals received by the motor neurons that stimulate your hand muscles to contract?


Here is a rule:

>Please don't submit comments saying that HN is turning into Reddit. It's a semi-noob illusion, as old as the hills.

Allow me to point out a few characteristics of this post:

* A submission to reddit

* No actual scientific or academic details supporting his statements, just some other known quantitative data about the spinal cord. OP even characterizes most of his statements as "gross assumptions"

* This post to HN has been upvoted.

Here is a scientifically reasonable perspective on this subject:

https://www.reddit.com/r/askscience/comments/7l56sb/how_much...


I laughed at this quote (tl;dr) from the article: "about a 4K movie every two seconds".

Hope that doesn't qualify as a spoiler.


This is an excessive overestimate, because it doesn't carefully consider the known types of afferent (incoming) fibers and their average firing statistics (most neurons fire sparsely). See the comment on that thread by "JohnShaft", who noodles out a better estimate of around 200MB/s (incoming) by taking anatomy into account. The motor efferent signals are going to be a lot lower bandwidth still...


If his answer is on the right order of magnitude, it's kinda interesting that our signalling technology is getting up there in bandwidth.


Latency is quite high though as far as I know.


But I read somewhere that the brain is able to adjust heartbeat and breathing at a very fine granularity, keeping them in sync requires good latency, and all this is done without concurrency locks.

So does the brain have a massively parallel architecture where key things have their own core?


Yup! Some interesting demonstrations of this are in split-brain experiments - see https://faculty.washington.edu/chudler/split.html

However, when it comes to peripherals like heart and lung, the distribution goes beyond just the brain - the heart, for example, has a built-in pacemaker called the sino-atrial node [1], which produces a periodic signal with high precision; the brain just sends it signals to control the period of its oscillations.

Similarly, a number of reflexes which involve responses to physical danger (withdrawal from burns, knee-tapping) aren't actually handled by the brain; the spinal cord sends the reflex commands as soon as the sensory information hits it, to avoid the round trip all the way to the brain.

[1] https://en.wikipedia.org/wiki/Sinoatrial_node


>> withdrawal from burns

Not sure that’s true though (I’ve heard about this before). I can definitely consciously burn myself.

On the other hand I can’t stop myself from moving my foot when it’s tapped.


>> Not sure that’s that true though (I’ve heard about thus before). I can definitely consciously burn myself.

That sounds like consciously overriding the reflex[0] withdrawal though, as opposed to accidentally touching something hot, and withdrawing reflexively.

Adding anecdata: I've definitely touched something hot and withdrawn my finger/hand/whatever quickly enough to avoid getting burned when I by all rights should have. You know, something hot enough that anything but incidental contact would result in a burn.

[0] Reflex is probably not the right word. I can't consciously override the kneecap/kick reflex, for instance.


Last time I was tested for the kneecap reflex, nothing happened. The doctor told me to take my hands and hook the fingers of one around the other and pull them against each other, and then the reflex appeared.

I don't know what that has to do with consciousness, since I didn't decide to suppress anything.


if you weren't expecting to be burned you'd withdraw your hand automatically


In biology every single protein (or other biochemical entity) can be thought of as its own core. It's parallel in an amazing and beautiful way. Or the most boring way possible. It just depends on whether you're a computer scientist or a biologist how you see it.


There are to this day computer science researchers having conniptions about the impossibility of massively parallel lock-free architectures based on eventual consistency. Yet literally all they must do is open their eyes and see the truth!


It helps that our brains and bodies are analog. It doesn't matter if a particular variable is changed by different factors at the same time. The total magnitude of all the changes is far more important. Thus, lock-free.


Kind of like CRDTs (conflict-free replicated data types, for the biologists amongst us).


Proteins? Every particle in every atom is solving exponential-time problems constantly, in parallel. That's the quantum chemistry perspective. :)


physical reality seems fairly good at optimising for local energy-minimal solutions

another stupid perspective: think of how many clever algorithms and how much computational hardware would be necessary to simulate a small rock from a QM level up. now, what if there was a bespoke piece of super-custom application-specific technology that could efficiently simulate that small rock? there is: it's the rock itself.


[flagged]


Yeah, it's really rocking my brain.

More (or perhaps less) seriously, this train of thought has some pretty bizarre and mind-hurty implications for the whole "we're living in a simulation" hypothesis.


We're living in a simulation designed by aliens who are very vindictive about proving that their computers are better than ours.


The brain has different parts and many layers that largely work independently in a functional programming kind of way.


The human brain is about as far from functional programming as you can get. Everything is global, referentially opaque, and full of side effects.


The short answer is yes. I assume you've noticed yourself walking and talking from time to time; or standing up while looking around.


Speaking of latency, one of the most bizarre experiences I can think of is when (on a certain *cough* substance whose name is a type of broadband spelt backwards) one's reaction time doubles to about 500-600 ms and you try to do a reaction test. Really fascinating but also hilarious to watch someone else try.


Interesting, do you have a source for that? I'd love to read it. I'm interested because my experience with that differs strongly (though it's with shrooms, not LSD): while tripping on low doses I was a god* at guitar hero, significantly better than I was at playing sober, the same was true for another friend of mine as well.

Maybe the task being rhythm-related offsets the increased delay.

*sober I mastered Hard but had some trouble on the harder songs on expert, under the influence was the first time I managed to do several of them in expert.


I don't have a source other than a ... friend's experience (wink wink, "which means I am lying"). It's the only area of medicine I'd do research in; psychedelics seem to have dramatically different effects across different people while still having some constants.

I could type 10 WPM faster than I usually could, but my companion couldn't get above 15 WPM (he's usually bad, but not that bad). Once you get going it seems to be manageable, as if you have thought-momentum, i.e. maybe it's a movement issue rather than literally reaction time directly.

> Maybe the task being rhythm-related offsets the increased delay.

I think that may have an effect. I can play actual guitar fairly well (I can still play pretty fast Yngwie stuff) under the influence, but only once I've "warmed up", and then I'm effectively just reciting motion from memory rather than reacting to anything.

There are papers/research on this but they're all behind paywalls, which I can't be bothered to get around rn.


The feeling of playing was very different; it felt like my hands were playing the controller by themselves without any real input or reactions on part of "me".


You aren't really "sending data" though. Each one of the neurons just ends up triggering a circuit. And some of the firing signals are actually inhibitory signals to stop other neurons from firing. Not to mention all of the chemical signaling going on and the different types of patterns/behaviors that get activated through repetitive or limited activity.

So really, calculating bandwidth is kind of pointless. One, the neurons use more dimensions than just on/off electrical signals as their state, and two, neurons don't really pass information along when you think about it; they just trigger circuitry.


I didn't read through everything, but it seems people are assuming some sort of 'normal' spinal cord without specifying what that is - so in this bandwidth analogy they have for spinal cords, what is the effect of age, disease, and injury on it?


I think Google used to ask this question to candidates.


Solid back of the envelope job.


- Modem - Cable - Fiber

Maybe in the future we'll have... Spinal Tap


This is a place for serious discussion. No humor or attempts thereof will be tolerated as evidenced by the steadily graying color of your comment text.



