Hacker News
How Computationally Complex Is a Single Neuron? (quantamagazine.org)
188 points by theafh on Sept 2, 2021 | 118 comments



Sorry, but this is the dumbest thing I've read on the entire internet today. So basically what they did was train a deep neural net to predict the behaviour of a simulated rat neuron. Then they claimed that the number of layers and units in the deep neural net is somehow a measure of the complexity of a "biological neuron".

First of all, they didn't try to model the behaviour of a biological neuron, only of a model of a biological neuron, and an incomplete model at that. The article points that out itself.

More importantly, the claim rests on the assumption that a deep neural net architecture is somehow a sensible measure of the complexity of a completely unrelated process (as far as I can tell, deep neural nets are not in any way related to simulated rat neurons). But this is a big, huge, gigantic assumption.

Suppose I train a bunch of humans to predict the behaviour of a simulated rat neuron. Suppose I find out that I need to train at least 1000 humans to accurately predict the simulated rat neuron's behaviour. What does that mean? That 1000 human brains are as complex as a simulated rat neuron?

Come to that - can a single human brain predict the behaviour of a simulated rat neuron? I'm prepared to wager that, no, it can't, because human brains are rubbish at prediction tasks like that. What does that mean? That a simulated rat neuron is more complex than a human with an entire human brain?

The entire premise of the work is absurd nonsense and it probably only got a mention in Quanta because it's something released by DeepMind. Irritating.


Science is in large part also about storytelling, especially in some areas. This is a good story, even if the details make it much less spectacular. They also produced very nice visualisations, which blew up on Twitter. All of these things are something to learn from as a scientist.


The lesson is that while people say they make data-driven decisions, scientists, like everybody else, just go for the narrative with the prettiest picture that also looks like what people already think is true.


How computationally complex is a transistor?

I mean, all the ways that even a micro transistor interacts with voltages and heat make it potentially more complex than an AND/OR gate. But that's often what a given transistor is used for.

I'm not saying all the complexity of a neuron doesn't matter, or that a neuron isn't much more complex than a piece of a physical chip - just that treating parts in isolation, if you don't understand the system, can very easily lead to wrong conclusions.


Another part of the consideration is that transistors are abstracted away into functions like logic gates partially because it makes it easier for the designers. Neurons on the other hand can likely take advantage of all sorts of complex interactions and behaviors as the "maximum local complexity" is not an issue.


They certainly do take advantage of complex interactions. The perfect example of that is ephaptic coupling, where neurons interact with each other through the local electrical fields they induce -- and stuff like this is lost in modelling a single neuron: https://en.wikipedia.org/wiki/Ephaptic_coupling


I remember reading about an experiment that used genetic algorithms to evolve FPGA layouts to create circuits that achieved some task while minimizing transistor count. I think it might have been NASA trying to develop smaller, more sensitive radios, but I'm not sure. IIRC, they found that they could evolve novel circuits that could perform the task with far fewer transistors than traditional methods. But they couldn't figure out how to transfer the design to an IC outside of the FPGA, as the circuits would stop working. There were also disconnected sections of the FPGA that the GA had evolved that didn't appear to contribute to the output, but couldn't be removed without again breaking the functionality. Obviously, there was some kind of crosstalk going on that the GA could exploit on its way to a solution, but the result was extremely brittle to change.

EDIT: I was completely wrong about the source and application. Can't believe I was able to find it, because I was way off. This was it: https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.50....

Other reading indicates that this technique has been successfully applied in electronics design in the time since 1996. https://en.m.wikipedia.org/wiki/Evolvable_hardware

The NASA thing I was thinking of was this: https://en.m.wikipedia.org/wiki/Evolved_antenna
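
For readers who haven't seen one, the evolve-evaluate-select loop these experiments run can be sketched in toy form. This is not Thompson's actual setup: a bit-counting fitness function stands in for measuring a real circuit's behaviour, and all the parameters here are made up for illustration.

```python
import random

random.seed(0)  # deterministic toy run

GENOME_LEN = 64       # stand-in for an FPGA configuration bitstring
POP_SIZE = 50
MUTATION_RATE = 0.02  # per-bit flip probability

def fitness(genome):
    # Stand-in objective: count of 1-bits. A hardware-evolution
    # experiment would instead score the circuit's measured output.
    return sum(genome)

def mutate(genome):
    return [b ^ 1 if random.random() < MUTATION_RATE else b for b in genome]

def crossover(a, b):
    # Single-point crossover of two parent genomes.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GENOME_LEN:
        break  # perfect genome found
    parents = population[:POP_SIZE // 2]  # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best fitness:", max(fitness(g) for g in population))
```

The interesting part of the FPGA story is exactly what this toy hides: the fitness function only scores observable behaviour, so evolution is free to exploit any physical quirk of the substrate that helps.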


Adrian went on to evolve circuits with the IC in a freezer, on the bench, and on a hot plate to try to avoid temperature-dependent effects. The GA didn’t know it was supposed to be temperature invariant, or that it was supposed to use the gates as digital devices. Often, the evolved FPGA program would not work on another chip of the same device, because it overfit to the exact chip(s) it evolved on. I was in this group under Phil Husbands at Sussex as a youngster and it was the most stimulating intellectual environment. We had GAs optimize recurrent NNs for robot control, and then had to figure out how they worked, on SPARC stations in 1994. But this ‘nouveau AI’ suffered the AI winter along with the GOFAI we rejected, partially because of the interesting difficulties this team discovered. It was cool seeing these ideas revived 25 years later, with a million times more compute.


Thank you so much for finding this, I have been trying to find it again for over a decade! I was just telling someone about the study a couple days ago.


> I remember reading about an experiment that used genetic algorithms to evolve FPGA layouts

You have seen today's xkcd, right?

(2510 - for you people from the future...)


A single neuron has hundreds to hundreds of thousands of synapses and at each synapse there is a number of receptors and channels on the receptor end and neurotransmitter molecules and vesicles on the transmitter end. Synapses are far from the nucleus so there are reservoirs of mRNA hanging out waiting to be transcribed that needed to be staged there. There's epigenetic state affecting how much and what type of mRNA is produced. All of these are continuously affected by the combination inputs. How much of this needs to be included in the model? I'd like to see them comment on which parts of the original system they consider out of scope.


That's not even mentioning the fact that most receptors are quaternary structures in which the subunits can be switched out. This allows a single receptor to have many variations which means many variations on receptor behavior. Pardon my language, but the complexity is fucking mind boggling. It's beautiful.


But how much of that complexity is "observable"?


You mean what is absolutely necessary?


Imagine you have a 2-input NAND gate, but for some reason it is implemented with 1000 transistors (perhaps for redundancy in case one of the transistors gets hit by a cosmic ray, or perhaps for other reasons). That gate still behaves the same, so for all an (external) observer could measure, it is a NAND-gate, which is a simple device. Internal complexity does not always mean external (observable) complexity.
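
The point about internal vs. observable complexity can be made concrete with a toy sketch. The triple-redundant implementation below is hypothetical, just one of the "other reasons" one could imagine:

```python
from itertools import product

def nand(a, b):
    # The "simple device": one logical NAND.
    return 0 if (a and b) else 1

def redundant_nand(a, b):
    # Hypothetical redundant implementation: three copies of the
    # computation with majority voting, as one might build to survive
    # a cosmic-ray upset in any single copy.
    votes = [nand(a, b) for _ in range(3)]
    return 1 if sum(votes) >= 2 else 0

# To an external observer probing only inputs and outputs, the two
# implementations are indistinguishable: identical truth tables.
for a, b in product([0, 1], repeat=2):
    assert nand(a, b) == redundant_nand(a, b)
```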


> or perhaps for other reasons

This is exactly where complexity hides. Simplicity of models relies on abstractions, which in the real world are invariably leaky. The complexity of making a robust NAND gate is very much observable at some level, and only goes away once you ignore the messy details. The more we look, the more this seems to hold for pretty much everything in our observable universe, from galaxies to quarks. The more you dig the more worms you find. There are thousands of sub-fields of molecular biology which try to understand how a single cell actually works, and we still are not done by a large margin. Of course we will always ignore what we can to make workable human models that we can actually reason about.


But does the complexity actually matter for the end result? Only in some systems.


I would argue that it does matter for the brain. The large number of variations on the large number of different types of receptors means a great amount of variation in adaptability of the neural circuits to a great number of edge cases. But it also means there's a lot of possibility for maladaptation, such as with some presentations of mental and non-mental illnesses. Neural circuits can "remember" firing patterns through some of the varying adaptations, and not all circuit memories have the same function or the same effect.

The parent comment about varying transistor combinations was not quite correct in my opinion, as these variations in receptor makeup DO change how the neuron and circuits respond to stimuli.


This makes sense to me. It's like we're peering into a portion of the main logic in a function with one frozen global state and ignoring the idea that there are zillion global variables that can alter that logic.


Needless complexity has costs associated with building it and maintaining/running it, so I'd expect in the majority of cases it would be selected against strongly enough to disappear over time. Which implies the majority of complex systems are complex for a reason, because if a cheaper less complicated equivalent was equally good then that would win out.


Biological matter can't exactly opt out of being made of jiggly proteins immersed in water. And nerves can't opt out of the million things a cell needs to do to maintain itself. That's the kind of thing that adds immense complexity whether it's useful or not.


> Internal complexity does not always mean external (observable) complexity.

Yet you mention observable reasons at the beginning, before abstracting it right past spherical cows on frictionless planes to its purely mathematical concept.

Especially with attacks like row hammer one could argue that redundancy or the lack thereof has a significant observable impact on how modern systems behave.


I was first introduced to the brain as primarily an electrical entity, but a few years back I saw a fantastic talk about how there's an entire biological substrate of computation that takes place in genetic cell signaling, which electrical recordings and activity don't even capture.

it was fascinating. :)


Curious if that talk is public/if you have a link to it? I'd be interested


https://www.youtube.com/watch?v=_kac0_DDVw0&list=PLBHioGD0U1...

one of my favorite talks at a really fun conference.


There's also evidence of mRNA transmission (packaged into capsids) between neurons!


> waiting to be transcribed

*translated


Relevant read for the curious: "The thermodynamic efficiency of computations made in cells across the range of life" https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5686401/


Looks like what the Quanta article refers to is not so much "computations made in cells" as "how computationally complex a faithful model of a neuron would be."


Based on the Quanta article, they're only measuring complexity in terms of how complex a neural net needs to be to faithfully learn a neuron's output (presumably by SGD). That's probably only a weak upper bound on the complexity, though. There may be more parsimonious ways to simulate the neuron which are expensive to represent in a neural net, or which are hard for SGD to learn.


It's amazing to live in a time where we have the technology to investigate all the incredibly well architected systems in molecular biology. The researchers must be having so much fun, I feel happy for them.


- The comparison here is between "A system of thousands of nonlinear differential equations running in parallel" (the Hodgkin-Huxley-based modeling that is used to simulate the voltages of the biologically detailed model) and "A typical multi-layer neural network". One could probably calculate the computational complexity of the first and the second and assess how much it compressed the original function

- The goal of artificial neural networks is not generally to simulate with 100% accuracy the firing of real neurons, but to perform their function. The results from ANN applications so far show impressive abilities that match equivalent (but specific) functions of hundreds of thousands of real brain neurons


A single neuron has somewhere on the order of 5x10^16 atoms in it. There's no rule that says it couldn't potentially use a large fraction of those atoms for its functionality. That's your upper bound on computational complexity.


> There's no rule that says it couldn't potentially use a large fraction of those atoms for its functionality

There is no question that a neuron uses a large fraction of its atoms for its functionality, as does every other cell. What I think you're postulating is that these atoms form a functionally uber-complex network with mind-boggling combinatorics - but that position does not align well with current science. Single neurons are not that smart. For starters, they don't have the I/O address space or bandwidth to be, nor the energy budget to behave like a block of Computronium.

Neurons have not been opaque black boxes to us for a long time now. We understand a lot of the biochemical processes going on in there. We may not have a complete picture in many areas regarding many neuron types, but there is not enough unexplored space in there to allow for a cell-sized quantum supercomputer or anything like that.

The upper bound on a neuron's computational complexity is dictated by the intricacies of its protein machinery, which is many many orders of magnitude lower than the number of atoms making up the whole.


While that's true, you need to consider that neurons don't act alone in a real system. Not only must one simulate the dozens of types of ligands and receptors in a typical neuron, but also how those ligands are modulated not only by neighbors, but also at a macro scale (for example, if an organism increases serotonin levels due to some external stimulus, that will have significant effects across the nervous system, even in parts that ostensibly have nothing to do with responding to that stimulus).

So given ~20 types of receptors, ~8 response types ranging from inverse agonist to full agonist, and conservatively, 20 relevant concentrations per ligand, we have over 3000 states dictated both by macro and local conditions.
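
Taking those numbers at face value as independent dimensions, the arithmetic behind "over 3000 states" is just the product:

```python
receptor_types = 20     # the commenter's estimate
response_types = 8      # inverse agonist through full agonist
concentrations = 20     # distinguishable ligand levels, conservatively

states = receptor_types * response_types * concentrations
print(states)  # 3200, i.e. "over 3000 states"
```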


Forgive me if I misunderstood your intent, but this feels a little like moving the goal posts on me. In my comment, I was addressing the specific claim that we know little enough about neuronal complexity so the number of atoms in a cell could be seen as a reasonable upper bound on the complexity of the neuron. When you say I "need to consider" the complexities of response modulation and the combinatorics of the entire neuron population acting together, I disagree (purely in the context of the original claim, because that was specifically about the capabilities of a single neuron).


I don't see much discussion of quantum mechanics as being leveraged inside of neurons.

But I'm just not convinced chemistry is fast enough for thought to happen any other way than quantum.

Life has been far ahead of human state of the art for a long time.

Often, we don't even know what to look for until we learn basic physics ourselves.

For example, barely 100 years ago, we figured out that light is energy delivered in photon packets. Only then could we understand processes like photosynthesis - something life figured out billions of years ago.

As we learn more about quantum mechanics, we discover that photosynthesis seems to harness quantum effects to improve conversion efficiency.

I'm convinced the brain is using advanced physics we don't fully understand yet.

Yet, that the brain and neurons are quantum is something I haven't seen much of in literature.

Once we figure out room temperature quantum computing, I wouldn't be surprised if we find that the brain has been doing it all along.

We just don't know what to look for, until we learn enough about physics ourselves.

Disclaimer: I believe there is an intelligent creator. (https://www.jw.org/en/bible-teachings/science/)


Chemistry is extremely fast, and it's all based on quantum mechanics. Each atom/molecule is colliding at least billions of times per second at standard pressure and human body temperatures. Example calculation: [0]

[0] https://socratic.org/questions/calculate-the-number-of-colli...
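
A rough order-of-magnitude check of the "billions of collisions per second" claim, applying gas-phase kinetic theory crudely to liquid water. The diameter and number-density figures below are textbook-style assumptions, not taken from the linked calculation:

```python
import math

k_B = 1.380649e-23    # Boltzmann constant, J/K
T = 310.0             # K, roughly body temperature
m = 18 * 1.66054e-27  # kg, mass of one H2O molecule
d = 2.75e-10          # m, assumed effective molecular diameter
n = 3.34e28           # molecules per m^3 in liquid water

# Mean molecular speed from the Maxwell-Boltzmann distribution.
v_mean = math.sqrt(8 * k_B * T / (math.pi * m))   # ~600 m/s

# Collision frequency: z = sqrt(2) * pi * d^2 * v_mean * n
z = math.sqrt(2) * math.pi * d**2 * v_mean * n

print(f"{z:.1e} collisions/s")
```

The gas-phase formula is only a crude fit for a liquid, but even this estimate lands in the trillions per second, well above "billions".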


Doesn't every interaction between particles involve quantum mechanics? Meaning every chemical reaction? Even simple things like heating water on your stove is a quantum process. Thought isn't something special here. Unless I completely misunderstand quantum mechanics.

If you are proposing that quantum processes aren't really random, that somehow human thought affects how they work, I doubt we have any evidence of that one way or the other. I would call that a fringe theory, but I can't say you are wrong.

I've recently been seeing more about a theory that consciousness is inherent in matter, and that particles choose which quantum path they take. This is way beyond anything we currently know about physics though.


Not fast enough? Just look at a battery: when you complete a circuit, see how fast it begins to output current. Chemistry can be very fast.


> But I'm just not convinced chemistry is fast enough for thought to happen any other way than quantum.

How fast do you think it needs to be? I know my reaction time to an unexpected external stimulus is up in the 200ms+ range, and sometimes even basic things can take multiple seconds to think through with my conscious mind. This is an enormous amount of time, chemically speaking.
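
Putting rough numbers on "an enormous amount of time chemically speaking" (the per-event timescales below are order-of-magnitude assumptions, not measurements):

```python
reaction_time = 0.2      # s, ~200 ms human reaction time
synaptic_delay = 0.5e-3  # s, typical chemical synaptic delay (assumed)
enzyme_turnover = 1e-6   # s, ~1 microsecond per event for a fast enzyme (assumed)

# How many sequential chemical events fit inside one human reaction?
print(reaction_time / synaptic_delay)   # ~400 synaptic delays
print(reaction_time / enzyme_turnover)  # ~200,000 catalytic events
```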


I mean, there are some rules, actually. The structure of a neuron is not a black box about which we know nothing. There will be many atoms used to construct the cell membrane, for example.

Your analogy is like saying my computer has X number of atoms, so they might all be used for computation! Instead, we know that, for example, the case is not used for computation. The power supply is not used for computation, etc.


That's not an informative upper bound. We already know most of those atoms are not contributing to it's computational complexity.


How do we know this?


Read the article. They accurately modeled the biological neuron's behavior with about 1000 'neurons'. They didn't need quintillions of neurons. So clearly the computational complexity is vastly lower than the number of atoms or molecules.


I believe they modeled the behavior of a model of a neuron, not an actual neuron. Models are intrinsically simplifications of reality.


But that model represented the input/output behavior of the neuron. Of course it doesn't represent the behavior of every single atom of the neuron itself, but that isn't relevant to its computational behavior.


why is that known?


its


Yes, and we don't know how much of that possible computational complexity is required for producing our consciousness.

However, if we are modeling particular functions e.g. motor control of the limbs, we can black box a lot of that complexity.


I'd be interested to learn how many biological neurons it takes to approximate one artificial neuron.

Mathematically, over some finite interval, approximating y=sin(x) using a Taylor series, or approximating y=x**n using a Fourier series, can both require a large number of terms. But this doesn't imply that a sine/cosine function is more computationally complex than a monomial, or vice versa.
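
The Taylor-series half of that point is easy to check numerically. A sketch (the tolerance and interval choices are arbitrary) counting how many Maclaurin terms of sin(x) are needed to stay within a fixed error over wider and wider intervals:

```python
import math

def taylor_sin(x, n_terms):
    # Maclaurin series: sin(x) = sum of (-1)^k x^(2k+1) / (2k+1)!
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(n_terms))

def terms_needed(interval_end, tol=1e-6, samples=200):
    # Smallest number of series terms whose max error on [0, end] is < tol.
    xs = [interval_end * i / samples for i in range(samples + 1)]
    for n in range(1, 50):
        err = max(abs(taylor_sin(x, n) - math.sin(x)) for x in xs)
        if err < tol:
            return n
    return None

for end in (math.pi, 2 * math.pi, 4 * math.pi):
    print(f"[0, {end:.2f}]: {terms_needed(end)} terms")
```

The term count grows as the interval widens even though sin itself stays "simple", which is the sense in which approximation cost says little about intrinsic complexity.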


The article asserts that on average they need ~1000 "neurons" to approximate one biological neuron.

In the opposite direction, assuming we had manipulation techniques to create new neurons, position them and stretch out axons arbitrarily, and to control synapse linking strength (which we can't now), then IMHO there could be a 1-to-1 correspondence; the same octopus neurons that the McCulloch-Pitts model tried to describe could also each approximate one artificial neuron for inference. Of course, learning is a different issue; brains don't exactly do back-propagation and gradient descent.


Is there any correlation between a perceptron and a neuron? When I first encountered the concept of a perceptron, I was told Minsky and Papert modeled the perceptron on the biology of a neuron. Is this correct? If so, how has this initial model stood up as we learn more about the physical structure of the neuron?


The answer is 1.

You only need to put all your synapses at the soma and voila


It seems to be fairly complex. I hope we don't go down the path of building biological-artificial computers though.


Why not? As someone in the field, I hope that we do.


Surely you recognize the plethora of ethical concerns behind this? Certainly there is much to be learned in terms of energy efficiency of computation but I'm undecided as to whether we should go down this path. If we do, I believe it should be regulated from the get-go or else it will be too late by the time the biological substrates reach animal-analogous intelligence. Perhaps if done for classical computing research without trialing machine learning we can avoid asking ourselves at what point the biological machine has sentience and deserves agency?


I'm sorry, but I fail to see the ethical dilemma of creating an artificial biological computer.


I think it's more likely that it would be ethical than unethical. Sentience is "about" the functions necessary for animal life. These functions are a huge waste from a computational perspective. A good bio-computer would be comparable to a fungus, or head cheese[0].

[0] http://www.technovelgy.com/ct/content.asp?Bnum=687


I see your point and agree that it would be extraneous to model awareness for a biomachine whose purpose is efficiency. But what of the inevitable research that will expressly attempt to create AGI using said unaware computation paste?


Yeah, that's a concern to a degree. But I'll justify why I'm not really worried about it in two ways.

1) The computations we want to do may still be expressed better without consciousness. Think Chinese Room thought experiment, or philosophical zombies. In other words, we may be able to make something that can understand and execute complex commands, perceive and react to its environment, and communicate information — without having feelings, desires, or consciousness per se.

2) I'm less worried about creating new things to be unethical to than I am about the unethical treatment of currently sentient beings.


> In other words, we may be able to make something that can understand and execute complex commands, perceive and react to its environment, and communicate information — without having feelings, desires, or consciousness per se.

> 2) I'm less worried about creating new things to be unethical to than I am about the dis-ethical treatment of currently sentient beings.

Both of these relate to one of my major concerns:

We don’t yet have a testable definition of “feelings, desires, or consciousness” [0] such that we could even tell if an AI did or did not have this characteristic.

[0] and any of the other various words frequently thrown around as attempts to create such a definition. Every time I make this claim I get responses along the lines of “yes we do: $foo!”, but in my experience so far, all of these turn out to be one of (1) circular definitions, or (2) things humans don’t have, or (3) things which plants and the enemy characters in Wolfenstein 3D do have.


I'd define consciousness as something like "subjective experience". You have consciousness if you can experience the experiencing of things.

I don't think that's a circular definition. I think humans have it. I don't think plants or video game characters have it.

However: it is not testable, and it's not obvious that it could be. If someone claims to have subjective experiences, I don't think there's any way to know whether they're conscious or whether they're just an automaton acting conscious.

If consciousness is just being able to experience experiences, then it's a function that doesn't have any outputs; how could you possibly test for it?

And this applies equally to humans, cf. solipsism. It could also apply to all matter in the universe; it's just that most of that matter has no computational capability, and no way to express itself even if it did.


> I'd define consciousness as something like "subjective experience". You have consciousness if you can experience the experiencing of things

> However: it is not testable, and it's not obvious that it could be.

If it isn’t testable, do you really have a definition rather than a tautology? What does it mean to have “subjective experience”? One instance of a game AI is subject to different inputs than another instance of the same AI, which you reject (as would I: I listed it as an example of how bad the definitions have been and definitely not as a claim they’re sentient).


It is perfectly consistent for something to be well-defined and also untestable.

For example, you could ask me any question about the contents of my mind. Let's say you ask me if my favourite colour is green, and I say it is. You don't know whether my favourite colour really is green or if I'm lying. The fact that you have no way to ascertain my true favourite colour doesn't mean favourite colours are ill-defined! You just can't know what anyone else's favourite colour is.

(Not that I'm claiming my "subjective experience" definition is well-defined -- I accept that it's pretty vague as definitions go).


> (1) circular definitions, or (2) things humans don’t have, or (3) things which plants and the enemy characters in Wolfenstein 3D do have.

I'd be curious to read more about these 3 arguments and their flaws.


Perhaps this sort of research would further our understanding of what sentience really is.


> I believe it should be regulated from the get-go or else it will be too late by the time the biological substrates reach animal-analogous intelligence.

I assume you are also a vegan? I would rather put my energy into fighting animal meat and animal husbandry today than into worrying about a level of technology we probably won't reach for at least a few decades, if not a few centuries.


My words here are not exclusively stemming from my thoughts on animal suffering, but also from wishing we could have preempted the current wave of ML-powered surveillance had we known where it would now be two decades ago and enacted sufficient regulations.


So perhaps we should regulate, rather than hope we never do something that could progress our understanding


My reticence comes from not being fully confident in our ability to regulate technologies with an essentially luddite governmental body when it comes to tech they do not wish to fully understand. Perhaps if we change how they learn about new developments we can expect them to make informed decisions regarding something like organic AI.


Perhaps we should have better educated regulators then.


Perhaps with the proper source code, sentience can already be created with existing hardware? I don't think regulation of hardware is a solution to what is fundamentally a software problem.


We already have ethical models for how we treat animals. It seems logical to apply the same models to artificial intelligences if they are able to suffer the same way animals are.


Do you believe that using a biological computer for research is ethically any different than an animal analogous intelligence in silicon?


I think we will have to navigate similar ethical straits in the development of both organic and inorganic sentience.



It was tried in the 1980s using nerve tissue from the Permian Basin Superorganism (PBSO). The computers actually made it to market, and were substantially faster than silicon on the market at the time. However, the cost and unreliability of maintaining life support for the biological components made them impractical for anyone except a few dedicated customers. By now, Moore's Law has well outrun what was capable with those early biocomputers, and since 2007, doing research with tissue from the PBSO has become substantially more difficult.

You can see a scan of an ad for the system here: https://bit.ly/3gYXs3v


That's not a real system. It appears to be a mock ad built around the organism you referenced, which is part of some kind of fictional lore.


I’m getting old I guess. That’s a Heathkit HERO-1 personal robot from the 80s.


800 Exa-FLOPS (~ 1 million NVidia A100 GPUs)

50TB disk

128GB RAM

Not too bad for 1984, Castle Wolfenstein was probably running reaaallly smooth.


Marketed by the Sirius Cybernetics Corp.


Thanks for sharing, very interesting. Haven't read the paper yet, but it seems promising, as it mentions Hawkins and Subutai, from Numenta, who have been arguing something along these lines for many years. It is available on biorxiv [1].

[1] https://www.biorxiv.org/content/10.1101/613141v2.full.pdf


OK, I am still parsing the pre-print, but their approach seems very interesting. And equally interesting would be to see what happens if you apply HTM [1] to the same input model, e.g.:

> We next applied our paradigm to a morphologically and electrically complex detailed biophysical compartmental model of a 3D reconstructed layer 5 cortical pyramidal cell (L5PC) from rat somatosensory cortex (Fig. 2A). The model is equipped with complex nonlinear membrane properties, a somatic spike generation mechanism and an excitable apical nexus capable of generating calcium spikes

[1] https://github.com/htm-community/htm.core


Probably less complex than a 1, but more complex than a 0.


A more interesting angle would be "how complex is the genetic description of a neuron"? Or "How many degrees of freedom are there in the genetic description of a neuron"?


Why? What would that tell us about the computational complexity of a neuron?


It might give us an idea of how many types of models we would need to correspond to all genetically reasonable types of neurons.

(my interpretation of the comment) For example, suppose there are 8 free variables in neuron formation from the genetic blueprints, and we find they produce meaningfully different neuron types at about 16 points over the range of each variable. We could end up with roughly 4 billion possible models needed to simulate all possible neurons; then I guess you'd want to reduce those by saying that these two behave like those other two, and start condensing them down. I'm not sure where this would go, but maybe we would find something cool or gain a new insight by figuring out all the possibilities.
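
The arithmetic behind the 4-billion figure (8 and 16 are the comment's hypothetical numbers):

```python
free_variables = 8         # hypothetical genetic degrees of freedom
values_per_variable = 16   # distinct neuron types over each variable's range

models = values_per_variable ** free_variables
print(models)  # 4294967296, i.e. ~4 billion candidate neuron models
```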


The difference between being able to describe a neuron's behavior and being able to simulate that behavior is the difference between being able to say "ten trillion trillion PN junctions" and being able to build a working computer that fits that description.


This paper has found that you need 1000 artificial neurons to simulate a real one. This doesn't say much about the actual complexity of a neuron though, e.g. I'm sure you need many neurons to simulate a Fourier transform, even though the Fourier transform only has a few mathematical terms (in other words, the true complexity of a FT is much lower). What would tell us about its actual complexity though is if we knew how many "variables" the formula describing a single neuron has, which should be approximated by how many degrees of freedom in nature does its genetic description have.


I wonder if it's the case that neurons are something of a Turing tarpit: something like a Turing machine implemented with only a limited number of operators, with other basic operations having to be built from the ones present (e.g. making an XOR out of NAND gates), just highly optimized for a particular use-case.


To me, it seems like going up a layer in biological "abstraction"...


Because the process that constructs a body part is only partially genetic, you'd be leaving out meaningful data.

Mistaken idea: genes are like a program in code that defines what biology does.

More accurate idea: genes are like the NVRAM of a running program that has run continuously with in-place updates for 4 billion years.


> More accurate idea: genes are like the NVRAM of a running program that has run continuously with in-place updates for 4 billion years.

They also represent the NVRAM of the compiler that builds both the program and the compiler itself :)


What an amazing comment, that just blew my mind. Talk about a paradigm shift.


So, we know that human-like general intelligence is possible because we display it right? We're pretty sure it has something to do with that soft stuff inside our brain-baskets, and how that soft stuff is interconnected. We've found computational primitives that we call neurons, so the premise goes, if we can understand and copy how it works we can engineer intelligence. Right? So going to genetics, while interesting and worthy, isn't really going up a level of 'abstraction' because we're trying to get a systems level overview here.

It is a process somewhat like studying a CPU by shocking its pins and occasionally slicing it real thin to see inside.


That isn't really pertinent, or I'd argue even interesting, in the context of this article at all!

It might be helpful to know that (some | many | most) people think of biological neurons as being sort of poor-quality artificial ones, or at least that there is a direct correspondence between an organic neuron and an artificial one. This paper is making the argument that a given biological neuron is more like a network of artificial neurons, which is interesting and important because a lot of research into intelligence right now is going into making 'biologically plausible' models and testing those. If their results are essentially that each neuron has the sophistication of a network then say a future model of a cortical 'cell' (a cluster of neurons we think have a purpose as a group) should take this into account.

Genetics are interesting but neurologically we're still at the phase of knowledge where we are making up wild-ass conjectures and shooting holes in them for want of anything better to be doing.


I suspect not very complex at all. My rank speculation is that neurons are mostly a medium for brain waves and the real processing happens with the waves themselves. Looking for how the brain works in the neuron is like looking for how oceans work in the water molecule.


This has long been the assumption, but most recent research seems to suggest the opposite direction.

Not to mention, even if inside-neuron computation turned out to be largely irrelevant for the whole-brain computation, it's still fascinating to look at how much computation, sensing and planning is happening inside any cell just for the cell to live in its environment. We tend to make a huge distinction between single organisms and their constituent parts, but human cells are not all that hugely different from bacteria in terms of their need for doing smart work to live.


Yeah, brainwaves seem to matter much less in science news I've seen over the last decade than they did before.

I think brainwaves are probably mostly an epiphenomenon of the electrical signals that make up our brains' computation. Of course, EM waves being what they are, there is feedback between that computation and EM waves. But I think a good way to think about it might be like a computer giving off radio signals during certain computations, like [].

[] https://news.softpedia.com/news/emitting-radio-waves-from-a-...


Where do you think these “brain waves” come from? Brain activity is the result of the interaction of neurons and external stimuli.

Just as transistors are fundamental to processors, neurons are fundamental to brains, and it is important that we understand these basic building blocks if we ever hope to understand the brain and intelligence as a whole.


>Where do you think these “brain waves” come from?

Propagation delay and circular loops. No, it doesn't have to be "external" stimuli. There's no explicit input and output layer in how a real brain is constructed; that is an artificial constraint put on artificial neural nets to make a programmer's job easier. A real brain can and likely does have self stimulating loops.

>Just as transistors are fundamental to processors, neurons are fundamental to brains

Transistors are also quite simple as individual units and the computational complexity comes about in how they are connected.


"A real brain can and likely does have self stimulating loops"

This is what I understand causes epilepsy, at least when such loops get out of control.


Wow, that sounds very new-agey. You should also weave in a mention of quantum physics.


It really isn't new-agey. Brain waves certainly exist, can certainly be measured, and the presence or absence of certain frequency bands is certainly relevant to various brain functions (e.g. ~40Hz gamma waves and being conscious). There really is evidence that certain groupings of neurons act like band-pass filters, and that something similar to the frequency tricks of old telephone equipment is used in the brain to pass multiple messages to different recipients over the same neural "wire".

Furthermore, the brain is far too wet and warm for quantum mechanics to be relevant. Coherence times would suck.
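A toy illustration of the frequency-multiplexing idea above. This is purely a signal-processing sketch, not a claim about actual neural circuitry: two messages share one "wire" on different carrier frequencies, and each receiver recovers its own message by correlating against its carrier (an idealized band-pass filter).

```python
import math

N = 200           # samples per symbol period
F1, F2 = 5, 13    # carrier cycles per symbol (orthogonal over N)

def carrier(freq):
    return [math.sin(2 * math.pi * freq * n / N) for n in range(N)]

def transmit(bit1, bit2):
    # On-off keying: each sender adds its carrier to the shared
    # wire only when sending a 1.
    c1, c2 = carrier(F1), carrier(F2)
    return [bit1 * c1[n] + bit2 * c2[n] for n in range(N)]

def receive(wire, freq):
    # Correlate the shared wire against one carrier; distinct
    # integer-cycle carriers are orthogonal, so the other message
    # contributes ~zero energy.
    c = carrier(freq)
    energy = sum(wire[n] * c[n] for n in range(N))
    return 1 if energy > N / 4 else 0

wire = transmit(1, 0)
print(receive(wire, F1), receive(wire, F2))  # → 1 0
```

Old frequency-division telephone multiplexing works on the same principle, which is what the band-pass-filter analogy for neuron groups is pointing at.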


It is an old idea. In 1875 the first recordings of electrical activity in the brain showed waves. Then, naturally, theories like "alpha-rhythm is consciousness" sprang into existence. Only at the end of the 20th century did they become deprecated.


So the content of a signal has no meaning? That seems absurd on its face.


The difficulty I would think, is determining what is the real activity and what is just a side-effect.

For example old CRTs gave off RF signals that could be decoded to read the contents of the screen but the CRT display was not designed to be a radio transmitter.


Actually the opposite. The signal which is propagating has meaning, an individual neuron being active generally does not.


Anirban Bandyopadhyay, who has published a lot of research on the topic, has presented good evidence that microtubules have quantum properties.

From one published paper:

"We demonstrate that a single brain-neuron-extracted microtubule is a memory-switching element, whose hysteresis loss is nearly zero. Our study shows how a memory-state forms in the nanowire and how its protein arrangement symmetry is related to the conducting-state written in the device, thus, enabling it to store and process ∼500 distinct bits, with 2 pA resolution between 1 nA and 1 pA. Its random access memory is an analogue of flash memory switch used in a computer chip. Using scanning tunneling microscope imaging, we demonstrate how single proteins behave inside the nanowire when this 3.5 billion years old nanowire processes memory-bits."[1]

As he discusses in an interview[2], the idea that the membrane is the only important part of the neuron is an idea from 1907 and is fundamentally incorrect. While he does not claim to prove Orch-OR and has his own theories, he is sure the internal structure of neurons with protofilaments of various sizes capable of up to terahertz frequencies has vital functionality that needs its place in neuroscience.

There are about 5,000 microtubules per neuron. If these are quantum devices, we are many thousands or millions of years away from AGI, if it's even possible to achieve it without replicating those structures.

[1] https://aip.scitation.org/doi/abs/10.1063/1.4793995

[2] https://www.closertotruth.com/series/quantum-physics-conscio...


His book is also out now, Nanobrain: The Making of an Artificial Brain from a Time Crystal

Anirban is awesome and his lab is even crazier


As someone who's extremely skeptical about the premise of AGI, I don't see how these results (if they are replicable) would imply anything much about it. Computers are not exactly storage-limited as it is, and latency and (cross-sectional) bandwidth to memory tend to be a lot more impactful than raw capacity in terms of impact on effective computational power. Besides that, neurons have a lot more stuff in them than just microtubules (since they're living cells), which gives us a lot of wiggle room to improve on them even if our actual circuits are potentially less space-efficient than the biological alternatives. IMO, the appeal of studying biological organisms for ideas for computation is a lot more about things like energy efficiency and robustness than pure performance.


Because the current approach to AGI depends on absolutely nothing happening inside the membrane of the neuron.

If there are 5,000 intercommunicating structures with quantum properties * 80 billion classically connected neurons, instead of just the 80 billion node neural network, we may not have approached the capabilities of even a single neuron with classical binary supercomputers. That would explain why there isn't a single example of successful AGI, even one that can emulate the behavior of simple bacteria.

As far as efficiency, quantum computers require 1,500 square feet and lots of electricity to preserve the state of tens of qubits. They can solve problems with less overall power than classical alternatives, but you can't pack a whole lot of them into a neuron.


But nothing of what you quoted of the article has anything to do with qubits, let alone entangled ones? Again, even if it's accurate and useful, it would be a way to provide space (and maybe energy?) efficient storage, not quantum computation. This is not surprising, because living cells have very different requirements from synthetic computers!

And taking an "outside" view, it's certainly not the case that humans are particularly good at solving problems where the only efficient algorithm we know of runs on a quantum computer... so even if there were any interesting quantum computation going on, it's not clear why that would be needed for AGI.

Going back to "cells are not computers," there are some (apparently good) arguments that plant photosynthesis relies in some essential way on nonclassical electron behavior to achieve its high efficiency for the purposes for which plants use it (biomass production), but that doesn't mean we can't achieve higher performance for the purposes of energy production with solar panels using a much simpler process. Even if you believe that the complexity in living organisms is mostly essential (which I do!) it doesn't mean much for the design of machines for humans to use.

I agree with you that the current methods for trying to achieve AGI are dubiously related to how actual brains work and probably aren't going to be successful without some major changes, but that's a very different point!


> But nothing of what you quoted of the article has anything to do with qubits, let alone entangled ones? Again, even if it's accurate and useful, it would be a way to provide space (and maybe energy?) efficient storage, not quantum computation. This is not surprising, because living cells have very different requirements from synthetic computers!

The structure of a microtubule is a Fibonacci geometry, and the theory is that different pathways along the structure provide superposition using a chain of hydrophobically isolated pockets of benzene molecules within the tubulin protein walls. This provides error correction for part of the state of the quantum system, somewhat misleadingly called a qubit, as well as protection against decoherence. I think consciousness depends on these structures and states because this would explain why anesthetics, which bind to aromatics, stop consciousness without killing the brain. That doesn't mean all of Orch-OR is correct, but it is an extremely important piece of evidence, and the only testable hypothesis that I know of for why anesthesia works.

And I agree, living cells have very different requirements. They need to draw as little power as possible, rewire themselves as the environment/computational needs change, and be capable of repairing themselves.

> And taking an "outside" view, it's certainly not the case that humans are particularly good at solving problems where the only efficient algorithm we know of runs on a quantum computer... so even if there were any interesting quantum computation going on, it's not clear why that would be needed for AGI.

There's no evolutionary advantage to solving complex math, but people with different neurology are able to perform incredible calculations. The entire principle behind AGI is that if you can model the network effect of the brain, you get AGI. If that model is completely wrong, the theory behind AGI isn't going to work.

> Going back to "cells are not computers"

Correct, cells cannot be classical computers. They would require too much power for too little benefit. That's why paramecia have microtubules instead of processors, and rely on fundamental aspects of quantum physics instead of comparatively primitive Turing machines. And to borrow a phrase from Hameroff, proponents of AGI should try modeling the behavior of single-celled organisms before they try the brain.

> there are some (apparently good) arguments that plant photosynthesis relies in some essential way on nonclassical electron behavior to achieve its high efficiency for the purposes for which plants use it (biomass production), but that doesn't mean we can't achieve higher performance for the purposes of energy production with solar panels using a much simpler process. Even if you believe that the complexity in living organisms is mostly essential (which I do!) it doesn't mean much for the design of machines for humans to use.

The latest breakthroughs in solar efficiency are literally based on inspiration from biology:[1] "Although we can’t replicate the complexity of the protein scaffolds found in photosynthetic organisms, we were able to adapt the basic concept of a protective scaffold to stabilize our artificial light-harvesting antenna.”

The newest research of building quantum computing devices is moving away from trying to wrangle individual atoms and instead moving to storing and manipulating molecules.[2] So, for the next generation of computing, the complexity of living organisms might be absolutely essential for designing machines. As quantum computers grow in capability, we will be able to model more of the quantum world, because "predicting the behavior of even simple molecules with total accuracy is beyond the capabilities of the most powerful computers."[3] Each generation of quantum computers will bootstrap the next.

Once the hubris and arrogance of people who think nature couldn't possibly have evolved to take advantage of quantum properties is finally over, I think there will be a revolution in every field as it gets cheaper and cheaper to model the Planck-scale world. The first victim will be AGI, and there is a lot of money and ego desperate to keep that marketing scheme viable.

[1] https://scitechdaily.com/breakthrough-in-stabilization-of-bi...

[2] https://www.sciencedaily.com/releases/2020/09/200902095130.h...

[3] https://www.scientificamerican.com/article/how-quantum-compu...


> The latest breakthroughs in solar efficiency are literally based on inspiration from biology:[1] "Although we can’t replicate the complexity of the protein scaffolds found in photosynthetic organisms, we were able to adapt the basic concept of a protective scaffold to stabilize our artificial light-harvesting antenna.”

Whether or not "the next generation" is "biologically inspired" (which doesn't mean using exactly the same mechanism), solar panels right now outperform plants on a pure energy-production basis, and do so without any complex molecular or quantum machinery. This is because energy production is much easier than biomass production. That's my basic point: if you don't care about beating life on literally every axis (particularly energy efficiency, replication, and use of cheaply available materials), it's entirely conceivable you can do better. So if people want to build nanobots there's basically no chance they're going to beat life, but that's pretty different from AGI (even though it seems like the same people are invested in both, for whatever reason).



The comic is confidently patronizing and either very old or poorly researched. It assumes that isolated atoms are the only way to interact with superposition, which is not true and not where any of the next-generation designs for quantum computing are headed.

“Realization of universal quantum gates is rather challenging in the case of spin defects because the rigorous conditions needed for confining quantum decoherence are likely to limit the coherent exchange of information between qubits as a result of scarce control over qubit-qubit distances. In this respect, a chemistry-driven bottom-up approach to qubit scalability is more appropriate. Molecules are highly versatile, enabling their electronic structures and spin environments to be tuned at will with the use of simple synthetic chemistry tools. Moreover, they can be replicated in large number, functionalized in the desired way, and organized in a controlled manner for the production of large qubit arrays. Undoubtedly, chemical design offers endless opportunities for magnetic molecules to be tailored for specific technological tasks.”

https://www.cell.com/chem/pdf/S2451-9294(20)30130-3.pdf



