What I find surprising about this type of news is why the brain would need so much complexity.
It seems to me that a network with 10^11 neurons and 10^14 synapses should have sufficient computational power to carry out the information-processing tasks humans perform, even if each neuron were a simple functional unit.
This belief is based on the following observations:
- I have personal experience with ANNs of only a few thousand nodes that can rival humans at handwriting recognition.
- Current computers are far from powerful enough to simulate a 10^14-synapse ANN, yet they seem to be rapidly approaching human-level performance on many cognitive tasks (e.g., Watson).
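As a sanity check on that second point, here's a quick back-of-envelope estimate of the compute needed to simulate all those synapses at biological speed. The average firing rate and the cost per synaptic event are assumptions on my part, not measured values:

```python
# Back-of-envelope: compute needed to simulate the brain's synapses in
# real time. Firing rate and per-event cost are rough assumptions.
synapses = 1e14          # commonly cited synapse count
avg_firing_hz = 100      # assumed average spike rate (a generous bound)
ops_per_event = 1        # assume one multiply-add per synaptic event

ops_per_second = synapses * avg_firing_hz * ops_per_event
print(f"{ops_per_second:.0e} ops/s")  # on the order of 10^16 ops/s
```

That lands around 10 petaops/s, which is roughly top-supercomputer territory today, so "far from powerful enough" depends heavily on those assumed constants.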
If individual neurons are as complex as recent research suggests, I wonder what all that computational power is being used for. Or is the human brain just hopelessly inefficient as an information-processing machine? Maybe it's such a recent development that evolution simply hasn't had time to get things right.
It's not about complexity or raw computational power; it's about functionality. One area where brain research still struggles to model the basics is how the brain actually integrates information over time, and how internal models and representations are formed.
The "classical" synaptic response model was always good at explaining basic signal transmission, but it was essentially stateless. Now we know that neurons are far from stateless: there is extensive chemical modification going on at different timescales, and I'd guess this "new" discovery is another important piece that was missing from the standard model. It may explain advanced neuronal states that surpass simple chemical sensitization and suppression, and it may also provide hints about how feedback works in learning and in building internal representations.
ANNs and other AI techniques are getting very good and efficient, but one reason why general artificial intelligence (as in artificial persons) continues to escape us is that we still don't have a good model of how the brain organizes and improves itself to form a consistent but autonomously adapting unit that can rightfully be called a mind. I hope AI researchers can use these pointers from bio research to advance toward this goal.
I didn't think this was new... I remember hearing about this effect last year, and I believe it was attributed to oligodendrocytes.
That said, it's a very important development, because until the last few years the glial cells have mostly been considered to be support cells (e.g. supplying nutrients to the neurons, removing waste products and dead cells, myelinating axons, etc.). But, now we know that they can affect the surrounding neurons and may play a role in things like learning and memory.
This article was from last year (don't know if it's the one you're thinking of), but the paper just theorizes that glia may be involved in the described mechanism.
I think we can be fairly certain that glial cells are involved in neuronal communication, but I wouldn't say this paper proves that at all.
What was remarkable about this paper was that it demonstrated that action potentials (basically, a rapid depolarization of the neuron's membrane potential) could start not only in the soma but also in the axon.
We already knew that axons could send messenger proteins back to the soma (cell body), thus modulating transmitter production, and could have an inhibitory or excitatory effect on the cell as a whole. We were also aware of axo-axonic synapses, whereby axons can inhibit other axons (among other things).
EDIT: The above is just an extremely brief background of well-known facts about axon messaging.
Right. The question is not whether quantum effects exist in the brain, but whether they have any effect at all. Right now there are only theories: no empirical data exists to prove or disprove them.
People may have a crappy built-in magnetometer, but it seems you don't need one - just a lot of practice. There are cultures that put such weight on the cardinal directions that they always maintain a sense of where they are facing.
Very interesting. I've always had a very accurate sense of direction, to the point of not needing a compass to orient a map when bushwalking. Yet when I moved to the other side of the continent, I was completely thrown.
I wondered to myself whether it was something to do with the very different magnetic declination. No evidence, of course.
I moved back and have regained it again. Mysterious.
There have been recent discoveries about synapses as well; it turns out synapses are pretty complex. Lots of single-celled organisms show surprisingly "smart" behavior for their size, and much of that data processing happens around the cell membrane. These mechanisms are the evolutionary roots of synapses.
Indeed, it seems that some kind of 'backpropagation' does happen in the brain, contrary to what was long believed. This might have an impact on machine learning research.
Not really. For instance, people looked for a 3'-5' DNA polymerase for decades, because it seems like it would allow a simpler mechanism for DNA replication than how it really works. But it doesn't seem to exist anywhere.
My statement wasn't meant as a disparagement of biologists. Simply that there is a lot of "conventional wisdom" out there in the world that turns out to be false.
When I started grad school (mol bio/genetics), there was a laundry list of things that "never happened in biology". By the time I finished grad school a lot of those items were removed from the laundry list.
And, as I'm sure you're aware, the inability to find something is not evidence that it doesn't exist.
Basically, when a cell divides, it needs to produce an extra copy of its DNA, one for each daughter cell. The DNA in your cells is double stranded, which means it is basically two complementary strands stuck together. So each of those strands needs to be duplicated before the cell divides.
You might imagine that the cell would do this by splitting the two DNA strands and sending a molecular machine down each one to replicate it. That is what it does - kind of. The tricky part is that the two strands are antiparallel: the heads and tails of the nucleotides (A, C, T, and G) in each strand point in opposite directions.
You might think that, if evolution can create a machine that works in one direction, it could create a machine that works in the other direction. Then, one could be used on each strand in parallel. Back in the day (~40 years ago), this is basically what everyone assumed must be happening.
But that's not what happens. We only have a machine that goes in one direction. People spent many, many years looking for these little molecular machines, but only found ones that go in the same direction. None that go backwards.
So the backwards strand is duplicated in a really convoluted process. Basically, instead of copying it all in one shot, the machinery has to repeatedly jump ahead and work back, jump ahead and work back, creating a bunch of little DNA fragments (Okazaki fragments). As it goes, all the little fragments have to be tied together. It's a very strange process.
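To make the "jump ahead, work back" pattern concrete, here's a toy sketch. Real replication is enzymatic (primase, polymerase, ligase); this just illustrates that copying chunk-by-chunk in fragments, then joining them, ends up producing the same complementary strand as one continuous pass. The fragment length and sequence are made up:

```python
# Toy model of leading- vs lagging-strand replication. Not biology,
# just an illustration of fragment-wise copying plus ligation.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def leading_strand(template):
    # Continuous synthesis in a single pass along the template.
    return template.translate(COMPLEMENT)

def lagging_strand(template, fragment_len=4):
    # The fork exposes the template a chunk at a time; each chunk is
    # copied as a short Okazaki fragment, and the fragments are then
    # "ligated" (joined) into one strand.
    fragments = []
    for start in range(0, len(template), fragment_len):
        chunk = template[start:start + fragment_len]
        fragments.append(chunk.translate(COMPLEMENT))
    return fragments, "".join(fragments)

template = "ATGCGTACGGTA"
frags, joined = lagging_strand(template)
# Both strategies yield the same complementary strand in the end.
assert joined == leading_strand(template)
```

The convoluted fragment-based route gives the same result as the simple continuous one, which is exactly why people assumed a backwards-going polymerase ought to exist.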
It's always seemed intuitively surprising that there's no feedback mechanism within neurons to aid learning. What's the currently favoured mechanism for learning? Neurons feeding back to previous neurons?
There's no single mechanism in neuroscience to explain learning in general, because the current understanding is that "learning" is a very vague term that covers many types of adaptation, and each has its own mechanism.
I'm not familiar with any network-level mechanisms, but there are many local (synapse- or dendrite-level) ones. The one I'm most familiar with is spike-timing-dependent plasticity (STDP) [1], which modifies the strength of a synapse based on the millisecond-level timing of action potentials. When cell A tends to fire just before cell B, and A synapses onto B, the synapse from A to B increases in strength. The reverse is true too: if cell A tends to fire just after cell B, the synapse decreases in strength. This is a form of Hebbian learning [2].
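A minimal sketch of a pair-based STDP rule, for the curious. The weight change depends on the spike-time difference dt = t_post - t_pre; the amplitudes and time constant below are illustrative, not fitted to data:

```python
import math

# Pair-based STDP: weight change as a function of spike-time difference.
# Constants are illustrative placeholders, not experimental values.
A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
TAU = 20.0                       # ms, decay constant of the STDP window

def stdp_dw(t_pre, t_post):
    dt = t_post - t_pre
    if dt > 0:    # pre fires before post -> strengthen (LTP)
        return A_PLUS * math.exp(-dt / TAU)
    elif dt < 0:  # pre fires after post -> weaken (LTD)
        return -A_MINUS * math.exp(dt / TAU)
    return 0.0

w = 0.5
w += stdp_dw(t_pre=10.0, t_post=15.0)  # causal pairing: weight goes up
w += stdp_dw(t_pre=30.0, t_post=22.0)  # anti-causal: weight goes down
```

The exponential window means pairings only count when the spikes are within a few tens of milliseconds of each other, which is what makes the rule "spike-timing dependent" rather than plain rate-based Hebbian learning.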
Yes, recurrent connections are one mechanism that could be responsible for learning. They can be used to 'steer' front-end neurons, for example to focus on a certain feature.
There is also 'Hebbian learning', which means that the connections between neurons that fire at the same time become stronger.
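In its simplest form, the Hebbian rule is just "change in weight proportional to correlated activity". A tiny sketch (learning rate and activity values are illustrative):

```python
# Minimal Hebbian update: connections between co-active units strengthen.
def hebbian_update(w, pre, post, lr=0.1):
    # dw is proportional to the product of pre- and post-synaptic activity.
    return w + lr * pre * post

w = 0.2
w = hebbian_update(w, pre=1.0, post=1.0)  # both active: weight grows
w = hebbian_update(w, pre=1.0, post=0.0)  # post silent: weight unchanged
```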
If the network has significant feedback, couldn't these slower "backward" signals be understood the same way as a fast-moving propeller that appears to reverse direction? I'm curious how they measured this, but I don't have thirty dollars to spend.
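The propeller analogy is an aliasing effect, and it's easy to sketch: sample a rotation faster than half the sampling rate and the inferred motion runs backwards. The rates below are made-up numbers purely for illustration:

```python
# Wagon-wheel / propeller aliasing: the apparent per-sample rotation,
# wrapped into (-0.5, 0.5] turns, which is the smallest step an
# observer sampling at sample_hz would infer.
def apparent_step(rotation_hz, sample_hz):
    step = (rotation_hz / sample_hz) % 1.0
    return step - 1.0 if step > 0.5 else step

print(apparent_step(2.0, 60.0))   # slow blade: small forward step
print(apparent_step(58.0, 60.0))  # fast blade: appears to step backwards
```

If the measurement technique samples activity much more slowly than the underlying dynamics, a similar artifact could in principle make forward-propagating signals look backward, which is why the sampling details of the paper matter.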