How to build a brain – An introduction to neurophysiology for engineers (juliuskunze.com)
221 points by juliuskunze on March 15, 2018 | 24 comments



I can recommend the talks by Joscha, who is trying to build an AI based on ideas about how our brain works. Unbelievably great talks.

https://media.ccc.de/v/31c3_-_6573_-_en_-_saal_2_-_201412281...
https://media.ccc.de/v/32c3-7483-computational_meta-psycholo...
https://media.ccc.de/v/33c3-8369-machine_dreams

31c3, 32c3, and 33c3 are the yearly conferences of the German hacker group CCC, held at the end of December.

2017's was 34c3, held in Leipzig. (Do a Google image search to get a feel for what I'm talking about: https://www.google.de/search?q=34c3&source=lnms&tbm=isch)

All talks are free to watch (there are a lot more on media.ccc.de).


To be clear, this article is about neural computation, while Joscha has chosen to ignore the constraints imposed by this computational paradigm.


Yes, but others have asked how a brain works at the system level.


I think that (speaking as a neuroscientist) one important physiological aspect this intro glosses over, and one that would be relevant to any of you machine learning gurus, is how neurons manage their synaptic weights. Most artificial neural network models I've seen use a partial-derivative-based backprop algorithm to update 'synaptic' weights. In neurons these weights (in the case of fast excitatory transmission) are proportional to the number of glutamate receptors currently in a given synapse. During learning (let's say associative learning, where I associate the sound of your voice with the visual info about your face) something happens to increase the number of these receptors at key synapses. That way, the next time the upstream neuron fires, it more easily elicits activation from the downstream neuron, because there are more receptors at those synapses.

So the neuron activated by the unique auditory signature of your voice will release the same quanta of neurotransmitter at all of its terminals, but since the synapses onto the neurons carrying the visual info of your face have upregulated their receptor numbers, those neurons are going to be easily activated. (Thus when you call me on the phone, your voice readily activates those visual neurons for your face, and I remember what it looks like.)
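
In machine-learning terms, a toy version of that weight rule might look like the following (my own sketch, all numbers made up): the 'weight' is just the receptor count, and a learning event adds receptors instead of applying a gradient.

    import numpy as np

    # Toy version of the weight rule described above (numbers made up): the
    # "weight" of a synapse is just its current glutamate receptor count, and
    # an associative learning event adds receptors rather than applying a
    # backprop gradient.
    quantum = 1.0                      # transmitter released per spike
    receptors = np.full(5, 20)         # baseline receptor counts at 5 synapses

    def epsp(receptors):
        # Postsynaptic response scales with receptor count, not with how
        # much transmitter the upstream neuron releases.
        return quantum * receptors

    def associative_event(receptors, active, gain=10):
        # Co-activation upregulates receptors only at the active synapses.
        updated = receptors.copy()
        updated[active] += gain
        return updated

    print("before:", epsp(receptors))                      # identical response everywhere
    receptors = associative_event(receptors, active=[2])   # "voice + face" pairing
    print("after: ", epsp(receptors))                      # synapse 2 now responds more strongly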

If you are interested in seeing how the postsynaptic neuron manages its synaptic receptor numbers (and the intricate process the backprop algorithm gets to skip), you might like to check out a 3D simulation I've made to model this process.

Animation: https://www.youtube.com/embed/6ZNnBGgea0Y
Code on github: https://github.com/subroutines/plasticity

(note the animation is not too exciting, so you might want to skip forward a few times)

But basically you will see that neurons rely on the stochastic surface diffusion of receptors to deliver new ones to synapses. The takeaway is that during rest (or whatever you want to call it... baseline, non-learning, etc.) synapses will reach some steady-state number of receptors. If the synapse undergoes some learning event, it needs to 'capture' more receptors to increase that steady-state number. It does this through modifications to proteins just below the surface of the postsynaptic membrane that act like velcro for surface receptors floating by. Thus synapses take control of their synaptic weights by managing the surface diffusion rate of excitatory receptors.
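
A drastically simplified 1D version of that capture process (my own sketch, not the actual 3D simulation) could look like this, with the 'velcro' strength expressed as a capture probability:

    import numpy as np

    rng = np.random.default_rng(1)

    # Each step, a freely diffusing receptor is captured by the synapse with
    # probability p_capture (set by the sub-membrane "velcro" scaffold), and a
    # captured receptor escapes with probability p_escape. The steady-state
    # count is the balance of the two.
    def steady_state_count(p_capture, p_escape=0.05, n_receptors=200, steps=5000):
        in_synapse = np.zeros(n_receptors, dtype=bool)
        for _ in range(steps):
            u = rng.random(n_receptors)
            in_synapse = np.where(in_synapse, u > p_escape, u < p_capture)
        return int(in_synapse.sum())

    print("baseline :", steady_state_count(p_capture=0.01))  # ~ 200 * 0.01/0.06
    print("after LTP:", steady_state_count(p_capture=0.03))  # stickier velcro

Raising the capture probability raises the steady-state receptor count, which is exactly the synapse increasing its weight.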


I am working between machine learning and neuroscience, and find such explanations really interesting, so thank you. I would agree that there is currently not a lot of evidence that non-local learning such as backpropagation happens in the brain (even though there are some papers trying to find this evidence). But what you describe still falls under the Hebbian principle ("fire together, wire together"), right? And this is still an idea originally ingrained in perceptrons. While the brain's local learning algorithm is unknown, and knowing it would be important for machine learning, e.g. to get ahead with unsupervised learning, there is quite some recent evidence that machine learning people are on the right track, at least regarding what deep convolutional neural networks learn in their hierarchy.
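
For concreteness, here is the locality difference between the two kinds of rules, as a rough sketch (function names and learning rate are made up):

    import numpy as np

    # A Hebbian update uses only locally available signals (pre- and
    # postsynaptic activity), while backprop needs an error signal computed
    # elsewhere in the network.
    def hebbian_update(w, pre, post, lr=0.1):
        return w + lr * np.outer(post, pre)      # "fire together, wire together"

    def backprop_update(w, pre, error, lr=0.1):
        return w - lr * np.outer(error, pre)     # error arrives from downstream layers

    w = np.zeros((2, 3))
    pre = np.array([1.0, 0.0, 1.0])
    post = np.array([1.0, 0.0])
    print(hebbian_update(w, pre, post))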


The example I gave was indeed Hebbian. I was merely using it to facilitate understanding of the general physiological mechanism used by neurons to manage their synaptic weights. The same general mechanism is also used in (mono)synaptic efficacy management for non-associative learning (e.g. sensitization, habituation, etc).

But I agree there is much we neuroscientists can learn from the machine learning research that provides insight into the biological system. Trust me, we are paying close attention.


Thank you for the interesting comment.

Can you give me a rough time scale for how quickly neurons learn (synaptic plasticity)? I have a recollection that the time scale is on the order of tens of milliseconds, but I might be wrong.


As @radicalOH mentions, there are several timescales depending on whether you are talking about immediate, short-term, or long-term synaptic potentiation, each coordinated by a cascade of events (which have been extensively documented, if you're interested in knowing more).

To specifically address your question about how quickly learning happens in neurons... it's fast. Is it ~10 ms? Mmm, that seems incredibly fast, and I'm not sure I can provide any definitive answer. I think it heavily depends on what you're willing to interpret as 'learning' when examining events at the molecular level. There are events that produce near-immediate alterations to ongoing transmission, and other events that support lasting electrical changes and (necessarily?) develop across longer timescales.

You might think of this distinction as the difference between being able to hold a phone number in consciousness by repeating it over and over, and actually being able to recall that phone number 5 minutes later. In the former, certainly something internal must have changed, because a moment ago you weren't repeating this number in your head, and now you are. On the other hand, can we really say it was learning if the number is lost the moment it leaves consciousness? Anyway, I'm not going to wax philosophical on that. I'll just show you some data for timescales we think are sufficient to produce lasting increases in synaptic efficacy...

Here is a video showing a single spine just after 2-photon 'glutamate uncaging':

https://goo.gl/rfgjrW

(as I mentioned above, glutamate receptors are the primary receptors involved in fast excitatory transmission; so uncaging tons of glutamate right next to a spine is going to evoke immediate plasticity changes, particularly under these experimental conditions)

Note that in just a few seconds the spine has doubled in size. Morphological changes in the spine are probably not the first events that alter synaptic transmission, but they are a reliable sign of a lasting memory trace.

Here is a whole-cell recording of the electrical response in a neuron to incoming signals, before and after stimulation like that above ('EPSC' stands for 'excitatory postsynaptic current'):

https://i.imgur.com/0OTMwH5.png


There are multiple time scales synaptic plasticity happens on. For instance, ion channels can be moved in or out of the membrane to modulate excitability. On a longer time scale, gene expression can change. Synaptic plasticity can get really complicated, but Kandel's book has some great chapters on the mechanisms.


> Note that this means that for transmission over short distances, we are not constrained to all-or-nothing encoding and in fact amplitude-based encoding is preferable due to higher data rate and energy efficiency. This explains why short neurons never use spikes in the brain.↗ p. 36

Fast high-bandwidth interconnects use this too, in optical-fibre modulation for telecom, and not just plain amplitude modulation either. I guess the interconnects between the processing nodes of a supercomputer are linked in a similar manner.
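
A back-of-the-envelope sketch of the data-rate point (my numbers, not the article's):

    from math import log2

    # With L distinguishable amplitude levels per symbol you carry log2(L)
    # bits, so amplitude coding beats all-or-nothing (2-level) coding whenever
    # the noise over a short link still lets you tell the levels apart.
    for levels in (2, 4, 8, 16):    # e.g. NRZ vs PAM-4/8/16 interconnects
        print(f"{levels:>2} levels -> {log2(levels):.0f} bits per symbol")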

I would really like to know how brain waves figure into this: does the brain do "wifi" between distant nodes?

> How do brains information?

You accidentally a word! Not just there, by the way.


While it's just an introduction, what I am actually missing is a proper introduction to the neural processing model. How it works in the physical brain, with potentials and a couple of different neurotransmitters, is a whole different matter!

By using just the model of neural processing one can achieve amazing results. Whether a neuron activates its output links depends on the weighted sum of the activated input links: if the sum is strongly positive, the neuron activates its output; if it's strongly negative, the neuron even inhibits activation of successor neurons. But even that basic tidbit is missing from the text.

"The neuron adds inputs in some way..." Really? Oh man. I consider myself an engineer, but that article doesn't say anything about how neurons and neural nets specifically work.


Not sure exactly how biological neurons work, but artificial neurons are just a weight matrix and some type of sigmoid function. Pretty much all a neuron does is perform logistic regression.
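
Concretely, something like this sketch (not any particular library's API):

    import numpy as np

    # A single artificial neuron is the logistic-regression model:
    # a weighted sum squashed through a sigmoid.
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def artificial_neuron(x, w, b):
        return sigmoid(np.dot(w, x) + b)   # plays the role of P(class 1 | x)

    x = np.array([0.5, -1.2, 3.0])
    w = np.array([0.8, 0.1, -0.4])
    print(artificial_neuron(x, w, b=0.2))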


The thing with brains is that even if you understand all the building blocks, that really doesn't tell you much about how the brain actually works. Trying to understand the brain from neurons is like trying to understand Microsoft Word by looking at its machine code. Of course the brain has the added complexity that it isn't designed by humans, and so it's difficult to pinpoint where one region ends and another begins.


I disagree that this is the reason we haven't understood the brain. If we could replicate the underlying processes, we could attempt to simulate a brain, and maybe successfully so, even if we don't understand why it works.

The main problem seems to be that we actually don't understand all the building blocks, at least according to the article:

> There are fundamental questions left unanswered: What information do neurons represent? How do neurons connect to achieve that? How are these representations learned?

Until we figure out how and why connections are created between neurons, we only know how the brain responds to stimuli it has already learned, but not how it learns new information.


> Trying to understand the brain from neurons is like trying to understand Microsoft Word by looking at its machine code.

There's a series of articles by a guy who fixes bugs in an old game without source code, in plain assembly. (Unfortunately, it's in Russian only.) https://habrahabr.ru/post/349296/


This is really incomplete and written in a way that isn't constructive. This is not why things are the way they are.


This predictably omits large-scale features such as boundary conditions and the resulting pattern formation.

Think of a guitar string: what matters for tuning it is the tension more than the composition. In the same way, standing waves in brain tissue reflect geometry and connectivity as much as membrane potentials.
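
The standard ideal-string formula behind the analogy, as a quick sketch (numbers made up):

    from math import sqrt

    # Fundamental frequency f = sqrt(T / mu) / (2 L). Tuning changes the
    # tension T; the material only enters through the linear density mu.
    def fundamental_hz(tension_n, linear_density_kg_per_m, length_m):
        return sqrt(tension_n / linear_density_kg_per_m) / (2 * length_m)

    print(fundamental_hz(70.0, 0.006, 0.65))   # ~83 Hz, near a guitar's low E
    print(fundamental_hz(90.0, 0.006, 0.65))   # same string, tightened -> higher pitch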


Relevant: "How to Build a A Brain" by Chris Eliasmith https://www.amazon.com/How-Build-Brain-Architecture-Architec...


Also relevant: Principles of Neural Design by Sterling and Laughlin https://mitpress.mit.edu/neuraldesign


Also relevant, though maybe deviating a bit from engineering: Principles of Neural Science by Kandel and Schwartz https://www.amazon.com/Principles-Neural-Science-Fifth-Kande...


> There are fundamental questions left unanswered: What information do neurons represent? How do neurons connect to achieve that? How are these representations learned? Neuroscience offers partial answers and algorithmic ideas, but we are far from a complete theory conclusively backed by observations.

Calling the answers "partial" is fair, but I feel like the author understates the number of observations backing typical neural models? The lab I belong to created Spaun, which admittedly uses a pretty simple neuron model, but still matched a ton of neural and behavioural data! There's also Leabra, which I have some qualms with, but still has a pretty large collection of results.


> In the brain, slow conductors make it hard to have a synchronized clock signal. This rules out digital coding,

But clockless processors exist, although they are not particularly popular.


I don't think "clockless" is apt, asynchronous, sure, but the gates are still, well ... gated, as far as I can tell, e.g. in the arm amulet research. There's no central clock, yes, but it's still time discrete, I guess.


> "...slow speed of the sodium pump recovering the resting potential..."

Any neuroscientists out there know if cycles (from graph theory) are possible in the brain?

Put another way, would the signal generated from neuron1 be able to travel around in a circuit of other neurons and come back to activate neuron1 again? Or is this not possible because neuron1 would still be recovering?

I've wondered this for a while and would love it if someone had the answer :)
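
To make the question concrete, here is a toy discrete-time sketch (all parameters made up) of a spike trying to circulate in a ring:

    import numpy as np

    # A spike travels around a ring of neurons; neuron1 can re-fire only if
    # the loop is long enough that the spike returns after neuron1's
    # refractory period has ended.
    def spike_recirculates(ring_length, refractory_steps, total_steps=50):
        last_spike = np.full(ring_length, -10**9)   # time each neuron last fired
        last_spike[0] = 0                           # neuron1 fires at t = 0
        active = 0                                  # index of the neuron spiking now
        for t in range(1, total_steps):
            nxt = (active + 1) % ring_length
            if t - last_spike[nxt] <= refractory_steps:
                return False                        # spike hits a refractory neuron and dies
            last_spike[nxt] = t
            active = nxt
        return True                                 # spike is still circulating

    print(spike_recirculates(ring_length=3, refractory_steps=5))   # False: loop too short
    print(spike_recirculates(ring_length=8, refractory_steps=5))   # True: re-entry works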



