Hacker News

Very interesting; I'm surprised at 0 comments after 11h!

Here's the abstract:

> "Neuromorphic technologies typically employ a point neuron model, neglecting the spatiotemporal nature of neuronal computation. Dendritic morphology and synaptic organization are structurally tailored for spatiotemporal information processing, such as visual perception. Here we report a neuromorphic computational model that integrates synaptic organization with dendritic tree-like morphology. Based on the physics of multigate silicon nanowire transistors with ion-doped sol–gel films, our model—termed dendristor—performs dendritic computation at the device and neural-circuit level. The dendristor offers the bioplausible nonlinear integration of excitatory/inhibitory synaptic inputs and silent synapses with diverse spatial distribution dependency, emulating direction selectivity, which is the feature that reacts to signal direction on the dendrite. We also develop a neuromorphic dendritic neural circuit—a network of interconnected dendritic neurons—that serves as a building block for the design of a multilayer network system that emulates three-dimensional spatial motion perception in the retina."




I would say the title alone kept me away from reading it. And this is coming from someone who worked in systems neuroscience and machine learning for a decade.

The scientific community (at least the neuroscience and atmospheric science communities I know about) does itself few favors with the way it writes. Orwell's essay on politics and the English language comes to mind:

https://www.orwellfoundation.com/the-orwell-foundation/orwel...


Is your objection to the title just political, or does it have red flags for you on the science front? From the outside, it seems like an interesting thing to do to me (incorporating spatial stuff into a kind of neural network, I mean). But I don't know enough neuroscience to know if that's even a sensible thing to do haha.


The spatial characteristic is important because activity at two nearby dendritic sites receiving excitatory (or inhibitory) inputs has an additive effect that is greater the closer they are. This is because the signal is based on the concentration of ions, and when those signals are closer together they can surpass the concentration threshold needed to trigger voltage-gated ion channels to open.
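A toy sketch of that summation in Python. Everything here is made up for intuition (the exponential falloff, the length constant, the threshold); it is not fitted to any real neuron:

```python
import math

def local_depolarization(site_um, inputs, length_constant_um=50.0):
    """Toy model: each input's contribution decays exponentially with the
    distance between its synapse and the measurement site."""
    return sum(amp_mv * math.exp(-abs(site_um - pos_um) / length_constant_um)
               for amp_mv, pos_um in inputs)

THRESHOLD_MV = 10.0  # made-up local threshold for voltage-gated channels

# Two 6 mV inputs 10 um apart vs. 150 um apart (positions along a dendrite)
near_pair = [(6.0, 100.0), (6.0, 110.0)]
far_pair  = [(6.0, 100.0), (6.0, 250.0)]

peak_near = max(local_depolarization(p, near_pair) for _, p in near_pair)
peak_far  = max(local_depolarization(p, far_pair)  for _, p in far_pair)

print(peak_near > THRESHOLD_MV, peak_far > THRESHOLD_MV)  # True False
```

Same two inputs either way; only their separation changes whether the local depolarization crosses threshold.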


Interesting, thanks for the basics. That makes sense - I'm a bit confused about how inhibition vs excitation works if they're both based on ion concentration (is it like positive vs negative ions, or a different mechanism entirely?), but I'll do some googling!


Different ions with different charges. Sodium (Na^+), as the notation shows, has one positive charge per ion, as does potassium (K^+); calcium has two (Ca^2+), and hydrogen has one (H^+). This charge is balanced by negative charges, a minority from chloride (Cl^-) and the majority from proteins. Proteins are made up of amino acids with varying charges, and they generally end up with a net negative charge from all of the sites where they can be protonated. For clarification, protonation is when an H^+ (a proton) binds to a molecule with a negative charge; this is how most acids work (that's a generalization, don't @ me, chemists).

So we have a shitload of proteins inside the cell with a shitload of negative charges, we have a variety of ions with positive charges, and we have ion channels, which are transmembrane proteins with a specific structure that allows certain ions of a given charge or size (or just all ions in general) to pass through their pore. Some are leak channels which are almost always open, some open when a ligand (another molecule) binds, and some open when they sense a certain voltage.

That takes us to the difference in charge. The cell membrane acts as a capacitor, separating charge. There are many pumps that pump out positive ions to maintain this separation, and different cells can sit at different voltages, where the voltage is simply the difference in charge between the intracellular and extracellular space. This can be measured in whole-cell patch clamp experiments, where we attach a pipette with a tip smaller than a cell to the membrane and apply suction to break into the cell so we can get a reading, but that's a whole topic on its own.
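For anyone curious, the equilibrium voltage for each ion follows from the Nernst equation, E = (RT/zF) ln([out]/[in]). A quick sketch with typical textbook concentrations (exact values vary quite a bit by cell type):

```python
import math

def nernst_mv(z, conc_out_mm, conc_in_mm, temp_c=37.0):
    """Nernst equilibrium potential in mV: E = (RT/zF) * ln([out]/[in])."""
    R, F = 8.314, 96485.0          # J/(mol*K), C/mol
    T = temp_c + 273.15            # kelvin
    return 1000.0 * (R * T) / (z * F) * math.log(conc_out_mm / conc_in_mm)

# Rough mammalian concentrations in mM, for illustration only
E_K  = nernst_mv(z=+1, conc_out_mm=5.0,   conc_in_mm=140.0)  # about -89 mV
E_Na = nernst_mv(z=+1, conc_out_mm=145.0, conc_in_mm=12.0)   # about +67 mV
E_Cl = nernst_mv(z=-1, conc_out_mm=110.0, conc_in_mm=10.0)   # about -64 mV

print(round(E_K), round(E_Na), round(E_Cl))
```

The pumps mentioned above are what maintain those concentration gradients in the first place.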

When it comes to excitation vs. inhibition, it really is about positive vs. negative charges. Excitatory ion channels such as kainate, AMPA, and NMDA receptors pass cations, the positive ions, while GABA-A and glycine receptors are ion channels that, in adults, selectively allow Cl^- into the cell.
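A minimal way to see the positive-vs-negative point: whether opening a channel excites or inhibits falls out of where its reversal potential sits relative to the membrane potential. The numbers below are rough illustrative values, not measurements:

```python
def synaptic_effect(v_m_mv, e_rev_mv):
    """If a channel's reversal potential is above the membrane potential,
    opening it pulls the cell up (depolarizing/excitatory); at or below,
    it pulls the cell down or clamps it there (inhibitory/shunting)."""
    return "excitatory" if e_rev_mv > v_m_mv else "inhibitory"

V_REST = -70.0    # typical resting potential, mV
E_CATION = 0.0    # rough mixed Na+/K+ reversal for AMPA-type channels, mV
E_CL = -75.0      # rough adult Cl- reversal, mV

print(synaptic_effect(V_REST, E_CATION), synaptic_effect(V_REST, E_CL))
# excitatory inhibitory
```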

The developing brain is weird and backward when it comes to Cl^-, so again, another topic.


Wild, that's really complicated. A bit of a tangent, but one takeaway for me there is that neural nets are _far_ more dissimilar to brains than I thought. Sounds like a lot more going on than "signals add up to threshold"!


The thresholding that generates all-or-none action potentials gets way too much attention. The real action is the flux of ionic currents across membranes. Action potentials are easy to measure, but it's important to remember that they are ONLY an intRAcellular form of communication. IntERcellular communication is via quantal and graded release and reuptake of transmitters that evoke graded ionic fluxes (with some exceptions). The brain is best viewed as a massive analog computer with some facets of digital computing.
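The analog-with-occasional-digital-events picture can be sketched as a leaky integrator. This is a toy model with made-up constants, not a fit to any real cell:

```python
def simulate(currents, dt=1.0, tau=20.0, v_rest=-70.0, threshold=-55.0):
    """Euler integration of a leaky membrane: dV/dt = (v_rest - V)/tau + I.
    Subthreshold, V is a continuous (analog) trace; crossing threshold
    emits an all-or-none event and resets (the 'digital' part)."""
    v, trace, spike_times = v_rest, [], []
    for i, current in enumerate(currents):
        v += dt * ((v_rest - v) / tau + current)
        if v >= threshold:
            spike_times.append(i)
            v = v_rest
        trace.append(v)
    return trace, spike_times

# A weak steady input never spikes, yet still carries a graded signal
# (the trace settles near -66 mV instead of -70 mV):
weak_trace, weak_spikes = simulate([0.2] * 200)
# A stronger input crosses threshold and produces all-or-none events:
strong_trace, strong_spikes = simulate([1.0] * 200)
print(len(weak_spikes), len(strong_spikes) > 0)  # 0 True
```

The weak-input case is the point about graded potentials: plenty of information in the analog trace, zero spikes.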

Retinal processing—the focus of this demonstration—is almost entirely computation by graded potentials, although some amacrine cells do spike.


Absolutely, the brain is the most complicated system in the universe that we are aware of. Something I usually tell people when making comparisons between modern ANNs and biological brains is that there is more computational complexity in a single neuron than in the largest ANN.

All of this doesn't even get into the variation in proteins. For example, AMPA receptors are ion channels, right? One channel is made up of 4 subunits, each of which is a single polypeptide chain folded up. Not only are there multiple different subunits that can be swapped out to change the properties of the ion channel, such as open time, conductance, and ion selectivity, but the subunits can also carry posttranslational modifications, such as phosphorylation, that change these properties too (besides selectivity, IIRC).

All of this allows incredible nuance in how this one channel responds to stimulation, and on top of that we get more or fewer AMPA receptors inserted in the membrane at a synapse. Now multiply all of that nuance by all of the synaptic boutons and other receptor proteins.
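Just to put a toy number on that combinatorics. The composition count is standard stars-and-bars; the modification multiplier below is a made-up illustration, not a biological count:

```python
from math import comb

# An AMPA receptor is a tetramer built from four subunit types
# (GluA1-GluA4). Ignoring how subunits are arranged around the pore,
# the number of distinct compositions is combinations with repetition:
subunit_types, positions = 4, 4
compositions = comb(subunit_types + positions - 1, positions)
print(compositions)  # 35

# Layer on modifications (splice variants, phosphorylation sites, ...):
# even assuming just k independent two-state modifications per channel,
# the variant count multiplies quickly.
k = 3  # assumed, purely illustrative
print(compositions * 2 ** k)  # 280
```

And that's one receptor family at one synapse.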

Adding yet more complexity are G-protein-coupled receptors, which aren't ion channels but cause other, possibly longer-lasting changes inside the neuron, from impacting gene expression or modifying epigenetics to affecting other ion channels or their expression. And this is barely scratching the surface of the overall complexity of one single cell.

We have billions of these fuckin things all interconnected in a complex web, many having thousands of connections that are continuously being modified, created, and destroyed.

And there are many more signaling pathways besides synaptic ones. There's volume transmission, where neurotransmitters essentially diffuse away from the release site to hit receptors on the cell bodies of faraway neurons. We have neuropeptides that can be released from the same synapse as other transmitters, but only under specific conditions, and that can impact near or far cells. We have glia such as astrocytes and microglia that interact with synapses and help with cleanup, regulation, and, of course, more signaling.


Lovely! And a single excitatory postsynaptic density is an amazingly complex aggregate of 500+ different protein species, perhaps hundreds of copies of each type (see Seth Grant's work: www.synaptome.genes2cognition.org).

And yet we typically treat this amazing molecular complex as an “on-off” switch. What a joke of reductionism. A better crude metaphor is that a single synapse is a context-dependent microprocessor.


I think that's a good metaphor where context is the impact of previous states.

Every piece of the system is incredibly dynamic and has a huge degree of nuance that impacts the computation, including within the synapse, the postsynaptic density, and the presynaptic neuron. The amount and content of vesicles, as well as the location and density of voltage-gated calcium channels, can change the content and quantity of the signal sent. And that's before we talk about all of the other presynaptic regulatory proteins.


I do not understand the comment regarding the title. Strikes me as neutral and descriptive. Delighted to see sets of neuromorphic approaches used in more sophisticated compartmental models. And modeling well understood directional selectivity is a rational place to start.


It does sound interesting! But the actual paper is behind a paywall and doesn't seem to have made it onto a certain useful website yet. Do you know how to get access?


> I'm surprised at 0 comments after 11h!

There's so much frothy bullshit in this area that I'm not surprised people aren't paying attention.

This is not a comment on the paper itself, just on the phenomenon you noted.



