Can I float a conceptual conversion by you? I'm a pathologist (I look at human brains regularly and interact with neurologists, neurosurgeons, and neuropathologists) and my undergrad is in physics. I like your general representation. Here's my conversion:

Let's start with a large matrix operation, the kind of thing a neuron in a deep neural net would do. Let's imagine that matrix is a sheet with colored dots instead of numbers.

We don't care so much about the order of the rows, but the direction of the columns holds some meaning. So we can connect the top of the matrix to the bottom, like a tube. Now the left side isn't exactly a beginning, and the right side isn't exactly the end, but there's this sort of polarity.
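Before going on, a tiny sketch of that gluing step (NumPy; the array, its shape, and the tube() helper are my own invention for illustration): making the row index cyclic is what turns the sheet into a tube.

    import numpy as np

    M = np.arange(12).reshape(4, 3)    # a 4-row x 3-column "sheet" of dots

    def tube(M, i, j):
        # Glue the top edge to the bottom: the row index wraps, so
        # walking off the bottom of the sheet puts you back at the top.
        return M[i % M.shape[0], j]

    assert tube(M, 4, 0) == M[0, 0]    # row 4 wraps around to row 0
    assert tube(M, -1, 2) == M[3, 2]   # row -1 is the last row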

These ends of our tube (which was a sheet) are rings, and those rings can be reduced to points, something like Grothendieck's point, with a different perspective in every direction (or in this case, many directions, one per row). But the left point and the right point are still different.

Now I have a polarized bag.

Like a neuron.

I could be silly and imagine that the gates and channels on the neuron's surface map to elements of the matrix, like those colored dots, but I rather doubt the analogy extends quite that far...

And neurons don't absorb many giant tensors and emit one giant tensor. But they do receive their signals at many different points on the surface. So there is this spatial element to it. And there are many different kinds of signals (electrical, neurochemical, all the way to simple biochemical, like glucose). So there's this complexity that an inbound tensor would represent nicely. And they do in fact emit a single signal sent to many neighbors.
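If it helps to make that concrete, here's a rough sketch (NumPy; all the names, shapes, and the tanh squashing are assumptions of mine, not biology): inputs indexed by surface site and signal kind, collapsed into one value that gets sent to every neighbor.

    import numpy as np

    # Toy sizes: sites on the surface, and three signal kinds
    # (say electrical, neurochemical, metabolic).
    n_sites, n_kinds, n_neighbors = 100, 3, 12
    rng = np.random.default_rng(1)
    incoming = rng.random((n_sites, n_kinds))   # the "inbound tensor"

    # Collapse the whole inbound tensor into one outgoing signal...
    out = np.tanh(incoming.sum())

    # ...and emit that single value to many downstream neighbors.
    emitted = np.full(n_neighbors, out)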

Anyway, that's my personal matrix-to-neuron conversion.

Is that sensical?




It feels like the wrong analogy: the large matrix operations are not really what a deep neural network is doing; they're an implementation detail, an artifact of how it's efficient to represent a large number of neurons and their connections.

The results of those tensor operations (not in their totality, but each particular output number) may bear a very rough analogy to the change in a single synapse's "connection strength" caused by various biochemical factors during neuron operation, but the whole tensor operation doesn't map to any biological concept smaller than, e.g., "this is a bit like how a layer of brain cells changes its behavior over weeks or months of learning". A machine learning iteration that updates all of the NN weights corresponds roughly to the changes that appear in our brains over time as a result of experience and learning (and of "normal operation" as well).
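To illustrate the "implementation detail" point, a minimal sketch (NumPy, toy sizes of my own choosing): the one big matrix multiply computes exactly the same thing as each "neuron" independently taking a weighted sum of its inputs.

    import numpy as np

    rng = np.random.default_rng(0)
    n_inputs, n_neurons = 8, 4
    W = rng.normal(size=(n_neurons, n_inputs))   # one row of weights per "neuron"
    x = rng.normal(size=n_inputs)                # shared input signal

    # Vectorized form: one tensor operation for the whole layer.
    layer_out = W @ x

    # "One neuron at a time" form: each unit sums its weighted inputs.
    per_neuron = np.array([w_i @ x for w_i in W])

    assert np.allclose(layer_out, per_neuron)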

I have seen an interesting hypothesis that the particular layout of a dendritic tree and its synapses can encode a complicated de facto "boolean-like" formula over all the "input" synapses (e.g. a particular chain of and/or/not operations on 100+ inputs), instead of essentially adding all the inputs together as most artificial neural networks assume, but I'm not sure if or how such hypothetical "calculations" are actually used in our brains.
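As a toy contrast (pure Python; the particular formula, branch structure, and names are invented for illustration, not taken from the hypothesis): a standard sum-and-threshold unit next to a unit whose dendritic branches behave like an and/or/not formula.

    def sum_unit(inputs, weights, threshold=1.0):
        # Standard ANN-style unit: weighted sum, then threshold.
        return sum(w * x for w, x in zip(weights, inputs)) > threshold

    def dendritic_unit(s):
        # Hypothetical "boolean-like" unit: each branch ANDs its
        # synapses, one input is inhibitory (NOT), the soma ORs branches.
        branch_a = s[0] and s[1]         # coincidence-detecting branch
        branch_b = s[2] and not s[3]     # branch vetoed by an inhibitory input
        return bool(branch_a or branch_b)

    print(sum_unit([1, 1, 0, 0], [0.6, 0.6, 0.6, 0.6]))  # True: 1.2 > 1.0
    print(dendritic_unit([1, 1, 0, 0]))                  # True: branch_a fires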


I imagined the neurons themselves as the "colored dots" in the matrix, but this makes so much more sense... My scaling was way off.



