
It feels like the wrong analogy - the large matrix operations are not really what a deep neural network is doing; they're an implementation detail, an artifact of it being efficient to represent a large number of neurons and their connections that way.
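
(A minimal sketch of that point, assuming NumPy; the sizes and names are made up. The batched matrix multiply produces exactly the same numbers as looping over each neuron's weighted sum - the matrix form is just the efficient packaging.)

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_out = 4, 3                   # illustrative layer sizes
    W = rng.normal(size=(n_out, n_in))   # one row of weights per neuron
    x = rng.normal(size=n_in)            # input activations

    # Per-neuron view: each unit independently sums its weighted inputs.
    per_neuron = np.array([sum(W[i, j] * x[j] for j in range(n_in))
                           for i in range(n_out)])

    # Batched view: the same numbers, produced by one matrix operation.
    batched = W @ x

    assert np.allclose(per_neuron, batched)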

The results of those tensor operations (not in aggregate, but each individual output number) may bear a very rough analogy to the change in a single synapse's "connection strength" caused by various biochemical factors during neuron operation, but the tensor operation as a whole doesn't map to any biological concept smaller than, say, "this is a bit like how a layer of brain cells changes its behavior over weeks or months of learning". A machine learning iteration that updates all of the NN weights corresponds, roughly, to the changes that appear in our brains over time as a result of experience and learning (and of "normal operation" as well).
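
(To make the "one iteration updates every weight at once" part concrete, here's a minimal sketch, assuming a squared-error loss on y = W @ x; the learning rate and data are illustrative, not from any particular training setup.)

    import numpy as np

    rng = np.random.default_rng(1)
    W = rng.normal(size=(3, 4))          # all "synaptic strengths" at once
    x = rng.normal(size=4)
    target = rng.normal(size=3)
    lr = 0.1                             # illustrative learning rate

    # One training iteration: for loss 0.5 * ||W @ x - target||^2,
    # the gradient w.r.t. W is the outer product of the error and x.
    y = W @ x
    grad_W = np.outer(y - target, x)
    W -= lr * grad_W                     # every weight nudged in one step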

I have seen an interesting hypothesis that the particular layout of a dendritic tree and its synapses can encode a complicated de facto "boolean-like" formula over all the "input" synapses (e.g. a particular chain of and/or/not operations on 100+ inputs), instead of essentially adding all the inputs together as most artificial neural networks assume - but I'm not sure whether, or how, such hypothetical "calculations" are actually used in our brains. A toy contrast of the two views is sketched below.
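
(Purely illustrative: the standard artificial neuron thresholds a weighted sum, while the hypothesized dendritic unit would compute something closer to a fixed logical formula over its inputs. The specific formula below is invented to show the shape of the idea, not taken from any neuroscience result.)

    import numpy as np

    def summing_neuron(inputs, weights, threshold=0.5):
        # The standard ANN assumption: add everything up, then threshold.
        return float(np.dot(inputs, weights) > threshold)

    def dendritic_formula(inputs):
        # Hypothetical "boolean-like" unit: the dendritic tree's layout
        # fixes a chain of and/or/not over (binarized) synaptic inputs.
        # This particular formula is made up for illustration.
        a, b, c, d = (bool(v) for v in inputs[:4])
        return float((a and not b) or (c and d))

    x = np.array([1, 0, 1, 1])
    w = np.array([0.2, 0.4, 0.1, 0.3])
    print(summing_neuron(x, w))    # 1.0: weighted sum 0.6 > 0.5
    print(dendritic_formula(x))    # 1.0: (True and not False) or (True and True)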



