I would agree with most of this, but there is no direct analogy between ML's version of a neuron and all of the components of a biological one, which are typically umbrella'd under the name "neuron" and regarded as parts of a single unit.
E.g., if a weight can be a synapse, can't a weight be an axon? Axons also "connect" neurons, but their length is more related to connection strength, so they could be considered more analogous to a "weighting".
Yet axons are not as obtusely "one-to-many" as synapses. Depending on the structure of the ML model, and on which aspect of it the analogy is meant to highlight, either take might be more appropriate.
I suppose it depends on the kind of structure you're working with, and whether you're both training and inferring, or just doing one or the other. In all cases I think a good argument could be made for general neuron-analogy abuse.
Oh, that's interesting. I don't know much about the neuroscience, just enough to agree that a real neuron is vastly more complex than a node in a "neural net". Based on your description, an axon is most closely analogous to the bias term, although it would be a multiplicative bias. I wonder if that's been tried.
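If anyone wants to poke at the idea, a per-neuron multiplicative term is a one-liner on top of an ordinary linear layer. Here's a minimal PyTorch sketch (the GainLinear name and the g parameter are mine, purely illustrative, not a standard API):

```python
import torch
import torch.nn as nn

class GainLinear(nn.Module):
    """Linear layer with a learnable per-output multiplicative 'gain':
    y = g * (W x + b). GainLinear and g are illustrative names only."""
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Initialize the multiplicative term at 1 so the layer starts
        # out behaving exactly like an ordinary nn.Linear.
        self.g = nn.Parameter(torch.ones(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.g * self.linear(x)

layer = GainLinear(8, 4)
y = layer(torch.randn(2, 8))  # shape (2, 4)
```

As it happens, something close to this has been tried: the learnable gain in weight normalization and the affine scale in batch/layer norm both act as per-neuron multiplicative parameters, though they're usually motivated by optimization rather than biology.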