I completely blame my own community, rather than you, for writing this, but as an AI researcher, your comment is terribly painful to read. We have little to no idea how actual neurons (let alone entire brains) really work. The things that are often called "(artificial) neural networks" really shouldn't be called that. I strongly prefer terms like "computational networks" or (where applicable) "recurrent/convolutional networks".
Actually, we know a lot about how neurons work. We've got the biophysical properties down, and we understand neurotransmission at the cellular/molecular level for a lot of different types of neurons. We understand the signal processing by which sound, smell, sight, touch, and taste are transduced into neurochemical signals. We even know a decent amount about the early phases of processing those "raw data" signals into higher levels of abstraction (e.g. edge detection for vision). What we don't understand is the later phases of processing (more advanced layers of abstraction), all the way up to conscious sensation.
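To make the "edge detection" point concrete: early visual processing is often modeled as sliding small oriented filters over the input, which is easy to sketch in code. This is a toy illustration only, using a standard Sobel-style kernel rather than anything biologically realistic:

    import numpy as np

    # Kernel that responds strongly to vertical edges (Sobel-style).
    sobel_x = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]], dtype=float)

    def edge_response(image):
        # Naive "valid" sliding-window filter over a 2D grayscale image.
        h, w = image.shape
        out = np.zeros((h - 2, w - 2))
        for i in range(h - 2):
            for j in range(w - 2):
                out[i, j] = np.sum(image[i:i+3, j:j+3] * sobel_x)
        return out

    # A step edge produces a strong response along the boundary.
    img = np.zeros((8, 8))
    img[:, 4:] = 1.0
    print(edge_response(img))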
> We have little to no idea how actual neurons (let alone entire brains) really work.
I think that slights neuroscience to a fair degree; the field has devoted the past 60 years to answering exactly this question. But I agree that the biomimetic motivations offered up for various flavors of neural net feel pretty bogus. It seems to me that, among the major old-school researchers in the field, only Geoff Hinton still does this.
Fair. I was definitely unnecessarily harsh on neuroscience; my quibble is only with my own community's claims that what we're doing is anything like how the brain works. Thanks to you and sxg for correcting the record.
In a very hand-wavy sense, yes. The same can be said of paths to food by ant colonies. The way that ANNs have been drawn as circles with arrows between them looks like a cartoon version of neurons and synapses, which is the origin of the "neural network" part. The timing of data from hidden node to hidden node, the activation functions, and the hidden node outputs have very little to do with biological neurons. ANNs have more in common with a CPU than a brain.
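To make the cartoon concrete, here is roughly all that a single ANN "neuron" amounts to. A minimal illustrative sketch in plain Python, not any particular library's implementation:

    import math

    def ann_neuron(inputs, weights, bias):
        # An ANN "neuron": a weighted sum of inputs plus a bias,
        # squashed through a fixed activation function (here, a logistic sigmoid).
        # No spike timing, no neurotransmitters, no refractory period.
        z = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-z))

    # Example: three "synapses" feeding one unit.
    print(ann_neuron([0.5, -1.0, 2.0], [0.1, 0.4, -0.3], bias=0.2))

Everything interesting about a real neuron's dynamics is collapsed into a handful of static numbers.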
Wasn't the attempt to model a collection of neurons, their synapses, and the way some connections are reinforced the genesis of artificial neural networks?
That's how the first person brought the concept to life, no? He didn't even have a theoretical explanation of how or why it would work, right?
> ANNs have more in common with a CPU than a brain
Sure, they still run on CPUs, but ANNs are modeled to do what biological NNs do, at least at the levels where experiments have shown that it works.
Sure, some properties of biological NNs don't transfer well to ANNs; someone pointed this out in another comment here, linking an article showing that applying the same kind of signal doesn't work.
But the fact remains: we have been more successful at advancing AI by trying to emulate parts of our brain than we were with other techniques.
We didn't know that this would happen when it all started, but it did.
No one could look at the design of a yet-to-be-implemented ANN and say, out of the blue, that it would work and why. It has all been experimentation, taking the brain as a rough blueprint.
And although many other phenomena from the brain didn't work well in ANNs, neurogenesis apparently did.
It's impressive, IMO, and quite humbling that we are getting so many achievements out of mimicking nature without being 100% sure why it worked in the first place.
I just don't want you to get the wrong impression. This is a single paper about a technique for adding neurons to ANNs over time, and it is only one of many over the last few decades. The paper does not have the evidence to indicate that this is a major breakthrough. The industry as a whole generally does not add neurons to an existing model when updating that model. The vast majority of applications also use backpropagation for training, which is not what our brains use. So even if we ignore the implementation on CPUs, ANNs are still far from behaving similarly, even conceptually, to brains. I must disagree that "we are getting so many achievements out of mimicking nature".
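For context, "adding neurons" in the ANN world usually just means widening a layer's weight matrices and continuing to train with backpropagation. A rough NumPy sketch of that idea; the function name and initialization scheme are my own illustration, not the paper's method:

    import numpy as np

    def grow_hidden_layer(W_in, W_out, n_new, scale=0.01):
        # Hypothetical sketch of "neurogenesis" in an ANN: widen one hidden layer.
        # W_in:  (n_hidden, n_inputs)  weights into the hidden layer
        # W_out: (n_outputs, n_hidden) weights out of the hidden layer
        # New rows/columns start near zero so the existing function is barely
        # perturbed; ordinary backprop then trains them alongside the old weights.
        n_hidden, n_inputs = W_in.shape
        n_outputs, _ = W_out.shape
        new_in = np.random.randn(n_new, n_inputs) * scale
        new_out = np.random.randn(n_outputs, n_new) * scale
        return np.vstack([W_in, new_in]), np.hstack([W_out, new_out])

    W_in, W_out = np.random.randn(4, 3), np.random.randn(2, 4)
    W_in, W_out = grow_hidden_layer(W_in, W_out, n_new=2)
    print(W_in.shape, W_out.shape)  # (6, 3) (2, 6)

Nothing about this resembles how biological neurogenesis integrates new cells into existing circuits; it's bookkeeping on matrices.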
It adds a parameter that influences RNN behavior over time, so I could see it possibly being useful. I would speculate that this could have value for providing slowly updating subsystem information to real-time control systems.
Oh yes. If we want to put a bigger one on the list, there's the whole matter of the vast quantities of circumstantial data that ANNs leave out (sight, sound, past memories, emotions, arousal, touch sensations, etc.). But lists like this can go on for a very long time.