
Wasn't the attempt to model a collection of neurons, their synapses, and the way that some connections get reinforced the genesis of artificial neural networks? That's how the first person brought the concept to life, no? He didn't even have a theoretical explanation for how or why it would work, right?

> ANNs have more in common with a CPU than a brain

How so? Which parts are similar?




Yes, ANNs are inspired by the brain.

Here is a list of properties that ANNs share with CPUs but not with brains (see the sketch below the list):

* Synchronized activation vs. asynchronous / partially synchronous activation

* Digital signals vs. analog signals

* Instantaneous transmission of signals vs. delay imposed by axon and dendrite length

* Uniform signal vs. use of various neurotransmitter signals

* Rapid activation speed (GHz) vs. slow activation speed (Hz)

* The use of negative signals vs. strictly positive quantities of neurotransmitters

* Low average connections (10-1000) vs. high average connections (5,000-100,000)

* Low energy efficiency vs. high energy efficiency

For a detailed essay on the topic, see: http://timdettmers.com/2015/07/27/brain-vs-deep-learning-sin...
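
To make the ANN side of this list concrete, here is a minimal sketch (plain NumPy, purely illustrative): a fully connected layer is one synchronous matrix multiply over signed floating-point values, with no transmission delay and no neurotransmitters involved.

    import numpy as np

    # A single fully connected layer: every "neuron" activates in one
    # synchronized step -- just signed floating-point arithmetic on a CPU.
    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 8))  # weights may be negative, unlike neurotransmitter quantities
    b = np.zeros(4)

    def layer(x):
        # Instantaneous, digital, uniform signal: one matmul plus a nonlinearity.
        return np.maximum(0.0, W @ x + b)  # ReLU

    x = rng.normal(size=8)
    print(layer(x))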


Sure, they are still running on CPUs, but ANNs are modeled to do what biological NNs do, at least at the levels where experiments have shown that it works.

Sure, some properties of biological NNs don't transfer well to ANNs, as someone pointed out in another comment here, linking an article showing that if you apply the same kind of signal it doesn't work.

But the fact remains: we are making more progress in AI by trying to emulate parts of our brain than we did with other techniques.

We didn't know this would happen when it all started, but it did.

No one could look at a model of a not-yet-implemented ANN and say, out of the blue, that it would work and why. It has all been experimentation, taking the brain as a rough blueprint.

And although many other phenomena from the brain didn't carry over well to ANNs, neurogenesis apparently did.

It's impressive, IMO, and quite humbling that we are getting so many achievements out of mimicking nature, even though we aren't 100% sure why it worked in the first place.

That's all I meant to say.


I just don't want you to get the wrong impression. This is a single paper about a technique for adding neurons to ANNs over time, and it is only one of many over the last few decades. The paper does not present evidence that this is a major breakthrough. The industry as a whole generally does not add neurons to an existing model when updating it. The vast majority of applications also use backpropagation for training, which is not what our brains use. So even if we ignore the implementation on CPUs, ANNs are still far from behaving like brains, even conceptually. I must disagree that "we are getting so many achievements out of mimicking nature".
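
For context on what "adding neurons to an existing model over time" means mechanically, here is a hypothetical NumPy sketch (not the paper's algorithm, just the generic idea): growing a hidden layer amounts to appending a row to the incoming weight matrix and a column to the outgoing one, while keeping the old weights.

    import numpy as np

    # Hypothetical sketch of growing a hidden layer by one neuron.
    # This is NOT the paper's method, only the generic mechanics.
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(16, 8))   # hidden_dim x input_dim
    W2 = rng.normal(size=(4, 16))   # output_dim x hidden_dim

    def add_hidden_neuron(W1, W2, scale=0.01):
        new_row = scale * rng.normal(size=(1, W1.shape[1]))  # incoming weights of the new neuron
        new_col = np.zeros((W2.shape[0], 1))                 # starts with zero effect on the output
        return np.vstack([W1, new_row]), np.hstack([W2, new_col])

    W1, W2 = add_hidden_neuron(W1, W2)
    print(W1.shape, W2.shape)  # (17, 8) (4, 17)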


> Instantaneous transmission of signals vs. delay imposed by axon and dendrite length

Would there be anything to gain by simulating this?


It would add another parameter that influences RNN behavior over time, so I could see it possibly being useful. I would speculate that this could have value for providing slowly-updating subsystem information to real-time control systems.
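
As a thought experiment (not an established architecture), the delay could be modeled by letting the recurrent connection read the hidden state from d steps back instead of the previous step:

    import numpy as np

    # Speculative sketch: a recurrent cell whose recurrent input arrives
    # with a fixed delay of `d` time steps, loosely mimicking axonal delay.
    rng = np.random.default_rng(0)
    H, X, d = 8, 4, 3
    W_x = 0.1 * rng.normal(size=(H, X))
    W_h = 0.1 * rng.normal(size=(H, H))

    def run(xs):
        history = [np.zeros(H)] * d          # buffer of past hidden states
        for x in xs:
            delayed_h = history[-d]          # state from d steps ago, not the last one
            history.append(np.tanh(W_x @ x + W_h @ delayed_h))
        return history[d:]

    states = run([rng.normal(size=X) for _ in range(5)])
    print(len(states), states[-1].shape)     # 5 (8,)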


Temporal recurrent neural networks have been tried, I think by Microsoft Research.


Also:

* Local regular structure vs. irregular structure with global elements


Oh yes. If we want to add a bigger one to the list, there's the whole matter of the vast quantities of contextual data that ANNs leave out (sight, sound, past memories, emotions, arousal, touch sensations, etc.). But lists like this can go on for a very long time.



