
I completely blame my own community, rather than you, for writing this, but as an AI researcher, your comment is terribly painful to read. We have little to no idea how actual neurons (let alone entire brains) really work. The things that are often called "(artificial) neural networks" really shouldn't be called that. I strongly prefer terms like "computational networks" or (where applicable) "recurrent/convolutional networks".



Actually we really know a lot about how neurons work. We've got the biophysical properties down, and we understand neurotransmission at the cellular/molecular level for a lot of different types of neurons. We understand signal processing where we transduce sound, smell, sight, touch, taste into neurochemical signals. We even know a decent amount about the early phases of the processing of these "raw data" signals into higher levels of abstraction (e.g. edge detection for vision). What we don't understand is the later phases of processing (advanced layers of abstraction) all the way up to conscious sensation.


We do know a lot. But the knowledge gap is still tremendous when it comes to the details of synaptic plasticity.


> We have little to no idea how actual neurons (let alone entire brains) really work.

I think that slights neuroscience to a fair degree; it has devoted the past 60 years to answering exactly this question. But I agree that the biomimetic motivations offered up for various flavors of neural net feel pretty bogus. It seems to me that, among the major old-school researchers in the field, only Geoff Hinton still does this.


Fair. I was definitely unnecessarily harsh on neuroscience; my quibble is only with my own community's claims that what we're doing is anything like how the brain works. Thanks to you and sxg for correcting the record.


I think that what neuroscience has found out in the last 60 years is that neurons and synapses are more complex than they had ever dreamed.

I could say the same about genetics, btw. Biology has turned out to be incredibly complex.


I meant that it's a bit similar in the way signals are passed on, and in how, over time, neurons come to prefer some connections over others.

This part is a bit similar, no?


In a very hand-wavy sense, yes. The same can be said of paths to food by ant colonies. The way that ANNs have been drawn as circles with arrows between them looks like a cartoon version of neurons and synapses, which is the origin of the "neural network" part. The timing of data from hidden node to hidden node, the activation functions, and the hidden node outputs have very little to do with biological neurons. ANNs have more in common with a CPU than a brain.


Wasn't the attempt to model a collective of neurons, their synapses, and the way that some connections are reinforced the genesis of artificial neural networks? That's how the first person brought that concept to life, no? He didn't even have a theoretical explanation of how or why it would work for anything, right?

> ANNs have more in common with a CPU than a brain

How so? Which parts are similar?


Yes, ANNs are inspired by the brain.

Here is a list of properties that ANNs share with CPUs and that differ from brains (a small sketch after the list illustrates a few of them):

* Synchronized activation vs. asynchronous / partially synchronous activation

* Digital signals vs. analog signals

* Instantaneous transmission of signals vs. delay imposed by axon and dendrite length

* Uniform signal vs. use of various neurotransmitter signals

* Rapid activation speed (GHz) vs. slow activation speed (Hz)

* The use of negative signals vs. strictly positive quantities of neurotransmitters

* Low average connections (10-1,000) vs. high average connections (5,000-100,000)

* Low energy efficiency vs. high energy efficiency

For a detailed essay on the topic, see: http://timdettmers.com/2015/07/27/brain-vs-deep-learning-sin...
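
To make a few of these differences concrete, here is a minimal NumPy sketch (my own illustration, not from the linked essay) of a single ANN layer: every unit updates at once, signals arrive instantly, and a single signed floating-point value stands in for all neurotransmitter signaling.

    # A minimal sketch (my own, not from the linked essay) of one ANN layer step,
    # illustrating a few items above: synchronized activation, no transmission
    # delay, and a single signed number in place of neurotransmitter signaling.
    import numpy as np

    rng = np.random.default_rng(0)

    x = rng.normal(size=4)            # input "signals": plain signed floats
    W = rng.normal(size=(3, 4))       # connection weights, positive or negative
    b = np.zeros(3)

    # One synchronized step: all three units update simultaneously and instantly.
    pre_activation = W @ x + b
    output = np.maximum(pre_activation, 0.0)   # ReLU; real neurons spike over time
    print(output)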


Sure, they still run on CPUs, but ANNs are modeled after what biological NNs do, at least at the levels where experiments have shown that it works.

Sure, some of the properties of biological NNs do not carry over well to ANNs, as someone pointed out in a comment here with an article showing that applying the same kind of signal doesn't work.

But the fact remains: we have made more progress in AI by trying to emulate parts of our brain than we did with other techniques.

We didn't know that this would happen when it all started, but it did.

Out of the blue, no one could have looked at a model of a yet-to-be-implemented ANN and said whether it would work, and why. It has all been experimentation, taking the brain as a rough blueprint.

And although many other phenomena observed in the brain didn't work well with ANNs, neurogenesis apparently did.

It's impressive, IMO, and quite humbling that we are getting so many achievements from mimicking nature without being 100% sure why it worked in the first place.

That's all I meant to say.


I just don't want you to get the wrong impression. This is a single paper about a technique for adding neurons to ANNs over time, and it is only one of many such techniques over the last few decades. The paper does not present evidence that this is a major breakthrough. The industry as a whole generally does not add neurons to an existing model when updating it. The vast majority of applications also use backpropagation for training, which is not what our brains use. So even if we ignore the implementation on CPUs, ANNs are still far from behaving similarly to brains, even in a conceptual way. I must disagree that "we are getting so many achievements from mimicking nature".
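
For what it's worth, here is a rough sketch (my own illustration, not the paper's method) of what "adding neurons to ANNs over time" can mean mechanically: widening a hidden layer by appending rows and columns to its weight matrices so that the network's current function is preserved until training adjusts the new units.

    # A hedged sketch, not the paper's technique: widen one hidden layer of a
    # two-layer net by adding units whose outgoing weights start at zero, so
    # the network's current output is unchanged until training adjusts them.
    import numpy as np

    rng = np.random.default_rng(1)
    W1 = rng.normal(size=(8, 4))      # input -> hidden (8 hidden units)
    W2 = rng.normal(size=(2, 8))      # hidden -> output

    def widen(W_in, W_out, extra, rng):
        new_in = rng.normal(scale=0.01, size=(extra, W_in.shape[1]))  # small incoming weights
        new_out = np.zeros((W_out.shape[0], extra))                   # zero outgoing weights
        return np.vstack([W_in, new_in]), np.hstack([W_out, new_out])

    W1, W2 = widen(W1, W2, extra=4, rng=rng)
    print(W1.shape, W2.shape)         # (12, 4) (2, 12)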


> Instantaneous transmission of signals vs. delay imposed by axon and dendrite length

Would there be anything to gain by simulating this?


It adds an additional parameter that influences RNN behavior over time, so I could see it possibly being useful. I would speculate that this could have value for providing slowly-updating subsystem information to real-time control systems.
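
Purely as speculation on my part, one way to simulate this would be to give each connection its own delay, so a unit reads the sender's activity from several steps in the past rather than the current step. A rough sketch (the names and setup are my own, not an existing API):

    # Speculative sketch: a small recurrent net where each connection has its
    # own integer delay, standing in for axon/dendrite length. Not an existing
    # library API; just an illustration of the idea.
    import numpy as np

    rng = np.random.default_rng(2)
    n, T = 5, 20
    W = rng.normal(scale=0.3, size=(n, n))
    delay = rng.integers(1, 4, size=(n, n))                    # per-connection delay, in steps
    history = np.zeros((T + delay.max(), n))                   # past activations
    history[:delay.max()] = rng.normal(size=(delay.max(), n))  # seed initial activity

    for t in range(delay.max(), T + delay.max()):
        # unit i sums input from unit j as it was delay[i, j] steps ago
        delayed = history[t - delay, np.arange(n)]   # delayed[i, j] = past activity of j
        history[t] = np.tanh((W * delayed).sum(axis=1))

    print(history[-1])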


Temporal recurrent neural networks have been tried, I think by Microsoft Research.


Also:

* Local, regular structure vs. irregular structure with global elements


Oh yes. If we want to put a bigger one on the list, then there's the whole matter of the vast quantities of circumstantial data that ANNs leave out (sight, sound, past memories, emotions, arousal, touch sensations, etc.). But lists like this can go on for a very long time.


As a brain scientist, it pains me as well.



