
Sure, they are still running on CPUs, but ANNs are nonetheless modeled to do what NNs do, at least at the levels where experiments have shown that it works.

Sure, some properties of NNs do not translate well to ANNs, as someone pointed out in a comment here, linking an article showing that applying the same kind of signal doesn't work.

But the fact remains: we have made more progress in AI by trying to emulate parts of our brain than we did with other techniques.

We didn't know that this would happen when it all started, but it did.

From the outset, no one could look at a model of a yet-to-be-implemented ANN and say whether it would work, or why. It has all been experimentation, taking the brain as a rough blueprint.

And although many other phenomena of the brain didn't carry over well to ANNs, neurogenesis apparently did.

It's impressive IMO, and quite humbling, that we are getting so many achievements out of mimicking nature while not being 100% sure why it works in the first place.

That's all I meant to say.




I just don't want you to get the wrong impression. This is a single paper about a technique for adding neurons to ANNs over time, and it is only one of many over the last few decades. The paper does not have the evidence to indicate that this is a major breakthrough. The industry as a whole generally does not add neurons to an existing model when updating that model. The vast majority of applications also use backpropagation for training, which is not what our brains use. So even if we ignore the implementation on CPUs, ANNs are still far from behaving similarly to brains, even in a conceptual way. I must disagree that "we are getting so many achievements out of mimicking nature".
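
For anyone curious what "adding neurons to an existing model" can look like in practice, here is a minimal sketch in PyTorch (my own illustration, not the paper's method; the helper name grow_hidden and the layer sizes are assumptions for the example). It widens the hidden layer of a small MLP while leaving the network's outputs unchanged, after which training continues with ordinary backpropagation:

    # Minimal sketch (illustration only, not the paper's method) of growing a
    # hidden layer: copy existing weights, append new neurons, and zero their
    # outgoing weights so the grown network computes the same function as before.
    import torch
    import torch.nn as nn

    def grow_hidden(fc1: nn.Linear, fc2: nn.Linear, new_units: int):
        """Return enlarged copies of fc1/fc2 with `new_units` extra hidden neurons."""
        wide1 = nn.Linear(fc1.in_features, fc1.out_features + new_units)
        wide2 = nn.Linear(fc2.in_features + new_units, fc2.out_features)
        with torch.no_grad():
            # Copy the existing parameters into the enlarged layers.
            wide1.weight[: fc1.out_features] = fc1.weight
            wide1.bias[: fc1.out_features] = fc1.bias
            wide2.weight[:, : fc2.in_features] = fc2.weight
            wide2.bias.copy_(fc2.bias)
            # Zero the new neurons' outgoing weights so behaviour is unchanged.
            wide2.weight[:, fc2.in_features:] = 0.0
        return wide1, wide2

    # Usage: grow a 4-neuron hidden layer to 6 neurons; outputs stay identical.
    fc1, fc2 = nn.Linear(8, 4), nn.Linear(4, 3)
    x = torch.randn(2, 8)
    before = fc2(torch.relu(fc1(x)))
    g1, g2 = grow_hidden(fc1, fc2, new_units=2)
    after = g2(torch.relu(g1(x)))
    assert torch.allclose(before, after, atol=1e-6)

Zero-initialising the new outgoing weights keeps the grown network function-preserving, so what the model has already learned isn't disturbed before fine-tuning; the new neurons only start contributing once backpropagation updates those weights.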



