Hacker News

When I was a kid I read a book published in the 1980s called Yes We Have No Neutrons. It covered various episodes in pseudoscience like N-rays, cold fusion, and Freudian psychology. But there was also a chapter on neural networks that did not age very well at all.



The phrase "neural network" is itself a kind of pseudoscience, being neither neural nor a network -- essentially a pseudoscientific description of an algorithm better called "ensembled regression" or something of that kind.

NNs do not model the brain, and have basically nothing to do with it. I'd imagine rereading that chapter would be a good idea.


I'm an academic working at the interface between biological neural networks in neuroscience and artificial neural networks in machine learning. It is true that NNs don't model the brain, but it is not true that they have nothing to do with it. There is a whole fascinating field that compares the two and discusses their commonalities and differences.

But above all, "pseudoscience" is something different and is certainly not the right word to describe this issue of terminology.


> But above all, "pseudoscience" is something different and is certainly not the right word to describe this issue of terminology.

The definition of pseudoscience is "a collection of beliefs or practices mistakenly regarded as being based on scientific method."

Arguably the idea that neural networks reproduce how brains work is pseudoscience.

At the very least, it's a deeply misguided marketing blurb. It's now considered a metaphor, just like the ones driving some corners of the artificial intelligence field.


Pseudoscience is an issue of method, not of wrongly held beliefs. Before Einstein, Newton's law of gravitation was believed to be correct. It was not, but it wasn't "pseudoscience", because it was falsifiable and the theory was correct up to a certain error value. "Wrong" and "pseudoscience" are not synonymous.

Moreover, you're dismissing an entire extremely active research field, which tries to understand up to which point artificial and biological NNs are or aren't similar. It's the field, for example, of Turing laureate Yoshua Bengio.

Neural networks CAN reproduce how the brain works (if you're doing computational neuroscience, in which case what you call NNs is something different, and it's an issue of terminology). Even simple binary NNs were originally conceived to understand the cognitive functions of the brain (McCulloch and Pitts, 1943). The field later diverged (with the advent of backprop), but even today the two have a lot in common, sometimes inadvertently -- for example the representations they learn in navigational tasks (Banino et al., Nature 2018), or in the visual cortex vs. CNNs for computer vision.


This is a terminological question which has nothing to do with whether a textbook in the 80s was right or not in calling neural networks a crockpot theory.

The terminology is now fixed and anyone who has more than a passing acquaintance with these things knows that neural networks have almost nothing to do with biological neurons.


If it's a crockpot theory, it probably just hasn't simmered long enough.


It's from 1997. I've just skim-read the chapter, and it seems entirely correct. Its views here are consistent with my own: NN is just a name for a technique, bearing little relation to the claims made about it in a hysterical press.


Strings, fibers, ropes, and threads do not model textiles. Trees have basically nothing to do with botany but we find their names acceptable. We're programmers, metaphors are our bread and butter.


Ensemble methods have basically nothing to do with neural networks. The output of a NN is not some kind of "average" or "best pick" taken from the outputs of individual neurons. Rather, there are multiple layers each of which performs a kind of generalized multivariate regression on the outputs of the previous layer, and the parameterization for the whole hierarchy of layers is fitted as a whole. Very different.
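A minimal numpy sketch of that point (layer sizes, weights, and activation are made up for illustration): each layer performs a nonlinear multivariate regression on the previous layer's outputs, and in practice the whole parameterization would be fitted jointly by backprop, not layer by layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two layers with hypothetical sizes: 3 inputs -> 4 hidden -> 2 outputs.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # layer 1: regression on the 3 inputs
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)  # layer 2: regression on layer 1's outputs

def forward(x):
    h = np.tanh(W1 @ x + b1)  # generalized (nonlinear) multivariate regression
    return W2 @ h + b2        # the next "regression" consumes the previous layer's outputs

x = rng.normal(size=3)
y = forward(x)
print(y.shape)  # (2,)
```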


NNs with dropout are, trivially, an ensemble. And I think it's not so hard to show that NNs by default meet a criterion like it -- namely, if we have something like batch normalization between the layers, so their outputs are something PMF-like, then each layer is taking an expectation.
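A quick sketch of the dropout-as-ensemble view (sizes, seed, and keep probability are arbitrary): averaging the outputs of many randomly thinned subnetworks agrees with the usual single pass through weights scaled by the keep probability.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(2, 5))
x = rng.normal(size=5)
p = 0.5  # keep probability

# Sample many "thinned" subnetworks (random input masks) and average their outputs.
samples = [(W * (rng.random(5) < p)) @ x for _ in range(20000)]
ensemble_mean = np.mean(samples, axis=0)

# Standard dropout inference scales the weights once instead of averaging.
scaled = (W * p) @ x
print(np.allclose(ensemble_mean, scaled, atol=0.05))  # True
```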

Either way, the technique has absolutely nothing to do with the biological cells we call neurones -- as much as decision trees have to do with forests.

It is metaphorical mumbo-jumbo taken up by a credulous press and repeated in research grant proposals by the present generation of young jobbing PhDs soon to be out of a job.

The whole structure is, as it ever has been, on the verge of a winter brought about by this shysterism. Self-driving cars, due in 2016, are likewise "just around the corner".


I sympathize about the overhyping. I certainly don't know if it's a good idea or not, but if you work for Google, driverless cars are already on the road. https://www.theverge.com/2022/3/30/23002082/waymo-driverless...


I don't know if you can use the word "already" for something that's been nearly here for so long.


> [A]NNs do not model the brain, and have basically nothing to do with it.

ANNs don't yet model the structure of the brain but it seems plausible that they could do in the future as the result of some "convergent evolution".

ANNs have a fair model of individual neurons. Artificial and biological neurons do roughly the same thing when evaluated, but they are connected and trained very differently.
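The point-neuron abstraction being referred to is just a weighted sum followed by a nonlinearity; a sketch with made-up numbers (whether this is a "fair" model of a biological neuron is exactly what's disputed below):

```python
import numpy as np

def artificial_neuron(inputs, weights, bias):
    """Weighted sum of inputs, then a ReLU activation --
    the standard point-neuron abstraction, not a claim about biology."""
    return max(0.0, float(np.dot(weights, inputs) + bias))

# Fires when the weighted input exceeds the threshold set by the bias:
print(artificial_neuron([1.0, 0.5], [0.8, -0.2], -0.3))  # ~0.4
print(artificial_neuron([0.0, 0.0], [0.8, -0.2], -0.3))  # 0.0 (below threshold)
```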

For me it's too much of a coincidence that the two most generally intelligent systems (ANNs and BNNs) are both "linear networks of activation functions".

We have not managed to build general intelligence from any other formalism, and neither has nature.

Viewing ANNs as a poor model of BNNs may be looking at the question backwards. You could say that BNNs are trying desperately hard to model the pure mathematics of ANNs within the confines of biochemistry. The fact that a biological neuron is not exactly a ReLU may say more about the limitations of biology than about the limitations of ReLUs.


> For me it's too much of a coincidence that the two most generally intelligent systems (ANNs and BNNs) are both "linear networks of activation functions".

I am unconvinced that "linear network" appropriately defines BNNs. Can you clarify this?


> Artificial and biological neurons do roughly the same thing when evaluated

They most certainly do not. The idea that biological neurons are a kind of programmable on/off switch is completely wrong. There is plenty of computation happening inside a single neuron.

The fact that we once thought only the interconnect of neurons is important, and not the individual neurons themselves, is actually pretty strange. We know full well that every single cell is capable of computation, as it must react to its internal and external environment to be able to function. Neurons are more specialized for computation than other cells, and of course the biological neural network is still a huge part of animal intelligence, but the simple model where we ignored what was going on inside each neuron should have been seen as extremely unlikely to be the full picture from the start.


You're confused if you think residual neural net motifs don't underlie how human brains reason about analogies, and if you think they don't control vision processing categorization.


Neural networks model the brain via convergent evolution. I don't have a link for this offhand, but I remember seeing several studies that discovered that if you train ANNs on at least some of the same data sources that the developing brain trains on, they develop the same sorts of structures and match to the same features.



