
Thank you for this. I spend a non-trivial amount of time telling people working in AI and machine learning (which I also do) that the brain isn't some parameter optimization machine and that analogies drawn from whatever technology or math people are currently excited about aren't very useful. I wish some neuroscience education, and articles like this one, were part of the ML canon.



I couldn't disagree with you more. The article referenced by GP mistakes the form for the function. Just because a computer uses different technology than human tissue doesn't mean it isn't emulating the same underlying processes that are happening in our bodies.

And even if we don't have the correct algorithms in sight today, there is every reason to believe that whatever processes are occurring in our brains and bodies can indeed be simulated and replicated virtually.

The only way to argue against this idea is to claim that there is some special magical non-material aspect to our existence... which no article or neuroscience education has yet demonstrated.


The comment was about universal Bayesian brains and other things that are, to say the least, quite a stretch. Of course, since our brains are made of physical matter, they must perform computations that other physical matter can perform.

The trap is to think about the brain in terms of things we find impressive, and about things we find impressive as somehow being like brains. Hence the analogies to steam engines, computers and deep learning. And these analogies have always turned out to be silly.


> Just because the computer uses different technology than human tissue, doesn't mean it isn't emulating the same ultimate processes that are happening in our bodies

BUT: I think we are far from it. Very far. In the sense that we won't get to e.g. AGI just by throwing more computing power at the current approaches; we need radically new ones. And I don't see why this view would be opposed to more neuroscience education (as opposed to excitement over cool but still quite limited models), or why it would amount to pretending that there is some "special magical non-material aspect to our existence".

How much can you compress the essential structure and complexity of an intelligent brain? That is an open question, but if in the end you cannot compress it "enough", then the fact that it is also, in theory, a mathematical object has little practical consequence. And on top of that: we already know how to make new ones...


Define intelligent.

Very tiny animals already show what we would consider intelligent behavior. There is no particular reason to believe that evolution has come anywhere close to the minimum size intelligence can be reduced to, as it is optimizing along a large number of other dimensions at the same time, survival being the big one.


>> which no article or neuroscience education has yet demonstrated.

True, but there are some pretty interesting ideas out there. I'm going to have to start putting together a list of articles: from the proof that if we have free will, then so, to some extent, do particles, to the notion that quantum computation may happen in the brain. Not saying I believe these things, but the people behind them are pretty smart.


There is no real evidence that we have free will, and the general "suspicion" in the field is that we don't. Yes, the brain is made of particles, but their arrangement is very particular and very complex, so cognition and everything else the brain does are almost certainly emergent phenomena. Boiling it down to single particles is like trying to reverse engineer a Tesla by focusing on the fact that it contains iron atoms.


> From the proof that if we have free will, so do particles to some extent. To the notion that quantum computation may happen in the brain.

The question remains: what reason do we have to believe that only a living brain, and not a silicon analogue, can tap into those features of reality?


I agree with you that humans are very different from optimization machines in that they have some freedom in what they choose to optimize. Allen Newell made this point a long time ago, back when attempts were being made to describe humans in terms of control theory. That works up to a point, but autonomous behavior needs the faculty to set goals independently of pre-programmed optimization targets as well as of current situational factors. Humans, Newell argued, should be understood as knowledge systems that operate on their representations of the world, but are equally adept at simulating the world in their heads and at creating knowledge beyond their current representations.

The article, however, is rubbish. As a psychologist, I cringed throughout. It is a blur of half-baked ideas and ill-understood controversies from cognitive science. The author manages to write an entire article that is, at its core, about information without ever properly defining information, not to speak of representation. In the sense of Shannon, of course neurons are channels transmitting information. What else would they do?
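To make the Shannon point concrete, here is a toy sketch (mine, not the author's; the function name and the 10% noise figure are invented for illustration): treat a noisy "neuron" as a binary channel and estimate how much of the stimulus survives in its response.

  import numpy as np

  rng = np.random.default_rng(0)

  def mutual_information(x, y):
      # Plug-in estimate of I(X;Y) in bits for two binary arrays.
      joint = np.zeros((2, 2))
      for xi, yi in zip(x, y):
          joint[xi, yi] += 1
      joint /= joint.sum()
      px = joint.sum(axis=1, keepdims=True)   # marginal of the stimulus
      py = joint.sum(axis=0, keepdims=True)   # marginal of the response
      nz = joint > 0
      return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

  stimulus = rng.integers(0, 2, size=100_000)          # one bit per trial
  flips = rng.random(stimulus.size) < 0.1              # the "neuron" flips 10% of bits
  response = np.where(flips, 1 - stimulus, stimulus)   # noisy relay of the stimulus

  print(mutual_information(stimulus, response))        # ~0.53 bits; less than 1 because of the noise

Nothing brain-specific in there, which is the point: "channel transmitting information" is a claim about the statistics of input and output, not about the hardware.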

And of course we can decode that information even from the outside, even down to discrete processing stages during the execution of mental tasks (https://onlinelibrary.wiley.com/doi/epdf/10.1111/ejn.13817). And if there are truly no representations in the brain, as the author states, how do we plan for future events that are far beyond the horizon? And even if you reject all that, there is DNA in the brain that is literally information, and it is expressed (decoded and made into protein) ALL THE TIME.

Regarding cognition, the good Mr. Epstein has not grasped the difference between computers and computability. I don't think anybody is looking for silicon in the brain. The smart people are asking how it is possible for a complex system to operate in a complex world without an outside unit directing its behavior. They ask "How can the human mind occur in the physical universe?" (http://act-r.psy.cmu.edu/?post_type=publications&p=14305). How is it that we can do the things we do? How do we set goals, plan steps to achieve them, and choose the right actions for implementation?

I get where you are coming from, and I agree with you regarding a dangerous misunderstanding of AI, especially ML. But this article is not helping put things in perspective. I am willing, however, to concede one point to Mr. Epstein: his brain is dearly lacking information, representation, algorithms, or any other marker usually signifying intelligent life.


The article is bad, but the point about silly analogies to various technologies still stands.

Regarding the questions you addressed, my suspicion is that the brain's primary trick is to model the organism and the environment. Planning ahead, reasoning and synthesizing knowledge can all happen if you can do that. I'd argue (and of course I'm biased) that, in that light, control theory is probably a better place to start thinking about the brain, insofar as building models of the world is what matters.
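For what it's worth, here is a minimal sketch of what I mean (everything here is invented for illustration; nobody claims the brain literally runs this loop): an agent carries a forward model of a one-dimensional world and "plans" by simulating candidate pushes in its head before acting.

  def forward_model(position, velocity, push):
      # Internal model of the world: one step of point-mass physics with friction.
      velocity = 0.9 * velocity + push
      return position + velocity, velocity

  def plan(position, velocity, goal, horizon=50):
      # Imagine one push, then imagine coasting; pick the push that ends nearest the goal.
      best_push, best_error = 0.0, float("inf")
      for push in (-1.0, -0.5, 0.0, 0.5, 1.0):
          p, v = forward_model(position, velocity, push)   # simulated, not executed
          for _ in range(horizon):
              p, v = forward_model(p, v, 0.0)
          if abs(p - goal) < best_error:
              best_push, best_error = push, abs(p - goal)
      return best_push

  pos, vel, goal = 0.0, 0.0, 10.0
  for _ in range(30):                            # act in the "real" world, replanning each step
      push = plan(pos, vel, goal)
      pos, vel = forward_model(pos, vel, push)   # in this toy, the model and the world coincide
  print(round(pos, 2))                           # ends up around 9.6, close to the goal

Planning ahead here is nothing more than running the same model offline, which is roughly the control-theoretic reading of "simulating the world in your head".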


Information processing as a basis for human existence is not an analogy once you accept a very basic premise of what information is. It is the literal description of what is going on, even at the biological level. I've mentioned DNA; the immune system is another example.

If you want to be successful in a complex world (survive, replicate), you will profit massively from knowing what is going on around you better than that other thing that wants to eat you does. If you can grasp the structure of the physical world and predict its changes, you will come out on top. Information processing is an evolutionary necessity because we are grounded in a physical world. Information is the successful way to deal with the world because it gives the organism a choice.

Control theory is great if you want to describe real-valued inputs and outputs and their relationship over time. Like throwing a ball. But at some point we need to become discrete and abstract the real-valued domain of space and time into symbols.
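As a toy version of that hand-off (my example, not a claim about how the brain does it): take the real-valued trajectory of a thrown ball and abstract it into a handful of symbols that a discrete reasoner could work with.

  import numpy as np

  dt, g = 0.05, 9.81
  t = np.arange(0.0, 2.5, dt)
  height = 10.0 * t - 0.5 * g * t**2    # real-valued domain: height over time

  def symbolize(h, dh):
      # Collapse continuous state into a symbol.
      if h <= 0:
          return "on_ground"
      return "rising" if dh > 0 else "falling"

  symbols = [symbolize(h, dh) for h, dh in zip(height, np.gradient(height, dt))]
  summary = [s for i, s in enumerate(symbols) if i == 0 or s != symbols[i - 1]]
  print(summary)   # ['on_ground', 'rising', 'falling', 'on_ground']

The control-theoretic description handles the continuous trajectory; the symbolic summary is what you hand to whatever does the discrete reasoning.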



