Theories are now emerging that the Universe is running one large Bayesian learning algorithm (Bayesian inference itself is provably an optimal method of knowledge creation).
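For concreteness, "Bayesian inference" here just means updating a prior belief with evidence via Bayes' rule. A toy sketch of that update, with made-up hypotheses and numbers purely for illustration:

```python
# Toy Bayes-rule update: posterior is proportional to likelihood x prior.
# The hypotheses and probabilities are made up purely for illustration.
prior = {"fair_coin": 0.5, "biased_coin": 0.5}              # P(H)
likelihood_heads = {"fair_coin": 0.5, "biased_coin": 0.9}   # P(heads | H)

# Observe one "heads", then normalize over hypotheses.
unnormalized = {h: likelihood_heads[h] * prior[h] for h in prior}
evidence = sum(unnormalized.values())                       # P(heads)
posterior = {h: p / evidence for h, p in unnormalized.items()}

print(posterior)  # {'fair_coin': ~0.36, 'biased_coin': ~0.64}
```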
Argh, that article! Those are metaphors at some level of representation!!! You could argue in the same specious way that a computer is not a computer because it is really a bunch of atoms interacting via non-deterministic quantum-mechanical rules, so it can't really implement deterministic algorithms and error-free information storage.
The three conditions stated in the article are basically the setup for reinforcement learning (which can be implemented, at some level of abstraction, on a computer), and the question of whether a representation is required is a mathematical one. For linear systems with Gaussian noise, optimal control is a solved problem: you optimally estimate the state, then you base your controller on that optimal estimate. For more complicated systems it is unclear whether a representation is required, but it certainly seems reasonable that some level of representation is. In the baseball example they are still talking about keeping a constant optical line, not about what happens to raw optic-nerve inputs. That is a reduced-dimensionality representation!
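To make the "estimate, then control" point concrete, here is a minimal sketch of that separation for a scalar linear system with Gaussian noise: a Kalman filter produces the state estimate, and an LQR-style gain acts on that estimate rather than on the raw measurement. All constants are illustrative and nothing here comes from the article:

```python
import numpy as np

# Scalar linear-Gaussian system: x' = a*x + b*u + w,  y = c*x + v.
# Separation principle sketch: controller gain depends only on the model
# and the costs; the estimator depends only on the noise statistics.
a, b, c = 0.95, 1.0, 1.0      # dynamics and measurement coefficients
q_w, r_v = 0.1, 0.5           # process / measurement noise variances
Q, R = 1.0, 0.1               # LQR state / control costs

# Controller gain K from the Riccati recursion.
P = Q
for _ in range(500):
    K = (a * b * P) / (R + b * b * P)
    P = Q + a * a * P - a * b * P * K

rng = np.random.default_rng(0)
x, x_hat, S = 3.0, 0.0, 1.0   # true state, estimate, estimate variance
for t in range(20):
    u = -K * x_hat                                    # control uses the estimate
    x = a * x + b * u + rng.normal(0, np.sqrt(q_w))   # true (hidden) state evolves
    y = c * x + rng.normal(0, np.sqrt(r_v))           # noisy observation

    # Kalman filter: predict, then correct with the new measurement.
    x_pred = a * x_hat + b * u
    S_pred = a * a * S + q_w
    L = S_pred * c / (c * c * S_pred + r_v)
    x_hat = x_pred + L * (y - c * x_pred)
    S = (1 - L * c) * S_pred

    print(f"t={t:2d}  true x={x:6.2f}  estimate={x_hat:6.2f}")
```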
> You could argue in the same specious way that a computer is not a computer because it is really a bunch of atoms interacting via non-deterministic quantum-mechanical rules, so it can't really implement deterministic algorithms and error-free information storage.
Jaron Lanier argues that computers and computation are cultural. Why give them a special ontological status? Yeah, we can think of the universe as computational or informational. We can also think of it as mathematical, mental, or just whatever physics posits (fields, strings, higher dimensions, etc.). Whatever the case, when someone, like the article in the OP, states that reality is X, an ontological claim is being made. It's metaphysics.
One could instead argue that the world just is itself, and anything we say about it is our human model or best approximation. Which would be a combination of realism (the world itself) and idealism (how we make sense of it). Then it's just a matter of not mistaking the map for the territory. Instead of saying that the brain or the universe is X, we say that X is our best current map for Y. We don't say that London is a map. That would be making a category error.
I think the article's overall thrust is not so much about how these things are represented in neuronal mapping per se, and more that we shouldn't apply the mechanics of computers to organisms.
Less load/process/store of absolute data, and more like natural processes, such as erosion carving rivers. The analogy is of the environment as a "lock", with organisms being the best-fitting "keys" to success in a particular environment.
So the computer analogy is bad because organisms are more a matrix of interactions, feedbacks and responses that work well enough but don't follow a "logical" design. This can be replicated within a computer easily, and the result is evolutionary computation and hardware, genetic algorithms, and evolutionary neural networks. The problem in understanding the output of evolutionary systems is that they're blind to design and respond only to fitness, and therefore create systems so tightly coupled that it's a quest to understand how the model even works.
So the article is suggesting we shouldn't apply human design principles to evolved solutions. Perhaps we need some kind of "messy science" to make sense of it all.
Going forward, machine learning running evolutionary algorithms on neural networks should be able to produce enough incomprehensibility for us to be studying our own inventions for years to come.
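As a toy illustration of that lock-and-key picture, here is a minimal genetic algorithm in the spirit of Dawkins' weasel program. The target string and parameters are arbitrary; nothing is "designed" beyond the fitness function doing the selecting:

```python
import random

# Toy genetic algorithm: no design, only a fitness function (the "lock")
# selecting among blind random variations (the "keys").
TARGET = "methinks it is like a weasel"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    # How many characters already fit the "environment".
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.02):
    return "".join(random.choice(ALPHABET) if random.random() < rate else ch
                   for ch in candidate)

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(200)]
for generation in range(2000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    # Keep the fittest half, refill with mutated copies of the survivors.
    survivors = population[:100]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(100)]

print(generation, repr(population[0]))
```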
Thank you for this. I spend a non-trivial amount of time telling people working in AI and machine learning (which I also do) that the brain isn't some parameter-optimization machine and that analogies from whatever technology or math people are excited about aren't very useful. I wish some neuroscience education and articles like these were part of the ML canon.
I couldn't disagree with you more. The article referenced by GP mistakes the form for the function. Just because the computer uses different technology than human tissue doesn't mean it isn't emulating the same ultimate processes that are happening in our bodies.
And even if we don't have the correct algorithms in sight today, there is every reason to believe that whatever processes are occurring in our brains and bodies can indeed be simulated and replicated virtually.
The only way to argue against this idea is to claim that there is some special magical non-material aspect to our existence... which no article or neuroscience education has yet demonstrated.
The comment was about universal Bayesian brains and other things that are quite a stretch to say the least. Of course, since our brains are made of physical matter, they must perform computations that other physical matter can perform.
The trap is to think about the brain in terms of things we find impressive, and about things we find impressive as being somehow like brains. Hence the analogies to steam engines, computers and deep learning. And these analogies have always turned out to be silly.
> Just because the computer uses different technology than human tissue doesn't mean it isn't emulating the same ultimate processes that are happening in our bodies
BUT: at least I think we are far from it. Very far. In the sense that to get e.g. AGI we don't just need more computing power for the current approaches; we need radically new ones. And I actually don't see why this would be opposed to more neuroscience education instead of to excitement over cool but still quite limited models, or why it would amount to pretending that there is some "special magical non-material aspect to our existence".
How much can you compress the essential structure and complexity of an intelligent brain? It is an open question, but if in the end you cannot compress it "enough", the fact that it is also, in theory, a mathematical object has few practical consequences. And on top of that: we already know how to make new ones...
Very tiny animal life shows what we would consider intelligent behavior. There is no particular reason to believe that evolution has come anywhere close to the minimum size that intelligence can be reduced to, as it is optimizing along a large number of other dimensions at the same time, survival being the big one.
>> which no article or neuroscience education has yet demonstrated.
True, but there are some pretty interesting ideas out there. I'm going to have to start putting together a list of articles: from the proof that if we have free will, then to some extent so do particles, to the notion that quantum computation may happen in the brain. Not saying I believe these things, but the people behind them are pretty smart.
There is no real evidence that we have free will, and the general "suspicion" in the field is that we don't. Yes, the brain is made of particles, but their arrangement is very particular and very complex, so cognition and everything else the brain does is almost certainly an emergent phenomenon. Boiling it down to single particles is like trying to reverse-engineer a Tesla by focusing on the fact that it contains iron atoms.
I agree with you that humans are very different from optimization machines in that they have some freedom in what they choose to optimize. Allen Newell made this point a long time ago, back when attempts were being made to describe humans in terms of control theory. It works up to a point, but autonomous behavior needs the faculty to set goals independently of pre-programmed optimization points as well as of current situational factors. Humans, Newell argued, should be understood as knowledge systems that operate on their representations of the world, but are equally adept at simulating the world in their heads and at creating knowledge beyond their current representations.
The article, however, is rubbish. As a psychologist, I cringed throughout. It is a blur of half-baked ideas and ill-understood controversies from cognitive science. The author manages to write an entire article with information at its core without ever properly defining information, not to speak of representation. In the sense of Shannon, of course neurons are channels transmitting information. What else would they do?
And of course we can decode that information even from the outside, even down to discrete processing stages during the execution of mental tasks (https://onlinelibrary.wiley.com/doi/epdf/10.1111/ejn.13817). And if there are truly no representations in the brain, as the author states, how do we plan for future events that are far beyond the horizon? And even if you reject all that, there is DNA in the brain that is literally information and is expressed (decoded and made into protein) ALL THE TIME.
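To spell out what "channel in the sense of Shannon" means, here is a toy calculation that treats a neuron as a binary symmetric channel: it relays a spike (1) or silence (0) and corrupts the symbol with some probability. The numbers are made up for illustration only:

```python
import math

p_input = 0.5     # P(stimulus present) -- illustrative
p_flip = 0.1      # probability the channel corrupts the symbol

def h(p):
    # Binary entropy in bits.
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# For a binary symmetric channel, I(X;Y) = H(Y) - H(Y|X) = H(Y) - h(p_flip).
p_output = p_input * (1 - p_flip) + (1 - p_input) * p_flip
mutual_information = h(p_output) - h(p_flip)

print(f"{mutual_information:.3f} bits per symbol")  # ~0.531 bits
```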
Regarding cognition, the good Mr. Epstein has not grasped the difference between computers and computability. I don't think anybody is looking for silicon in the brain. The smart people are asking how it is possible for a complex system to operate in a complex world without an outside unit directing its behavior. They ask "How can the human mind occur in the physical universe?" (http://act-r.psy.cmu.edu/?post_type=publications&p=14305). How is it that we can do the things we do? How do we set goals, plan steps to achieve them, and choose the right actions for implementation?
I get where you are coming from, and I agree with you regarding a dangerous misunderstanding of AI, especially ML. But this article is not helping put things in perspective. I am willing, however, to concede one point to Mr. Epstein: his brain is dearly lacking information, representation, algorithms, or any other marker usually signifying intelligent life.
The article is bad, but the point about silly analogies to various technologies remains.
Regarding the questions you raised, my suspicion is that the brain's primary trick is to model the organism and the environment. Planning ahead, reasoning and synthesizing knowledge can all happen if you can do that. I'd argue (and of course I'm biased) that, in that light, control theory is probably a better place to start thinking about the brain, insofar as building models of the world is important.
Information processing as the basis for human existence is not an analogy once you accept a very basic premise of what information is. It is the literal description of what is going on, even at the biological level. I've mentioned DNA; the immune system is another example.
If you want to be successful in a complex world, to survive and replicate, you will profit massively from knowing what is going on around you better than that other thing that wants to eat you does. If you can grasp the structure of the physical world and predict its changes, you will come out on top. Information processing is an evolutionary necessity because we are grounded in a physical world. Information is the successful way to deal with the world, because it gives the organism a choice.
Control theory is great if you want to describe real-valued inputs and outputs and their relationship over time. Like throwing a ball. But at some point we need to become discrete and abstract the real-valued domain of space and time into symbols.
> But the IP metaphor is, after all, just another metaphor
With no apologies to the UNIX metaphor of "everything is a file", my favorite starting point for [pretending at] explaining intelligence/understanding/recognition has long been "everything is a metaphor" ;)
Yes, this line of thought is pretty interesting. There is also an equivalence between quantum physics and a kind of machine learning model called the restricted Boltzmann machine, in that they can efficiently simulate each other.
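For anyone who hasn't met them, here is a minimal sketch of what a restricted Boltzmann machine is (arbitrary sizes, random weights, one block-Gibbs step); the efficient-simulation claim itself is of course far beyond a toy like this:

```python
import numpy as np

# Minimal restricted Boltzmann machine: binary visible and hidden units
# joined by a weight matrix, with energy E(v,h) = -v.W.h - a.v - b.h,
# sampled with one step of block Gibbs sampling.
rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 3
W = rng.normal(0, 0.1, size=(n_visible, n_hidden))
a = np.zeros(n_visible)   # visible biases
b = np.zeros(n_hidden)    # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def energy(v, h):
    return -(v @ W @ h) - a @ v - b @ h

# Block Gibbs step: sample hidden given visible, then visible given hidden.
v = rng.integers(0, 2, size=n_visible).astype(float)
h = (rng.random(n_hidden) < sigmoid(v @ W + b)).astype(float)
v_new = (rng.random(n_visible) < sigmoid(W @ h + a)).astype(float)

print("E(v, h) =", energy(v, h))
```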
See e.g. Bayesian Brain and Universal Darwinism