I agree with you that humans are very different from optimization machines in that they have some freedom in what they choose to optimize. Allen Newell made this point a long time ago, back when attempts were made to describe humans in terms of control theory. That works up to a point, but autonomous behavior requires the faculty to set goals independently of pre-programmed optimization targets and of current situational factors. Humans, Newell argued, should be understood as knowledge systems that operate on their representations of the world, are equally adept at simulating the world in their heads, and can create knowledge beyond their current representations.
The article, however, is rubbish. As a psychologist, I cringed throughout. It is a blur of half-baked ideas and ill-understood controversies from cognitive science. The author manages to write an entire article with information at its core without ever properly defining information, let alone representation. In Shannon's sense, of course neurons are channels transmitting information. What else would they do?
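To make the Shannon point concrete, here is a toy sketch (my own numbers and my own toy model, nothing to do with real neurons): treat a "neuron" as a noisy binary channel and compute the mutual information between stimulus and response in bits.

```python
import numpy as np

# Toy model (an assumption for illustration, not from the article):
# a neuron as a noisy binary channel. Stimulus S is 0/1 with equal
# probability; the neuron "spikes" (R=1) with probability 0.9 when
# S=1 and 0.1 when S=0.
p_s = np.array([0.5, 0.5])                  # P(S)
p_r_given_s = np.array([[0.9, 0.1],         # P(R | S=0)
                        [0.1, 0.9]])        # P(R | S=1)

p_joint = p_s[:, None] * p_r_given_s        # P(S, R)
p_r = p_joint.sum(axis=0)                   # P(R)

# Mutual information I(S;R) = sum p(s,r) * log2( p(s,r) / (p(s)p(r)) )
mi = np.nansum(p_joint * np.log2(p_joint / (p_s[:, None] * p_r)))
print(f"I(S;R) = {mi:.3f} bits")            # ~0.53 bits per response
```

Nonzero mutual information is all it takes for the neuron to be a channel in the technical sense.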
And of course we can decode that information even from the outside, down to the discrete processing stages during the execution of mental tasks (https://onlinelibrary.wiley.com/doi/epdf/10.1111/ejn.13817). And if there are truly no representations in the brain, as the author states, how do we plan for future events that are far beyond the current horizon? Even if you reject all that, there is DNA in the brain that is literally information and is expressed (decoded and made into protein) ALL THE TIME.
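The principle behind that kind of decoding is simple, even if the linked EEG work is far more sophisticated. A toy version (entirely simulated data, a made-up two-stage setup, not the paper's method): distinct internal stages leave distinct activity patterns, so a simple classifier can read the stage back out.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (simulated, not the EEG method of the linked paper): two
# hypothetical processing stages, each with its own mean activity
# pattern across 8 recording channels, plus Gaussian noise per trial.
stage_means = rng.normal(0, 1, size=(2, 8))     # one pattern per stage
labels = rng.integers(0, 2, size=200)           # which stage each trial is in
trials = stage_means[labels] + rng.normal(0, 1.0, size=(200, 8))

# Split into train/test and decode with a nearest-centroid classifier.
train, test = trials[:150], trials[150:]
train_y, test_y = labels[:150], labels[150:]
centroids = np.stack([train[train_y == k].mean(axis=0) for k in (0, 1)])
dists = ((test[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
accuracy = (dists.argmin(axis=1) == test_y).mean()
print(f"decoding accuracy: {accuracy:.2f}")     # well above chance (0.5)
```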
Regarding cognition, the good Mr. Epstein has not grasped the difference between computers and computability. I don't think anybody is looking for silicon in the brain. The smart people are asking how it is possible for a complex system to operate in a complex world without an outside unit directing its behavior. They ask "How can the human mind occur in the physical universe?" (http://act-r.psy.cmu.edu/?post_type=publications&p=14305). How is it that we can do the things we do? How do we set goals, plan the steps to achieve them, and choose the right actions to carry them out?
I get where you are coming from, and I agree with you about a dangerous misunderstanding of AI, especially ML. But this article does not help put things in perspective. I am willing, however, to concede one point to Mr. Epstein: his brain is dearly lacking information, representations, algorithms, or any other marker usually signifying intelligent life.
The article is bad, but the point about silly analogies to various technologies remains.
Regarding the questions you raised, my suspicion is that the brain's primary trick is to model the organism and its environment. Planning ahead, reasoning, and synthesizing knowledge can all happen once you can do that. I'd argue (and of course I'm biased) that, in that light, control theory is probably a better place to start thinking about the brain, insofar as building models of the world is what matters.
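Here is roughly what I mean, as a toy sketch (the dynamics, gains, and goal are all made up for illustration; this is not a claim about actual neural mechanisms): an agent runs an internal forward model to simulate candidate actions before committing to one.

```python
import numpy as np

# Toy "model the environment, then plan by simulation" sketch.
# State: (position, velocity) of a 1-D body; action: a constant push.
def forward_model(state, action, dt=0.1, steps=20):
    """Internal simulation: roll the assumed dynamics forward in time."""
    pos, vel = state
    for _ in range(steps):
        vel += action * dt          # assumed dynamics: push changes velocity
        pos += vel * dt
    return pos

goal = 3.0
state = (0.0, 0.0)
candidate_actions = np.linspace(-2, 2, 41)

# "Planning": mentally simulate each action, pick the predicted best one.
predicted_error = [abs(forward_model(state, a) - goal) for a in candidate_actions]
best = candidate_actions[int(np.argmin(predicted_error))]
print(f"chosen push: {best:.2f} (predicted endpoint "
      f"{forward_model(state, best):.2f}, goal {goal})")
```

The point is the division of labor: the model does the predicting, and the choice of action falls out of comparing predictions.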
Information processing as the basis of human existence is not an analogy once you accept a very basic premise of what information is. It is a literal description of what is going on, even at the biological level. I've mentioned DNA; the immune system is another example.
If you want to be successful in a complex world, to survive and replicate, you will profit massively if you know what is going on around you better than the thing that wants to eat you does. If you can grasp the structure of the physical world and predict its changes, you will come out on top. Information processing is an evolutionary necessity, because we are grounded in a physical world. Information is the successful way to deal with the world, because it gives the organism a choice.
Control theory is great if you want to describe real-valued inputs and outputs and their relationship over time, like throwing a ball. But at some point we need to become discrete and abstract the real-valued domain of space and time into symbols.
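Both levels fit in one toy loop (all values illustrative, not a model of any real system): a continuous proportional controller closes the real-valued loop, and a discrete symbolic readout sits on top of it.

```python
# Continuous level: a proportional control loop with real-valued
# error and command, stepped forward in time.
target = 1.0
position = 0.0
k_p = 0.5                                   # proportional gain

for t in range(30):
    error = target - position               # real-valued error signal
    command = k_p * error                   # proportional control law
    position += command                     # simplistic plant: pos += command

# Discrete level: collapse the continuous state into a symbol.
def symbolize(error, tol=0.05):
    if error > tol:
        return "UNDERSHOOT"
    if error < -tol:
        return "OVERSHOOT"
    return "ON_TARGET"

print(position, symbolize(target - position))   # converges, then "ON_TARGET"
```

The thresholding step is exactly the abstraction I mean: throw away the real-valued detail once it no longer matters and keep only the symbol.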