
Argh, that article! Those are metaphors at some level of representation! You could argue in the same specious way that a computer is not a computer because it is really a bunch of atoms interacting via non-deterministic quantum mechanical rules, so it can't really implement deterministic algorithms and error-free information storage. The three conditions stated in the article are basically the setup for reinforcement learning (which can be implemented at some level of abstraction on a computer), and the question of whether a representation is required is a mathematical one. For linear systems with Gaussian noise, optimal control is a solved problem: you optimally estimate the state, then base your controller on that optimal estimate. For more complicated systems it is unclear whether a representation is required, but it sure seems reasonable that some level of representation is. Even in the baseball example they are still talking about keeping a constant optical line, not about what happens to raw optic nerve inputs. That is a reduced-dimensionality representation!
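To make the linear-Gaussian case concrete, here is a minimal sketch of that separation in Python: a Kalman filter produces the optimal state estimate, and the controller acts on the estimate as if it were the true state. All the constants (dynamics, costs, noise variances) are illustrative values I made up, not anything from the article:

    # Sketch of LQG control for a scalar system; all constants are made up.
    import numpy as np

    rng = np.random.default_rng(0)

    a, b = 1.0, 0.5            # dynamics: x' = a*x + b*u + process noise
    q, r = 1.0, 0.1            # LQR costs on state and control
    w_var, v_var = 0.05, 0.2   # process / measurement noise variances

    # LQR gain via fixed-point iteration on the discrete Riccati equation.
    p = q
    for _ in range(500):
        k = (b * p * a) / (r + b * p * b)
        p = q + a * p * (a - b * k)

    x, x_hat, p_est = 2.0, 0.0, 1.0   # true state, estimate, estimate variance
    for t in range(20):
        u = -k * x_hat                # the controller sees only the estimate
        x = a * x + b * u + rng.normal(0.0, w_var ** 0.5)
        y = x + rng.normal(0.0, v_var ** 0.5)   # noisy measurement

        # Kalman filter: predict, then correct with the measurement.
        x_pred = a * x_hat + b * u
        p_pred = a * p_est * a + w_var
        gain = p_pred / (p_pred + v_var)
        x_hat = x_pred + gain * (y - x_pred)
        p_est = (1.0 - gain) * p_pred

        print(f"t={t:2d}  true x={x:+.3f}  estimate={x_hat:+.3f}")

Note that the controller never touches the raw measurement stream; it works entirely off the filtered estimate, which is exactly the kind of reduced-dimensionality representation I mean.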



> You could argue in the same specious way that a computer is not a computer because it is really a bunch of atoms interacting via non-deterministic quantum mechanical rules, so it can't really implement deterministic algorithms and error-free information storage.

Jaron Lanier argues that computers and computation are cultural. Why give them a special ontological status? Yes, we can think of the universe as computational or informational. We can also think of it as mathematical, mental, or just whatever physics posits (fields, strings, higher dimensions, etc.). Whatever the case, when someone, like the article in the OP, states that reality is X, an ontological claim is being made. It's metaphysics.

One could instead argue that the world just is itself, and that anything we say about it is a human model or best approximation. That would be a combination of realism (the world itself) and idealism (how we make sense of it). Then it's just a matter of not mistaking the map for the territory. Instead of saying that the brain or the universe is X, we say that X is our best current map of Y. We don't say that London is a map; that would be a category error.


I think the article's overall thrust is not so much about how these things are represented in neuronal mapping per se, and more that we shouldn't apply the idea of computer mechanics to organisms.

Less load/process/store of absolute data, and more like natural processes, such as erosion carving out rivers. The analogy is the environment as a "lock" and organisms as the best-fitting "keys" to success in a particular environment.

So the computer analogy is bad because organisms are more a matrix of interactions, feedbacks, and responses that work well enough but don't follow a "logical" design. This can easily be replicated within a computer, and the result is evolutionary computation and hardware, genetic algorithms, and evolutionary neural networks. The problem in understanding the result of evolutionary systems is that they're blind to design and respond only to fitness, and therefore create systems so tightly coupled that it's a quest to understand how the model even works.
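To make that concrete, here's a minimal genetic algorithm sketch in Python. The toy task (evolve a bitstring of all ones) and every parameter are stand-ins I made up; real evolutionary systems face far messier fitness landscapes:

    # A toy genetic algorithm: selection on fitness, no explicit design.
    import random

    random.seed(0)
    GENES, POP, GENERATIONS, MUT_RATE = 32, 50, 200, 0.02

    def fitness(ind):
        # "Environment as lock": the all-ones bitstring is the key.
        return sum(ind)

    def mutate(ind):
        return [g ^ 1 if random.random() < MUT_RATE else g for g in ind]

    def crossover(mom, dad):
        cut = random.randrange(1, GENES)
        return mom[:cut] + dad[cut:]

    pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
    for gen in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == GENES:
            break
        parents = pop[:POP // 2]   # truncation selection: fittest half survives
        pop = parents + [mutate(crossover(random.choice(parents),
                                          random.choice(parents)))
                         for _ in range(POP - len(parents))]

    print(f"generation {gen}: best fitness {fitness(pop[0])}/{GENES}")

Nothing in there designs a solution; selection just keeps whatever happens to fit, and with a less trivial fitness function the winning genomes quickly become exactly the kind of tightly coupled tangle that's hard to reverse-engineer.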

So the article is suggesting we shouldn't apply human design principles to evolved solutions. Perhaps we need some kind of "messy science" to make sense of it all.

Going forward, machine learning running evolutionary algorithms on neural networks should be able to produce sufficient incomprehensibility to keep us studying our own inventions for years to come.



