The brain is not a computer (aeon.co)
131 points by dkucinskas on May 19, 2016 | 161 comments



I don't see it.

- The brain is fuzzier, but it still stores the link between the smell of roses and the look of roses. Just as a probability, like a Bayes network. And you will fall for illusions, just like a Bayes network can be wrong ("for a second I thought this was..." and then more information from the senses falsifies/corrects the probabilities).

- The brain is imperfect at reading from memory, but it still does it. It just uses really good, lossy compression. It loses a lot of detail, but often fills the holes probabilistically. Besides neural nets, in computers this would be: defaults, recovery blocks etc. In part the good compression comes from an additional layer of abstraction, but computers can do this too. A very simple example is the blurry color layers in JPEG.

- We are better at recognizing than recalling, because a highly compressed memory is not enough to recreate the original, but has enough indicators to verify. This is very much like a checksum (a toy sketch below).

What there is, is the bias of the "hardware". Our brains are not good at deterministic iteration; computers struggle with the complexity of forming wisdom, and fuzziness and feedback do not come naturally to them either. But in principle, we are both Turing complete or something :)
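
To make the checksum/compression point concrete, here's a minimal Python sketch (made-up strings and a truncated hash, purely illustrative): the stored digest is far too small to recreate the original experience, yet it's plenty to verify a re-presented one.

    import hashlib

    # "Memory" keeps only a tiny digest of the experience, not the experience itself.
    def memorize(experience: str) -> bytes:
        return hashlib.sha256(experience.encode()).digest()[:4]  # heavy, lossy "compression"

    stored = memorize("the smell and look of roses in the garden")

    # Recall is hopeless: 4 bytes cannot reconstruct the sentence.
    # Recognition is easy: re-derive the digest from a candidate and compare.
    def recognize(candidate: str) -> bool:
        return memorize(candidate) == stored

    print(recognize("the smell and look of roses in the garden"))  # True
    print(recognize("the smell of diesel in a parking lot"))       # False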


Agreed. This article struck me as exceedingly pedantic.

Given the author's background, I'd put money on the idea that this is the result of years of frustration at having his life's work dismissively reduced by casual observers to something like "yeah, it's basically a computer" and thinking they therefore understand all the intricacies of the human brain. I can see how that might grate on a person in his field and trigger a response like this.


It's not even pedantic, it's just wrong. He's ignoring years of neuroscience.

We know that the brain uses signals, and that these signals are composed of codes. We know that to interpret the world, certain information processing must take place. We are learning more about the exact nature of this processing, more about how neurons and even other fundamentals of the body code information and contribute to processing (e.g. chemical processes), but they definitely process information. We even know some of the codes.

On the other hand, if the author wants to argue about the nature of what is information and what is processing... he's going to have a steep hill to climb.


A human being is not a machine just as "a brain" is not a computer. To say that the sum is greater than the individual parts is truly an understatement. I find it sad that scientists are so naive. The brain exists to tell the body what not to do. No machine will ever be able to replace it no matter how much you think you know about it.


Got a few citations on that? Neurons react to stimuli, that's certainly well-established. But codes?


Well, afaik we can't easily record individual neurons inside the brain, although there are some studies towards that goal. Most brain studies work on areas of activity. Even MRI is not that precise, to be frank. But for studying human perception, a technique often used is called microneurography, which involves sticking a probe into the nerve fibers along a known neural pathway and recording the electrical stimulations.

From this we can often find things like:

- frequency of impulses increases as stimulus increases

- different numbers of neurons in a similar area fire simultaneously

Things like that. The wikipedia articles linked by wickedagain contain a good summary. I won't link specific sources, since there are thousands, but here's one for instance:

http://jn.physiology.org/content/85/4/1561

There in fig 6 for example you can see that the frequency of nerve firing increases in the tibial nerve as heat is applied to the paw. (Yes, a lot of information comes from horrible animal studies. It's part of the reason I didn't continue in this domain of work -- although rest assured, this technique is possible with humans, without harm.)

So, do impulses in the perceptual nervous system reflect how brains work, how we think? It's hard to say for sure, but it's clear that information is coded according to certain possibilities related to how neurons work, and this information must be combined and processed somehow to create what we call perceptions. Is it such a stretch to imagine that these perceptions become their own codes, abstract reductions of correlations of perceptual information, that are evaluated, compared, and combined in the much, much more nerve-dense central cortex to produce what we call "thought"?
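
For the curious, here's what a rate code looks like as a toy Python sketch (made-up numbers and a made-up linear rate law, not a model of the tibial nerve data above): stimulus intensity goes in as a spike rate, and an estimate of the stimulus can be read back out from the spike count.

    import random

    # Toy rate code: a stronger stimulus produces more spikes per time window.
    def encode(stimulus, window_ms=1000):
        rate_hz = 5 + 40 * stimulus                       # assumed linear rate law, stimulus in 0..1
        return sum(random.random() < rate_hz / 1000.0 for _ in range(window_ms))

    # Decoding inverts the assumed rate law to estimate the stimulus.
    def decode(spike_count, window_ms=1000):
        rate_hz = spike_count * 1000.0 / window_ms
        return max(0.0, (rate_hz - 5) / 40)

    for s in (0.1, 0.5, 0.9):
        print(s, round(decode(encode(s)), 2))             # noisy estimates track the stimulus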


Not necessarily established, but you'll find some hypotheses here:

https://en.wikipedia.org/wiki/Neural_coding

https://en.wikipedia.org/wiki/Neural_decoding


> Forgive me for this introduction to computing

A further frustration might be with computers themselves. I don't see why an introduction/exposition to computing should be something that needs pardon.


To add on to the image compression theory, I read somewhere (maybe here on HN) that perhaps many image memories are stored as difference from a "nominal" representation. So one might not remember a complete face for example, but rather the features that stood out from a "normal face".
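
If it helps, that idea is just delta encoding against a template. A minimal Python sketch with made-up "face features" (not a claim about how the brain actually stores faces): only the deviations from the nominal face are kept, and recall reconstructs template plus deviations.

    # Store only how a face differs from a "nominal" template (toy feature vector).
    nominal_face = {"eye_distance": 6.2, "nose_length": 4.8, "jaw_width": 12.0}

    def memorize(face, threshold=0.5):
        # Keep only features that stand out from the norm.
        return {k: v - nominal_face[k] for k, v in face.items()
                if abs(v - nominal_face[k]) > threshold}

    def recall(deltas):
        # Reconstruction = template + remembered deviations; unremarkable features
        # come back as the average, which is why recall feels "generic".
        return {k: nominal_face[k] + deltas.get(k, 0.0) for k in nominal_face}

    seen = {"eye_distance": 6.3, "nose_length": 6.5, "jaw_width": 11.9}
    print(memorize(seen))          # only the striking nose is remembered
    print(recall(memorize(seen)))  # nose stands out, the rest comes back as nominal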


To be even more pedantic:

- The brain's network is not very well understood. For example, half your body's neurons are in your cerebellum. There are people who don't have a cerebellum, and they only seem to walk later and have balance issues [0]. That is half of all your neurons, and until you do a scan, people can't tell you are missing them. Alternatively, we have Henry Molaison (known in literature as HM), who could not remember a thing from after his surgery to save his life. All the surgeons took was his hippocampus (and some other stuff) back in 1953, which we then found out was SUPER important for memory [1]. He lived to be 82 and died in 2008, incidentally. Suffice it to say, the neurons, their local network, and the long range network (aka, your whole body) are all very important in terms of how it all works. You can take out a ton and not lose really anything, or you can take out some special ones and never remember a thing again. The brain is not 'fuzzy' and really 'fuzzy' all at the same time. It's like lumping all of China's politics into just their Chairman's ideas; you lose any sense of what is going on.

-The brain's recall system is also not well known. As with HM's life, you take out this special part, and it all goes to shit. However, when HM sang or played music, he could do an entire song or piece, even if it was "In-A-Gadda-Da-Vida"'s length. Also, he remembered shocks to his hand when a surgeon put a joke buzzer in one day; he was hesitant to shake hands the next day. Our memory is not simply 'good' or 'lossy'. Look at PTSD victims, they remember it ALL. Typically, it comes down to the motivations of the person. If you have to remember for the future, like an important fact for a test, or the sound a wolf makes right before it leaps onto you, then you will remember it very well. If you don't need to remember, then you typically do not. Again though, the brain is not well understood and we all have experienced exceptions to this.

-Some people are better at recognizing, some are better at recalling, and we don't know why. People that are face blind [2] are really bad at remembering faces; Brad Pitt says that he is face blind. Some people are really good with faces and really bad with names, some are the opposite. Some are synesthetes [3] and will recall equations or speeches by the taste that they experience when doing it. Some people are savants and can recall an entire phonebook and then draw it, nearly as a carbon copy, from memory [4]. Memory is a fascinating subject and whenever you think you know what is going on, something new pops up and kills the theory.

-Our brains can be very good at deterministic calculations. Any major league ballplayer is very good in the outfield (the D-backs excepted this year) and can very accurately catch a ball hit from 100s of feet away. They may not be good with math, but some of them are much better than any computer [5].

The brain is not well understood at this time. Calling it a computer is obvious nonsense, but it's a good working theory to approach from, especially considering we have no other real ideas.

[0] http://www.iflscience.com/brain/24-year-old-woman-born-witho...

[1] https://en.wikipedia.org/wiki/Henry_Molaison

[2] https://en.wikipedia.org/wiki/Prosopagnosia

[3] https://en.wikipedia.org/wiki/Synesthesia

[4] www.neatorama.com/2008/09/05/10-most-fascinating-savants-in-the-world/

[5] https://en.wikipedia.org/wiki/Daniel_Tammet


Bravo. You can understand that only if you offer an abstract definition of terms like 'computer', 'computation', 'calculation' etc. If you define 'computing' as something these clusters of regular matter we call 'computers' do, then it would exclude animals and other things. But, when you're thinking in abstract terms, then you come to the conclusion that a lot of things in nature are 'computing'.

The same thing could be said about 'intelligence'. Current definitions of intelligence are not abstract definitions (even though the word itself seems like some abstract term, similar to calculation, motion etc.), but rather are labels/descriptions of particular human traits. If a more abstract definition were offered, you would see a lot more things in nature 'becoming' intelligent. So, currently we are in a situation where 'intelligence' is defined in terms of human traits (like being able to speak human language, do human mathematics, drive cars etc), but then at the same time we're looking for intelligent aliens. You can't find 'intelligent' aliens if the term intelligence is defined in such a way as to only ever apply to humans. It's really a parlor trick.

This is an expression of the human desire to feel special - you don't want to be labeled as a computer or a cluster of regular matter - rather you are an 'intelligent' soul (a different substance, not regular matter) made in the image of the most powerful creature in the universe who happens to love you and care about you.


> Current definitions of intelligence are not abstract definitions [..] but rather are labels/descriptions of particular human traits.

Agreed with the rest of what you wrote, but "intelligence" is more or less uniformly defined as the ability to quickly assimilate and apply information. What we try to measure is the speed, accuracy and complexity of that process. We have tried to measure the same process with dogs, parrots, monkeys, and ravens, and we even use comparable scales (e.g. general opinion is that adult dogs can handle the same mental complexity as toddlers).

You are correct that the tests we use incorporate human traits. But that's as much because we are mainly testing humans as because we use humans as our reference.


The definition you've offered is an abstract definition, but it's not the one used in experiments and in everyday life.

What these experiments with animals are designed for is to check which animals can do things in a similar way to humans.

Why are ravens intelligent? Is it because they can quickly process information? No, that is not what experiments are checking. They are labelled as intelligent because they can use tools to unlock their food.

Why are dolphins intelligent? Is it because they process complex information quickly? No, it's because they have large brains and exhibit empathy towards humans plus they seem to be able to communicate with each other.

People don't say "Well, that system can process complex information quickly, therefore it's intelligent". Instead, what you'll hear is "WOW, animal X can use tools, can communicate, organize in groups! They are more intelligent than we thought". Using tools or communicating is just one way of processing information. There are many other ways. For example, an animal might not be able to use tools in a cage to unlock their food, but they could be a terrific fighter in the jungle. Fighting in a jungle is a process that demands quick information processing, but those animals somehow rank below ravens or dogs in intelligence. That is because people aren't using the definition you've provided.

  > You are correct that the tests we use incorporate human traits. But that's as much because we are mainly testing humans as because we use humans as our reference.

There is no need to incorporate human traits, because you've already defined the term in a good way. Why do you need humans for reference? You have the definition, you can provide a measurement for speed (X per second) and you're good to go. You don't use humans to measure the acceleration of objects, so why would you use humans to measure intelligence?


This still leaves the mind-body problem. Got an opinion on that?


Right, I don't see the point of this article. At first I thought it was the yearly "humans have a soul and are better than AI algorithms" story we get.

Of course, a mere year ago the fact that computers couldn't find good moves in Go was an argument. And all the arguments that came before that: voice recognition, reading, chess, games, even counting itself at one point. Of all those things that computers/"information processing" couldn't achieve, we now know: computers have recognized more spoken text, read more books, letters, envelopes and ... than humans ever will. Computers have played more chess games, and certainly have processed more numbers than humans ever did or ever will. Humans probably still have a mild advantage in amount of Go processing, but it's clear that's not going to last much longer either.

This article takes another approach: because the type of processing that happens in our minds differs "so much" from what we do with computers, they must be fundamentally different. The emphasis lies on the human mind being different: humans don't work like computers, not the opposite. That wouldn't work, because computers do work like humans. That's why we build them, and use them. Computers analyse stocks, sell apples and cars and home insurance, they mail letters, they work out what a business should schedule tomorrow, they move their arms to glue soles onto shoes, ...

One might also analyse why computers work so very differently. Why von Neumann? Well, because we don't really know ahead of time what we want computers to do, and even when we do, we want the ability to change it later. We want computers, to a large extent, and maybe even to 100%, to work like humans do, but we can't do that. As illustrated above, we get closer every year. Computers simulate other machines, that's what they do. So what you should compare is not instruction sets versus neurons, but the neural network models those instruction sets simulate versus biological neurons.

And then the differences melt away. Not fully. Not yet. We don't know how. Not fully. Not yet.

But more every year.


The point of this article is the argument that the metaphor that human brains work just like computers is leading people astray in figuring out how human brains actually do work.


He really didn't show where people are struggling due to that supposed handicap. He also didn't show why brains aren't literally performing the task of computing.

I mean shit, if I judge the distance between two points, how can it not be said that I have collected sensory data and computed the result?

All of his examples were pedantic or irrelevant.


When somebody says humans don't compute, I like this picture:

http://cdn.theatlantic.com/assets/media/img/posts/computer_w...

Let's call it "ipython before computers", or "excel before computers" if you must.


>leading people astray in figuring out how human brains actually do work.

In order to make any point remotely like this, he'd have to go talk to some actual neuroscientists, which he very plainly didn't.


Maybe we don't know how human brains actually do work (in my opinion we do, but for the sake of the discussion..) but we know what they do: brains compute.

To use computers as a simile is not so strange.


Brains associate. Compute is too narrow a definition in my opinion.

However, what we still lack completely is any kind of model for autonomy, i.e. how the brain decides what to "compute".


> We want computers, to a large extent, and maybe even to 100%, to work like humans do

http://yosefk.com/blog/ai-problems.html

"We don't build machines in order to raise them and love them; we build them to get work done.

If the thing is even remotely close to "intelligent", you can no longer issue commands; you must explain yourself and ask for something and then it will misunderstand you. Normal for a person, pretty shitty for a machine. Humans have the sacred right to make mistakes. Machines should be working as designed."


One look at the age pyramid of the world, or fertility statistics, will immediately drive home the point that we will in fact build machines to raise them and love them. I mean even in the places that for some reason currently feel comfortable with their birth statistics, there's no denying that the current generation is the last one to be larger than the one that came before it. In most countries, even that is not the case. In the US, GenX is the last one that was larger than the one before it, and only by the teensiest of margins. In Europe, that would have happened in the 80s.

A "child" that requires far less resources will be a product that will be incredibly popular. Not to mention an economic necessity.

And I would even say : 100 billion "small" AIs (scale vaguely comparable to a human mind) is a far preferable situation to one big AI. Both from a survivability standpoint and from an "oh my God it'll kill us all" standpoint.

> "We don't build machines in order to raise them and love them; we build them to get work done.

I would even say, if it's at all possible we'll do just that in the next 10-15 years. If we don't get advanced enough AI, perhaps 20-25 years.

Not a doubt in my mind.


If one doesn't want a living child, why on earth would they need a mechanical one?

(Except for the Japanese of course. Excluding them from this question.)

Seriously, fertility will become a non-issue if we prolong the fertile period of our lives. Imagine having one kid at thirty, and another one at sixty.


I sort of get you; personally I feel the same. But then I walk around here and see people with dogs in baby carriages. Why? Because they could never have a child, they're too old, or they cannot or don't want to spend the money they think it'll require.

The problem does not tend to be that it's physically impossible to have a child.


> Your brain does not process information, retrieve knowledge or store memories.

Next up, an ornithologist tries to argue that birds don't fly, because they have muscles instead of engines and nobody can pinpoint the ailerons or control stick.

What else could a brain possibly be doing? What do you think a brain responding to stimulus and changing as a result of it is, if not a form of information processing and storage? And why would that be fundamentally non-computational just because it didn't look much like how we might hand-write software to do it?


The day before this article appeared, I participated in a talk about consciousness, and the presenter dismissed the functional approaches to explaining consciousness. That is, even if consciousness processes information, it is not an information processor LIKE the computer is. You cannot define a thing by what it does, but only by what it is. The electron is not something that moves through space, but something that has mass, charge, and so on. That is, you define it by its intrinsic properties. Otherwise, you could say consciousness is what made possible this comment here. That'd tell you absolutely nothing.

But there was a biologist who basically said what you're saying: what then is 'walking', don't you define it by what it 'is doing'? And this objection misses the mark because 'walking' is an action, not an entity. On the other hand, the brain, presumed to be responsible for consciousness, is not (not equal to) 'talking', 'walking', 'imagination', 'recognition'. These are faculties, but not the underlying physical/ontological support that makes these faculties possible.

The whole project of explaining consciousness must reveal the underlying substance, be it matter or something else, that makes it possible, and the mechanism by which all the faculties associated with consciousness arise.

And one more thing: the main conundrum of explaining consciousness is 'qualia', that which is experienced by being conscious, what it is like to be conscious.


"You can not define a thing by what it does, but only what it is. The electron is not something that moves through space, but something that has mass, charge, and so on." Actually you know, mass and charge only signify that an electron does certain things in certain circumstances and are thus very much statements about what an electron does. I've thought in the past that what an object does is frankly the only informative description of what said object is, quite in disagreement with the sentiment you've expressed.


This triggered a thought in my head: that homo sapiens are basically "better" consciousness machines, whereas a chimpanzee or dolphin is a slightly less powerful consciousness machine, and so on down to a cockroach, which is a tiny pre-programmed micro-controller.

This would imply some of us are less conscious than others to some degree. However, I now see the crux of the issue you've discussed by asking, "How do I measure consciousness?" Is it binary or is it a gradient (a multi-dimensional gradient)?


I would say a person who rarely stops to think about what it all means, what it's for, any kind of existential question, is not as conscious as somebody who reflects daily on whether or not they've done a good job at not becoming a little bit more insane today.

I'd even argue that people who never ever ponder their existence aren't conscious at all. They simply execute their DNA, nothing more, nothing less. On the other side are the few whose DNA and upbringing have enabled them to act on themselves. I see this as the main difference between conscious and unconscious people: the former take in their surroundings and the voices in their head, draw certain conclusions, and act on those voices accordingly, whereas the latter are happy with the reward and punishment system nature has laid out in our brains.

Then of course, there's everything in between, mixed-and-matched wildly, leading to the gradient that is consciousness (quite literally a gradient, slipping off is very easy, climbing back onboard can be damn near impossible).


If walking is not an entity then how is consciousness an entity? Further, the mass, charge, etc. that we define an electron by are shorthands for describing the way it behaves, which is all we can glean based on observation. You cannot observe the property of mass directly.


Birds don't store flights, they fly. Human brains don't store memories, they remember: they reimagine internal mental states and tell stories.


And you're suggesting they do this without storing information?

Hmm.


The author and several of his sources argue that they do this without internal mental representations of objects.


Which is a distinction without a difference.

"Information" has a precise meaning. It is easy to design an experiment that proves information is being stored within yourself somewhere.

What form it takes hardly matters at all, if we can put bits in and get them back out later, information is being stored, and that is memory.

It's obvious that your immune system also stores information. Nobody would really question that -- it's a necessary precondition for vaccines to work. These are information-theoretic memories too. They are all that's required to show that the body computes.


There are pieces of data that I store in some manner (and that would be easier stored in a computer). I'm thinking about phone numbers, addresses, and such. The fact that I can accurately reproduce those pieces of information on demand has to mean that I'm storing them in some way. Are there literally a string of neurons with numbers stamped into them? Certainly not. Is there a cloud of neurons that (along with other, connected information), connects to the clouds representing "me", the concept of a "phone", image and tactile checksum-like data allowing me to recognize my current mobile phone, and a ton of other information? Absolutely. And somewhere in that mess, there's a strong link to a numerical sequence that "feels right" when I access those other neural clouds of concepts.

There's something in the physical structure of my brain that encodes various data in a probabilistic, fuzzy, emotionally-connected way, and in a way that will be virtually impossible to identify and understand. The information's still stored.
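
That "strong link that feels right when I access the neighbouring concepts" has a classic toy model: a Hopfield-style associative memory. A minimal Python sketch (tiny made-up patterns, not a claim about real neurons): the patterns live only in the connection weights, there is no address, and a noisy cue settles back onto the stored item.

    # Tiny Hopfield-style associative memory: patterns are stored in connection
    # weights, not at addresses, and recall works from a noisy/partial cue.
    import numpy as np

    patterns = np.array([
        [1, -1, 1, -1, 1, -1, 1, -1],   # toy "phone number" pattern
        [1, 1, -1, -1, 1, 1, -1, -1],   # another stored pattern
    ])

    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n   # Hebbian weights
    np.fill_diagonal(W, 0)

    def recall(cue, steps=5):
        state = cue.copy()
        for _ in range(steps):
            state = np.where(W @ state >= 0, 1, -1)  # settle toward a stored pattern
        return state

    noisy = patterns[0].copy()
    noisy[:2] *= -1                      # corrupt two "digits" of the cue
    print(recall(noisy))                 # converges back to patterns[0]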


The comment below yours makes a powerful counter: there's storage that allows faithful reproduction of stored data. To add to it, I lost most of my memory to a brain injury that destroyed physical elements and reduced interconnections. The result is no recall on all sorts of things. That implies those memories were stored in those connections and physical spaces.

The brain has storage for data. It just works differently, is lossy, and connects with other data for compression or reinforcement. This is still storage.


Is that supposed to be evidence for not being a computer? Perhaps someone should introduce the author to Anchor Modeling[1]. Every attribute gets its own table, so you don't have an internal representation of objects, just the base attributes and relations, but you can retrieve a representation on the fly.

1: https://en.wikipedia.org/wiki/Anchor_modeling
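
For flavour, the idea in miniature as Python (hypothetical attribute stores, not Anchor Modeling's actual SQL): no "person" record exists anywhere, only one store per attribute keyed by an anchor id, and the representation is assembled on demand.

    # No object record anywhere -- just one store per attribute, keyed by anchor id.
    names  = {1: "Ada"}
    emails = {1: "ada@example.com"}
    phones = {1: "555-0100"}

    def reconstruct(anchor_id):
        # The "object" only exists at retrieval time, assembled on the fly.
        return {attr: store[anchor_id]
                for attr, store in (("name", names), ("email", emails), ("phone", phones))
                if anchor_id in store}

    print(reconstruct(1))   # {'name': 'Ada', 'email': 'ada@example.com', 'phone': '555-0100'}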


My, the article is misargued mush. It's hard to analyse all 4000 words of it, but to pick on a few points:

>But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.

I have several memories.

>cognitive psychologists will never find a copy of Beethoven’s 5th Symphony in the brain

We can build artificial neural networks at the moment that recognise faces and, no doubt, if you fed them the right inputs, that recognise Beethoven and could probably tell you which symphony was playing. You would also not 'find a copy' of the 5th in the neural network. It is likely that the human network of neurons works in a similar manner to the artificial neural network, with the learnings stored in modifications to synapses and changes in memory values respectively. I mention this rather obvious stuff (there's a toy sketch at the end of this comment) because the author goes on to suggest:

>Meanwhile, vast sums of money are being raised for brain research, based in some cases on faulty ideas and promises that cannot be kept

On the basis of the above kind of semi-arguments. The fact that the Human Brain Project may be a poor use of research funds is kind of a different issue.

>I challenged researchers there to account for intelligent human behaviour without reference to any aspect of the IP metaphor.

If you think of anything we normally regard as intelligence, such as playing chess, it involves taking in some information, such as where the pieces are, and then doing something with it. How that is supposed to prove his point is lost on me.

etc and on for dozens of other wrong implications and twisted logic. Life's too short...

(For a kind of counter argument check out the Recurrent Neural Networks and Inceptionism articles https://news.ycombinator.com/item?id=9584325 and https://news.ycombinator.com/item?id=9736598 for how eerily close the behaviour of artificial neural networks can be to human ones)
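
Here's the "no copy to find" point as a toy sketch in Python (a bare-bones perceptron with made-up numbers, nothing like a real face recogniser): after training, everything it "knows" is two weights and a bias, yet it still recognises new examples of the class.

    # Toy perceptron: after training, the "knowledge" lives in two weights and a
    # bias -- you will not find a copy of any training example inside it.
    samples = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.1, 0.2), 0), ((0.2, 0.1), 0)]
    w, b = [0.0, 0.0], 0.0

    for _ in range(20):                     # perceptron learning rule
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred
            w[0] += 0.1 * err * x1
            w[1] += 0.1 * err * x2
            b += 0.1 * err

    print(w, b)                                              # just three numbers
    print(1 if w[0] * 0.85 + w[1] * 0.95 + b > 0 else 0)     # still classifies an unseen example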


I found it odd how he noted rooting and nursing instincts in infants, then said that we aren't born with rules or software. That seems self-contradictory.


He's saying "rules" and "software" are just metaphors to attempt to explain these instincts. They are a way of saying it.


Well, yes, but he's saying that "rules" and "software" are incorrect metaphors, and that they are a way of saying it wrong. I think that's an important distinction.

What strikes me in the section of the article with the six metaphors is that each one is "true" at a certain level of abstraction, and as time progresses, the abstraction gets thinner and thinner and closer to reality.

The author's claims that I have the biggest problem with are that brains don't store or process information. We clearly do. Our behavior depends at least partly on past experience and our perceptions of current circumstances. When I remember something, do I assert the read and address lines on a NAND chip and latch the data into a register? No, certainly not. Do I trace a weighted graph of neural connections forming a fuzzy cloud of meaning? At the least, that's closer to what I'm doing, and I'd still classify it as information storage and retrieval.

The author doesn't seem like they're attacking the imprecision of a metaphor. They seem like they're rejecting the "brain as a computer" metaphor outright.


This article falls somewhere between a strawman argument and a deliberate misrepresentation of scientific understanding.

> "But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers"

We are born with most (if not all) of these things, and we develop more of them as we grow and learn. Sure, the nomenclature here was chosen for ridicule, but functionally those elements are present.

The brain is not a magical pudding that works in a completely occult and mysterious manner. While we do have a ways to go in deciphering the minutiae of its architecture and operation, the article engages in assertions that run completely counter to neurobiological facts we have already learned:

> "We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not."

This is at best a half-truthy strawman argument. Both our computer technology and the brain's architecture employ different mechanisms by virtue of their underlying implementation. However, the functions performed do converge if you look at what's actually being computed. There are entire fields of both CS and neuro research that are completely based on this overlap.

> "Forgive me for this introduction to computing, but I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms.

Humans, on the other hand, do not – never did, never will. "

This is utter bullshit. Humans totally operate on symbolic representations, decades of neuroscience has shown us that. Humans absolutely do store, process, and retrieve information. Humans possess without a doubt physical memories, just because they're not encoded byte-wise doesn't give you an excuse to make these garbage claims.


To add to this: it is a fallacy to think that, simply because common computers do their computation with a bits-and-bytes approach in hardware and processes in the brain are not done that way, computers would be unable to express computation that comes very close to the processes in the brain:

This is exactly a mapping from one computational calculus (e.g. "brain calculus") to another (e.g. binary logic) with a compiler. While it still might be that the physical processes governing brain functioning somehow elude computational description with digital logic (i.e. such a compiler cannot exist; not sure how this could be the case given the notions of universality attached to Turing machines), surely this would mean we'd have a new class of computational power on our hands.

And to say that brains are capable of more than Turing-complete (ignoring memory constraints) computation seems hard to believe (see also https://en.wikipedia.org/wiki/Digital_physics). Now how to factor true randomness (if it exists in our universe) into this I don't know, and maybe it doesn't even play a role...


We do. Always have. See my main comment to OP on analog computation and its models that were used for NN's. Hint: One wafer (maybe 40 chips) of analog neurals takes almost 400,000 digital cores to emulate.


> "Forgive me for this introduction to computing, but I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms.

Humans, on the other hand, do not – never did, never will."

In addition to neuroscience, going deeper to the atomic level and beyond, humans really do operate on symbolic representations of the world: representations that are stored as states of physical elements. And the algorithm that guides everything we do, would be the laws of physics.


Humans do operate on symbolic representations, but also on concrete data -- and since you cannot think this data right now, because you can only think (or visualise in your mind) the symbolic representations of that same data -- you end up thinking you don't operate on concrete data.


This article is filled with falsehoods, and poorly reasoned/incorrect arguments.

In particular, we can bat it away by citing the fact there are humans who can recite pi to a large number of decimal places (proving, specifically, that they can store and retrieve digital data). And there are humans who can do long multiplication in their head, by following a series of procedures. Also, humans can store, retrieve, and communicate - with near perfect fidelity - image data (http://www.dailymail.co.uk/news/article-1223790/Autistic-art...)

Whatever the specific molecular structure of the brain's representation of the experience and memory of Beethoven's 5th, it is almost certainly not stored in a single neuron, but this hardly prevents talented musicians from playing the 5th from memory.


Are brains Turing complete?

Are all brains Turing complete?

I'm not convinced by your points. Savants are exceptionally uncommon, and there's no evidence most people can learn the skills they have. Many savants are famously bad at every day human skills, so clearly there's a trade-off - at best.

Storing and operating on numbers with arbitrary precision is a completely trivial operation for all but the oldest computers. But the abilities most humans find trivial - exploring an arbitrary environment by body movement (without falling over), throwing and catching things, playing sports, using tools creatively, reproducing and maintaining relationships, communicating using complex natural languages - are huge engineering problems in the digital space, and most aren't anywhere close to being solved definitively.

So let's ask again - how many of these problems can be solved using Turing complete digital systems?

I don't think anyone can honestly say "All of them." Given the state of the art, a more realistic answer is "We just don't know yet."


> to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery

In the book "The Inner Game of Tennis" (fantastic read, more about the brain than tennis), the author explains how, during service, the other player has to respond before the first player has hit the ball, because of simple physics -- if the 2nd player waits for the ball to be hit then he absolutely doesn't have the time to do anything before the ball is on him. (That's probably also true of baseball, although I don't know anything about that).

And so, how does he do it? Nobody really knows, but the current thinking is that, through practice, the player that receives the ball interprets the movements of the hitter and infers where the ball will likely be (with great precision), without actually using much information from the ball itself.

It's possible that, just as elite chess players can play without a board, elite tennis players could play without balls.


Years of research in intuition shows exactly what's happening. That part of the brain builds models between senses and actions. Spots patterns. Turns them into reflexes and instincts. It's not always accurate or rational but operates continuously, instantly, and usually effectively.

In your example, as in my martial arts training, certain body patterns repeat before an action is going to happen. Intuition models the likely consequence (e.g. trajectory). Another pattern says what response to take, given that trajectory, to get a desirable result. A whole set of patterns nicknamed muscle memory turns that thought into muscle impulses and controls them. The strike results.

Intuition 101. The Marine Corps has been using it for decades in realistic drills. Martial arts and other sports too. The more realistic the prep, the more accurate and effective the mental model in the intuitive brain.


He has a very limited definition of a "computer". His description is how modern computers work. But that's not strictly what a computer could be.

When people compare a computer to a brain, they are not saying it's like a modern PC. They are saying it's like a computer in a loose sense. A lot of people don't get that; they think we're comparing it to the thing on the desk. A computer can mean lots of things with many different approaches.

For example "computers do store exact copies of data" - They don't have to? Just our current implementation does.


Some people forget that there's more than one computer architecture. A computer doesn't even have to be digital.


But that's expanding the definition to "Arbitrary processing architecture we haven't even invented yet which really could simulate a human brain, if only we knew what it was and how to build it - which we don't, but that's just an engineering problem."


It's definitely not arbitrary. We have solid mathematical definitions of computing that are not tied to any particular implementation.

Saying "the brain is not a computer" is a shorthand for "the brain is not Turing equivalent".

Now, that could be true. But if it is, it would be the first time anybody has found a physical system in our universe that can't be simulated by a Turing machine. It seems like quite a stretch.


A computer is something that computes. Doesn't really matter how it does it.


And what does it mean for something to compute?


Being able to follow an algorithm, a set of steps (parallel or in series) to reach a result.

The brain has input, goes through various connections, and you end up with a result.


The steps don't even have to be discrete. An analog computer can give a continuously varying response to varying inputs with no sequential operations.


So is fire a computer? It goes through a series of steps to reach a result.


Is fire able to follow an algorithm? I mean, some creative person could probably make a computer out of fire.


Well, what is an algorithm?

Anyway, before we stray too far from the point I was hoping to make, I'll claim that when most people use the word "compute" they mean a particular task performed at the behest of a human being. This sort of definition is problematic if we're ever to attempt to build an artificial brain or to encounter intelligent alien life radically different from our own.

It all falls back to the problem of agency and the philosophical concept of the self.


I think that's a way too restrictive definition of computation.

I'd argue genetic code (messenger RNA) is a program and inputting genes and outputting proteins is a form of computation.
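
In that reading, the codon table is literally a lookup table. A minimal Python sketch (only a tiny subset of the real genetic code): an mRNA string goes in, a chain of amino acids comes out.

    # The genetic code as a lookup table: mRNA "input" is translated codon by
    # codon into a protein "output" (tiny subset of the real table).
    CODONS = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

    def translate(mrna):
        protein = []
        for i in range(0, len(mrna) - 2, 3):
            amino = CODONS.get(mrna[i:i + 3], "?")
            if amino == "STOP":
                break
            protein.append(amino)
        return protein

    print(translate("AUGUUUGGCUAA"))   # ['Met', 'Phe', 'Gly']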


So with that definition, how is fire not computation?


Where is the computation in fire?

BTW, I don't believe in free will. You mentioned earlier that there needs to be some magic somewhere and I don't think there is magic. Occam's razor and all that...


Fire is a process that converts chemical inputs into outputs, producing light and heat. It may involve simpler reactions than protein synthesis, but why does that matter?


> why does that matter?

Maybe that's a defining characteristic - complexity.


Analog computers have a pretty rich history.


As a matter of fact, many computers do not store exact copies of data. The data may be rearranged or compressed on many different levels. The data may be stored in such a way that an exact copy can be retrieved, but it is not necessarily stored as an exact copy.

Moreover, if you include the various checksum algorithms from different vendors as part of "storing", then even if you perform a bit-for-bit copy between different disks, they are not stored exactly the same.
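
A tiny Python illustration of the point (arbitrary example data): what sits in storage is not a copy of the input at all, yet an exact copy can be retrieved from it.

    import zlib

    data = b"the quick brown fox jumps over the lazy dog " * 20
    stored = zlib.compress(data)               # what actually gets written is not a copy
    print(len(data), len(stored))              # stored form is much smaller
    print(zlib.decompress(stored) == data)     # ...yet an exact copy comes back: True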


Conceptual metaphors lead you astray when you take them too far. You can note some similarities these have, but the author cites lots of examples of people assuming that human brains store, retrieve, and process information just in the way that PCs do.


Just because it doesn't store it the way traditional PCs do doesn't mean it's not a computer.


It also never gets into whether the brain is an exception to our laws of physics; if it is not, then it's trivially computable because the laws of physics are. If you're going to argue the brain is not a computer, you need to give plausible reasons for thinking the laws of physics are incomplete (which is what Penrose does, for example).

Any quantum randomness in the brain can be simulated because we know how to generate quantum randomness, then just feed that into a deterministic physics emulator.

To deny that requires new laws of physics.


Hell, even my current implementation doesn't store exact copies -- whenever I use a lossy algorithm.


I find it somewhat amusing that all of the top comments on a story about how the human brain is not a computer seek to analogize the computer to the brain in order to prove that the brain is in fact a computer, while actual experts on the brain have gone into detail on exactly why that isn't the case. Dunning-Kruger much?


I'm just reminded of Kuhn: "To the extent that two scientific schools disagree about what is a problem and what a solution, they will inevitably talk through each other when debating the relative merits of their respective paradigms. ...Both are looking at the world, and what they look at has not changed. But in some areas they see different things, and they see them in different relations one to the other. That is why a law that cannot even be demonstrated to one group of scientists may occasionally seem intuitively obvious to another."


I think the issue is that in our everyday lives, we see people doing things that appear to require collection, storage, retrieval, and processing of information. Honestly, I'd want to see an explanation for how I can do something like drive from home to work without doing some combination of those four things. The article doesn't do that, and in many cases, the examples of human behavior seem to specifically contradict the author's claims.


Do you have sources on this? Because from what I've learned in my cognitive psychology / neuroscience classes and talks, the brain works very much like a processing machine. Incredibly different than the electronic deterministic ones we use day to day, but it's still a processing machine.


Did they ever compare it to an analog computer? Do they even know what one is? Because analog experts have and there's so many similarities. See my main comment to OP for links.


  Essentially, all models are wrong...but some are useful.
    --George Box
https://en.wikipedia.org/wiki/George_E._P._Box


Right. A brain is like a computer, so far as that is a useful model for discovery. A brain is not like a conventional "computer", so far as accepting this model precludes deeper understanding of brain functions that do not fit the model very well.


This bit bothered me, on Epstein's Information Processing metaphor:

> The faulty logic of the IP metaphor is easy enough to state. It is based on a faulty syllogism – one with two reasonable premises and a faulty conclusion. Reasonable premise #1: all computers are capable of behaving intelligently. Reasonable premise #2: all computers are information processors. Faulty conclusion: all entities that are capable of behaving intelligently are information processors.

Premise 1, in my reading, is not reasonable. It would be fallacious to assert that all computers are capable of "behaving intelligently", because we don't have a definition for intelligence, let alone a "universal" intelligence that would be consistent between things, or in any case how an individual thing's intelligence would be tested.

The syllogism is the core of the author's contrarian argument. If Premise #1 of the syllogism doesn't hold, then the core argument of this essay does seem to lose its legs.

Really, a single viewpoint/facet/argument about the brain-as-a-computer metaphor should undoubtedly be insufficient.

More useful would be a set of arguments that compare or contrast how a computer and a human brain work. We can use these to coalesce an understanding of how close the metaphor is to reality.

How do the two receive information? How do the two store information, recall information, delete information, update stored processes, etc? All of these perspectives begin to build a whole picture of the relationship, which would seem more useful than trying to boil it down to a single statement.

I think the author needs to revisit his basic assumptions and walk through the argument again, because I don't think what's laid out in the essay actually supports his conclusions, so confidently espoused in the introduction.


I will quote the great philosopher of mind, Jerry Fodor:

1. The computational theory of mind is the only remotely plausible theory of mind.

2. A remotely plausible theory is better than none at all.

We know we literally process symbols because I can literally read a book, and I can literally write something down.


I'm looking up Anthony Chemero's book "Radical Embodied Cognitive Science", and the first sentence of the preface is:

"Jerry Fodor is my favorite philosopher. I think that Jerry Fodor is wrong about nearly everything.

...My goal is that this book is for non-representational, embodied, ecological psychology what Fodor’s The Language of Thought (1975) was for rationalist, computational psychology."


Jerry Fodor himself was fond of saying, of his ideas with Ernie Lepore, "Absolutely no one agrees with us, which leads me to believe we might be right."


Chemero's book has a long section criticizing Fodor's reasoning in arguing that connectionist networks are not a good model of human thinking.

Looks like there's little skepticism about the power of neural networks here, even if some of the arguments are framed in terms of computationalism. https://en.wikipedia.org/wiki/Connectionism#Connectionism_vs...


I was studying Neural Networks in grad school (advisor was statistician from Bell Labs) at Rutgers and took a bunch of classes with Jerry, so I had these debates on a daily basis.

Unless you summarize Chemero's critique, though, I can't really respond to it.

Fodor didn't claim that connectionist models couldn't encode symbolic manipulation, just that the pertinent activity from the perspective of "thought" is symbolic manipulation. So he (I believe) would have said, maybe the connectionist can explain it or maybe he can't and who cares.

He did care that "concepts" were not statistical entities. They were "atomic" and basically tokens for the language of thought. So, Jerry argued, the concept of "Lion" could not be complex i.e. composed of cat, claws, teeth etc.


I posted a link and very brief summary elsewhere here. https://news.ycombinator.com/item?id=11730489 He discusses Fodor on p. 26 as part of a critique of conceptual/a priori (Hegelian) arguments.


This caused me some indigestion.

I don't consider myself either for or against Fodor's argument but I think the summary does great injustice to his arguments.

Fodor does claim that, for what he describes as language, systematicity and compositionality are essential features. However, the "evidence" he cites isn't from a study. It is primarily from facts about language.

To use one of his favorite examples, take these sentences:

    The cat ate the rat.
    The rat ate the cat.

If you understand sentence 1, you can understand sentence 2, and furthermore, the words "cat", "ate" and "rat" mean the same thing in both sentences.

He takes those facts to be uncontroversial, and he says that that is what he means by systematicity and compositionality.

This "ability comes in clusters" bit is very confusing, and I am not sure what he means by it.

Fodor doesn't care that connectionists don't have good models for symbolic manipulation. He says that connectionist models are only good insofar as they reduce to symbolic manipulation, because symbolic manipulation is the only model we have that demonstrates compositionality and systematicity.


> He takes those facts to be uncontroversial, and he says that that is what he means by systematicity and compositionality.

Right, but Chemero's point is that that premise is not so empirically grounded; it is an a priori assumption.

I am not familiar with this literature, but it's ultimately the same point that he makes against Chomsky's poverty of the stimulus argument (the literature on which I know much better): that it's not an empirically grounded premise, and the evidence for such an a priori argument is weak.


Okay, which premise? Can you be specific about the premise that you believe (that Chemero believes) is not "empirically grounded"?


"The time has come to hit the DELETE key." So, after all, he is using a computer metaphor? The author proceeds with an priori agenda, rather than looking at facts and arriving at conclusions. I can't find a spot where he tries to see if his claims are falseable. His approach is unscientific. To be fair, his essay contains falseability ideas to test against current ideas of intelligence as the capacity of entities to consume and apply information.


A better title: Your Brain is Not an IBM-Compatible PC with 64k of RAM


You're right. I have 128k of RAM.


My brain is not a silicon-based processing unit with a von Neumann architecture. My brain, however, calculates, computes, stores, retrieves...

It's a computer more advanced by evolution than we've built through our understanding of science.


That's his whole point. It doesn't store or retrieve.


Indeed it does. If not, how then can you, "recall" the names of your family upon seeing their faces? You've stored a representation of each face, you've stored their names, and you retrieve that information to maintain the relationship.


Pattern recognition. You see a face, and the visual stimulus refires the same set of neurons, which then fire additional neurons, creating a chain that eventually fires off the set that is connected with the sound associated with that face. This is a pattern of neuron activity, and it's not the same as retrieval.


You're just describing a different way to physically distribute the bits. From a mathematical standpoint it's still very clear that information-theoretic bits are going in and coming back out again.

Or to put it another way, if you really think that isn't computing, you need to argue that this new piece of hardware also isn't computing:

https://cloudplatform.googleblog.com/2016/05/Google-supercha...


Ah, no, it's not just twiddling bits. You can make analogies to that if it makes it easier to describe a process, but all analogies break down. One may create a mathematical model of the processes, but unless that model mimics and predicts the biology so closely as to be indistinguishable from the reality, it's still just a model.

Your second assertion is simply wrong. Asserting that what the brain does isn't the same as what a computer does simply doesn't imply that a computer designed to mimic some aspects of biological neural nets is not a computer.


> twiddling bits

This kind of phrasing implies that you aren't hearing what I'm saying. You're still picturing physical bits that can be twiddled. I'm not making an analogy to any physical bits at all, I'm not even making an analogy.

I'm talking about bits in the information-theoretic sense. If I can send a signal to a person and as a result that person can do better than random at picking an intended symbol out of a set of symbols, then bits of information were conveyed.

If the person can do the same thing at a later point in time, then bits of information were also stored.

No analogies are required, this is literally the definition of "information" and "bits" since 1948.
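
To put an actual number on it, here is the 1948 definition applied to a toy channel in Python (a made-up confusion table for a two-symbol "conversation"): the receiver guesses right more often than chance, so the mutual information is positive and bits were conveyed.

    # Shannon's measure of "did information get through": mutual information,
    # computed from a toy joint distribution of sent symbol x and guessed symbol y.
    from math import log2

    joint = {
        ("A", "A"): 0.4, ("A", "B"): 0.1,    # sender meant A
        ("B", "A"): 0.1, ("B", "B"): 0.4,    # sender meant B
    }
    px = {"A": 0.5, "B": 0.5}                # marginal of what was sent
    py = {"A": 0.5, "B": 0.5}                # marginal of what was guessed

    mi = sum(p * log2(p / (px[x] * py[y])) for (x, y), p in joint.items())
    print(round(mi, 3), "bits per symbol conveyed")   # ~0.278 > 0: information flowed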


I'm hearing what you're saying, and if you want to take the broadest possible definition of a computer, then sure, the human brain is a computer. So is a protractor. Shadows on the ground could be used to "compute" the integration of a curve. But it's pretty clear that this isn't the issue at hand in the OP.


You've just described a way to store, process, and/or retrieve information. A lossy and generative one. We've modelled similar ones on both digital and analog machines.

It not being a conventional implementation doesn't mean what it's doing isn't computation.


I didn't say it wasn't computation. I said it wasn't a computer.


Then it's a device that performs many types of specific computation. The definition of a computer.


That's a pretty broad definition of computer, even more so if you're going to claim that the way the human brain arrives at an answer is "specific computation". Moreover, it's clearly not the definition used by the author of the article.


The author screwed up by considering only digital computers. You keep avoiding my main counterclaim that there are multiple models of computation, that the brain matches one pretty closely, and that it's therefore likely computing and/or a computer.

The evidence I presented of alternative models of computation, neurons, and implementations in those models is here:

https://news.ycombinator.com/item?id=11731697


Then how come that when I see your comment, my brain knows what letter each symbol represents, and knows what each word means, and can construe a meaning out of the sentence?


And is that the defining characteristic of a computer? I don't think it is.


My comments from yesterday's thread in /r/Neuropsychology:

While I understand the frustration with the whole Kurzweil camp, this article reads like it was written by an angry high schooler. It also throws the baby out with the bathwater, in that there are clearly valuable analogies to be drawn from computing, as others have mentioned in this thread. My two biggest disagreements:

- "Your brain does not process information": This statement is too broad. How do I do mental math?

- "Memory is not encoded in neurons [as evidenced by a crude drawing of a dollar bill]": Where then does the crude drawing come from?


> Where then does the crude drawing come from

Thank you for making this point. I am completely unable to follow the author's logic.


This is just another version of the Chinese Room "argument", still based on the same fundamental misunderstanding of what constitutes consciousness and the same erroneous insistence that there is some sort of magic about humans.

The brain and the computer are both pattern-recognizing feedback loops that use input to simulate phenomena.

The computer is still evolving. The human brain much less so.

Humans evolved from animals, so why should computers not be able to evolve from humans?


Fascinating how somebody can say nothing at all with so many words and give the impression he has a clue. He doesn't seem to know how computers or brains work. For example in the baseball catching story, there is no reason why a computer/robot shouldn't be able to use the same heuristic as a human. ("to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery" - why should that be incompatible with information processing?)
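
And there's nothing uncomputable about that heuristic. A minimal Python check of the geometry behind it (made-up launch numbers): viewed from the landing spot, the tangent of the ball's elevation angle rises at a perfectly steady rate, so "move until the rise looks steady" is an information-processing rule a robot could run as easily as a fielder.

    # Geometry behind "keep the ball rising steadily in view": from the landing
    # point, tan(elevation) grows linearly in time; from anywhere else it drifts.
    G, VX, VY = 9.81, 15.0, 20.0          # made-up launch parameters
    T = 2 * VY / G                        # flight time
    landing = VX * T

    def tan_elevation(observer_x, t):
        bx, by = VX * t, VY * t - 0.5 * G * t * t
        return by / (observer_x - bx)

    for t in (0.5, 1.0, 1.5, 2.0):
        at_landing = tan_elevation(landing, t) / t        # constant: G / (2 * VX)
        ten_m_short = tan_elevation(landing - 10, t) / t  # drifts upward over time
        print(round(at_landing, 3), round(ten_m_short, 3))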


There he's talking more about paradigms that require or oppose mental representations that simulate the motion of a ball. The point there is that it's unnecessary to have an internal mental representation simulating the motion of the ball in order to catch it.


That's a funny way to claim brains are not like computers then. After all the usual argument is that computers have no internal representation of anything.

Why would anybody demand that there has to be an internal mental representation of a ball to catch it?


I just came here to make sure the article was receiving a suitable amount of hatred. Everything is as it should be.


One argument of embodied cognition and enactivism is that all our concepts have an embodied basis - even abstract things like math, computation, language, philosophy, etc.

The article mentioned radical enactivism from Anthony Chemero. Another person who's written a lot from this perspective is Dan Hutto: https://uow.academia.edu/DanielDHutto

Here for example is an article describing "Remembering without Stored Contents" https://www.academia.edu/6799100/Remembering_without_Stored_...

and he has a book: Radicalizing Enactivism: Basic Minds without Content. First chapter: https://www.academia.edu/1163887/Radicalizing_Enactivism_Bas...

But for a more general, less radical background on this topic, see also work by Rafael Nunez and others: http://www.cogsci.ucsd.edu/~nunez/web/publications.html

The Embodied Mind: Cognitive Science and Human Experience http://www.amazon.com/Embodied-Mind-Cognitive-Science-Experi...

George Lakoff & Rafael Nunez: Where Mathematics Come From: How The Embodied Mind Brings Mathematics Into Being http://www.amazon.com/Where-Mathematics-Come-Embodied-Brings...

George Lakoff & Mark Johnson: Philosophy in the Flesh http://www.amazon.com/Philosophy-Flesh-Embodied-Challenge-We...

Mark Johnson: The Body in the Mind: The Bodily Basis of Meaning, Imagination, and Reason http://www.amazon.com/Body-Mind-Bodily-Meaning-Imagination/d...

And there are now around 30-40 books on this topic, at least.


The author conflates brains and minds.

Of course brain wetware is not computer hardware, but Minds run on wetware as computation runs on hardware.

He claims we don't use algorithms to catch balls but then describes the heuristic we do use algorithmically.

He attacks von Neumann's nascent speculations that the mind can be usefully modelled digitally but ignores von Neumann's proposal for an alternative type of computation, indeterminate & probabilistic - from the same book, The Computer & the Brain (1958).

The inability of most people to draw from memory as accurately as from life is evidence of compartmentalisation, not of a lack of storage of mental images - as drawing from memory is a skill that can be acquired.

Then he goes all in by proposing hard AI as fact: "Reasonable premise #1: all computers are capable of behaving intelligently"

No doubt minds and brains are a deeper mystery than silicon and software but the author fails to demonstrate why or propose how.

IMHO the roboticist Rodney Brooks was onto something when he proposed that intelligence must be embodied - AI will come from interacting with the world.

The author cites Chemero's work Radical Embodied Cognitive Science, which is very, very well argued and interesting.


Electronic computers as we have them now are different in many aspects from our brains - right. But they are also similar in many ways and surely we can have some metaphor that would fit both.

What he forgets is that "information processor" is also a metaphor with respect to the machines we have on our desks. They don't really process information - they just route some electrons around. But metaphors are useful.


Metaphors are useful, but taking them too literally is like overfitting your model.


So true - and your reply applies to about 90% of the posts here.


I think you're missing some key points

- Good models are important. While the traditional computer model is good at explaining behavior, it doesn't help us ask new questions. No matter how well what we currently know about the brain fits the traditional computer model, if it can't be used to inspire new questions then we need to explore new models. Feynman explains this better [https://youtu.be/NM-zWTU7X-k]. Even though our finite, non-magic brains could fit in a computer, a computer isn't the best metaphor.

- Computers don't act like brains by themselves. Any FUZZY LOGIC compiled and run on a computer is ultimately translated into ordinary functions and data. LOSSY COMPRESSION is actually the application of experiments on perception, and not an inherent property of computers. A jpg is designed to look the same while using less space, but the information from it is not integrated in the same way as in a human. Our MEMORY is not like a computer's. It is shockingly constructive, something that isn't expected with the computer model.

- New programming paradigms might be better models. My beef with the metaphor is that it has very separate memory and computation. I don't think the brain "reads and writes from memory". I think it creates and updates a mesh of functions. Something like FUNCTIONAL PROGRAMMING could be used to model this (a toy sketch follows below). What if instead of looking for a neural hard drive, we treat each neuron as a function? What if generation of a dollar bill is actually OPTIMIZATION of our recognition function? What if we are a PIPELINE of functions, each feeding into the next from retinas to occipital lobe to thalamus to motor cortex to muscles? I would rather see new ideas coming from us than holding on to an outdated metaphor.
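To make the "pipeline of functions" idea concrete, here's a toy functional-programming sketch; the stage names (retina, occipital_lobe, motor_cortex) and numbers are entirely made up and carry no anatomical claim:

    from functools import reduce

    def pipeline(*stages):
        """Compose stages left to right: the output of each feeds the next."""
        return lambda signal: reduce(lambda s, stage: stage(s), stages, signal)

    # Purely illustrative stage functions standing in for neural populations.
    retina         = lambda light: [x * 0.5 for x in light]   # crude transduction
    occipital_lobe = lambda edges: sum(edges)                  # crude feature pooling
    motor_cortex   = lambda drive: "reach" if drive > 1.0 else "rest"

    see_and_act = pipeline(retina, occipital_lobe, motor_cortex)
    print(see_and_act([1.0, 2.0, 0.5]))   # "reach"

In this picture, "updating the mesh" would mean replacing or re-tuning stage functions rather than writing to an addressable memory.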


Regardless of the strength of the article's central assertion, it is interesting to consider how closely coupled our consciousnesses are to the stimuli we are actively receiving from our environment.

To continue with the IP metaphor that the author decries, it's like we only have crude wireframes and physics models in our heads, and we project the stimuli we receive onto those models as best we can in realtime in order to create our conscious experience. Perhaps the sparsity of the "models" our brains contain is what allows us to adapt/reuse them to comprehend such a broad array of phenomena.

The disorientation that people can experience during sensory deprivation or even extreme social isolation is perhaps another indication of how tightly the coherence of our conscious experience is linked to our real-time environmental stimuli.

This all makes one wonder whether we could make better intelligent agents by somehow measuring the "coherence" of an agent's experience and turn that into a positive reinforcement learning signal.


Cited in the article: Radical Embodied Cognitive Science, by Anthony Chemero http://uberty.org/wp-content/uploads/2015/03/Anthony-Chemero...


Some interesting ideas. He points out that cognitive science is young and can be multi-paradigmatic, and is clear that this is non-mainstream cognitive science. It is an attempt to explain cognition without involving appeals to mental gymnastics on mental representations of the world: the world is its own model.

"The term radical embodied cognition is from Andy Clark, who defines it as follows: Thesis of Radical Embodied Cognition[:] Structured, symbolic, representational, and computational views of cognition are mistaken. Embodied cognition is best studied by means of noncomputational and nonrepresentational ideas and explanatory schemes, involving, e.g., the tools of Dynamical Systems theory."

"...antirepresentationalism (which implies anticomputationalism) is the core of radical embodied cognitive science."

There's a long discussion of Randall Beer's 2003 paper "The Dynamics of Active Categorical Perception in an Evolved Model Agent" that uses a continuous time, real-valued neural network (CTRNN). https://www.cs.swarthmore.edu/~meeden/DevelopmentalRobotics/... "Using the model of the CTRNN alone, one can only tell how an instantaneous input will affect a previously inactive network. But because the network is recurrent, the effect of any instantaneous input to the network will be largely determined by the network’s background activity when the input arrives, and that background activity will be determined by a series of prior inputs. This model of the CTRNN, in other words, is informative only if one knows what flow of prior inputs to the neural network typically precedes (and so determines the typical background activity for) a given input. The impact of the visual stimulus is determined by prior stimuli and the behavioral response to those prior stimuli. The model of the CTRNN is useful, that is, only when combined with the models of the whole coupled system and the agent–environment dynamics. These three dynamical systems compose a single tripartite model. ...The models also show that the agent’s "knowledge" does not reside in its evolved nervous system. The ability to categorize the object as a circle or a diamond requires temporally extended movement on the part of the agent, and that movement is driven by the nature and location of the object as well as the nervous system. To do justice to the knowledge, one must describe the agent’s brain, body, and environment. Notice that none of these dynamical models refers to representations in the CTRNN in explaining the agent’s behavior. The explanation is of what the agent does (and might do), not of how it represents the world. This variety of explanation—of the agent acting in the environment and not of the agent as representer—is a common feature of dynamical modeling..."
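For readers who haven't met CTRNNs, here is a minimal sketch of the standard dynamics (tau_i * dy_i/dt = -y_i + sum_j w_ji * sigma(y_j + theta_j) + I_i), with arbitrary made-up weights rather than Beer's evolved agent; it only illustrates the point quoted above that the response to a given input depends on the background activity left by prior inputs:

    import math

    def sigma(x):
        return 1.0 / (1.0 + math.exp(-x))

    def ctrnn_step(y, w, theta, tau, inputs, dt=0.01):
        """One Euler step of tau_i*dy_i/dt = -y_i + sum_j w[j][i]*sigma(y_j+theta_j) + I_i."""
        fire = [sigma(yj + tj) for yj, tj in zip(y, theta)]
        new_y = []
        for i, yi in enumerate(y):
            drive = sum(w[j][i] * fire[j] for j in range(len(y)))
            new_y.append(yi + (dt / tau[i]) * (-yi + drive + inputs[i]))
        return new_y

    # Two mutually excitatory neurons (arbitrary weights) form a bistable circuit.
    w     = [[0.0, 10.0], [10.0, 0.0]]
    theta = [-5.0, -5.0]
    tau   = [1.0, 1.0]

    for prior_input in ([0.0, 0.0], [6.0, 6.0]):       # two different input histories
        y = [0.0, 0.0]
        for _ in range(1000):                          # history shapes background activity
            y = ctrnn_step(y, w, theta, tau, prior_input)
        for _ in range(1000):                          # then the identical "present" input
            y = ctrnn_step(y, w, theta, tau, [0.0, 0.0])
        print(prior_input, "->", [round(v, 2) for v in y])

The identical present-moment input lands on very different background activity depending on the history, so the network ends up in very different states, which is roughly the point the quoted passage makes about needing the whole coupled system to interpret the CTRNN.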


The brain is a computer. A rock, or a cloud, or a lake could also be thought of as computers via computational equivalence. You could say that a biological brain is a special type of computer. But such mystical language does not help us understand the brain, artificial brains, or computing in general.


I feel like the main issue with this article was a failure to adequately define "computer" and "information processing". There were some good points. In particular, the fact that a given experience will likely cause completely different changes in the brains of any two individuals is poignant in the face of Kurzweil's claims. If the claim of the article was "Kurzweil is way too optimistic" then it would have been much better. Instead it works from an entirely too specific and poorly defined understanding of what a computer is to argue a point that is fairly difficult to defend.


This article is like someone went to the "List of Fallacies" page in Wikipedia and turned it into an essay about how the brain is not an information processing system.


The brain does not store an actual image of a dollar bill; it stores only the important information, such as the basic shape of the figure, etc., which is why it is so efficient at conserving space. I can't believe this argument is coming from the former editor of Psychology Today. The rest of the article is filled with bad arguments like "this is silly, obviously".



"The information processing (IP) metaphor of human intelligence now dominates human thinking, both on the street and in the sciences. ...But the IP metaphor is, after all, just another metaphor – a story we tell to make sense of something we don’t actually understand. And like all the metaphors that preceded it, it will certainly be cast aside at some point – either replaced by another metaphor or, in the end, replaced by actual knowledge. ...the IP metaphor is ‘sticky’. It encumbers our thinking with language and ideas that are so powerful we have trouble thinking around them. ...The idea, advanced by several scientists, that specific memories are somehow stored in individual neurons is preposterous...no one really has the slightest idea how the brain changes after we have learned to sing a song or recite a poem. But neither the song nor the poem has been ‘stored’ in it. The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions. ...The mainstream view is that we, like computers, make sense of the world by performing computations on mental representations of it, but Chemero and others describe another way of understanding intelligent behaviour – as a direct interaction between organisms and their world."

RTWT.


Ok. For one, we really still don't know a lot about the brain. Secondly, well...as far as I can tell, the brain is like a computer that adds more and more traces as it goes. The logic is the circuit, no real analogue to software. The hardware is the algorithm, and we can add new hardware. There is storage, but it's fuzzy...with an MTBF that's not well understood. The actions that we take are based on potentials, just like in a circuit with sense amps that aren't perfectly tuned, nor is the switch amp binary....there's a lot of muxing going on (again, that we don't understand). But to say that the brain isn't like a computer is extremely naive. Cells make up the brain, and each cell's logic can be worked out using, well...logic. Putting those cells together into a complex system produces behavior that is obviously complex and hard to understand. Ever tried to understand the behavior of a multi-threaded, multi-node, multi-socket compute system down to the microsecond? It's complex, right? Yeah, we can barely grasp that; give it a few decades and we'll probably figure out a bit more about how the brain works...and how to quantify it.


BRAINS ARE COMPUTERS is a useful conceptual metaphor for many purposes. But metaphors are imperfect. It's not literally true that LIFE IS A JOURNEY or SOCIAL ORGANIZATIONS ARE PLANTS.


In spite of the length of the article, he hardly proves anything in regard to his point or how it could be so.


As others have commented, the author doesn't seem to understand that computers as Information Processors is also a metaphor. Computers just shuffle electric signals around according to the properties of their circuits. Software is just an abstract way of referring to electricity flowing in a certain way inside the computer.

My knowledge of how computers work is fairly shallow and my knowledge of neuroscience is basically none. With that caveat: this answer https://www.quora.com/Is-the-human-brain-analog-or-digital best fits my biases/intuitions. TLDR: The brain has a digital aspect -- neurons fire and send a signal or they do not -- and an analog aspect -- whether or not a neuron fires depends on the chemical(?) conditions in which it's embedded and inside the cell.
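A toy illustration of that analog/digital mix is the textbook leaky integrate-and-fire neuron; the constants below are arbitrary, not taken from the linked answer:

    def lif_neuron(input_current, dt=1.0, tau=20.0, threshold=1.0, v_reset=0.0):
        """Leaky integrate-and-fire: analog membrane voltage, binary spike output."""
        v, spikes = 0.0, []
        for i in input_current:
            v += dt / tau * (-v + i)      # analog: voltage leaks and integrates input
            fired = v >= threshold        # digital: either it crosses threshold or it doesn't
            spikes.append(1 if fired else 0)
            if fired:
                v = v_reset
        return spikes

    # A weak drive never reaches threshold; a stronger one yields a regular spike train.
    print(sum(lif_neuron([0.5] * 200)))   # 0 spikes
    print(sum(lif_neuron([1.5] * 200)))   # several spikes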

This makes sense to me. That being said, I'm not sure whether I actually disagree with the author. If anyone is trying to find the brain's equivalent of a CPU, that's a fool's errand. It also seems likely that digitizing a brain is not possible, certainly not feasible. Digitizing analog systems is always(?) lossy -- the lossiness just doesn't matter with things like music due to the limitations in human hearing.

This doesn't mean we can't create artificial brains, it just means that they can't be digital machines.

My personal, bad, metaphor for memories is that they work somewhat like linked lists. It's easy enough to recite a poem start to finish. Harder to recite it if you start in the middle. Really hard to recite it backwards. Is this true? Who knows. Does it tell us anything about the underlying mechanisms? Not really.


Judging from the comments, there's a massive misunderstanding going on between the readers, the author, and everyone's understanding of ourselves.

There are certainly odd things in this article. What I think the author wants to get at:

We don't store specific information like "on a dollar bill, there's a line going from x of the left side to y of the bottom side with a curvature of z" in a specific container labeled "info about dollar bills", to be read out everytime you interact with one.

Instead, each time we pay using such a bill, the chain of neurons firing from the moment you open your purse to the moment you get your receipt, as a whole, is responsible for your internal representation of a dollar bill: (1) first glimpse of sheets of paper in purse (2) fumble individual sheets to examine more closely (3) recognize key elements like green colour, rectangular, famous bust in the center (4) in case you didn't, now you know you've got cash in there (5) remember the total, do some arithmetic to figure out which bills to hand over (6) extract bills from purse (7) extend hand to hand cash over to cashier (8) wait for receipt/pack things (9) get receipt (10) purchase complete.

This entire chain (which is obviously incomplete for the sake of demonstration), with all the completely unrelated stuff about rummaging in a purse or human interaction, comes together to form what you think of as a dollar bill. This is how we store information, by relating it to stuff we already know.

Which is why it makes sense that babies come into this world with only the bare minimum. We don't pop out ready made, that's the whole point of childhood and adolescence. We build ourselves, unconsciously incorporating our surroundings into our personality, by constantly relating to what we know, comparing new things to old things. This implies that a faulty impression early on has disastrous consequences in whatever is built afterwards, meaning most of it.

(edit) just came up with a nice formulation: the brain doesn't store raw data as in pixels or decimal values, it stores characteristics and rebuilds the thing you're thinking about as well as your medium of expression allows it to (which is why you can always think about something, but when asked to draw or describe it, it just seems impossible). This explanation is far from perfect, and I don't understand how our notion of numbers for example comes to be, but seeing as maths is really just a load of characteristics giving interesting results when combined, I see no problem.
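A deliberately crude way to caricature that in code (everything here is invented purely for illustration): store a few characteristics, and regenerate a description from them; anything not captured as a characteristic is simply gone.

    # What gets "stored": a handful of characteristics, not a bitmap.
    dollar_bill = {"shape": "rectangle", "colour": "green", "centre": "famous bust",
                   "corners": "denomination numerals"}

    def rebuild(characteristics):
        """Regenerate a description on demand; detail beyond the stored traits is absent."""
        return "a {colour} {shape} with {centre} in the middle and {corners}".format(**characteristics)

    print(rebuild(dollar_bill))
    # Exact ornaments, serial numbers, and fine engraving were never stored,
    # so they cannot be recovered -- roughly what the crude from-memory drawing shows.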


The map is not the territory. But that doesn't mean that maps can't be useful.


I'm a PhD student who studies this stuff. This article is, quite simply, poorly argued.

Most importantly, there's nothing in here that directly argues against the information processing metaphor. The author gives a bad argument that he thinks lies behind the metaphor and shows why that argument is bad. But a) I don't recognize this argument as what motivates the metaphor and b) to show that one argument for a conclusion fails is not to show that the conclusion is false.

The author often points to differences between us and actual computers. No one who employs the information processing metaphor is claiming that we are identical to human-made computers. Rather, the claim is that there's some abstract sense of 'information processing' under which both human brains and computers are implementations. In order for both of those things to be implementations, they need not do so with the exact same kinds of mechanisms, nor must they have the exact same kinds of abilities. (This is why I find the discussion of the dollar drawing case so strange - no one is claiming that we store visual representations of objects in the way that an actual human-made computer does.)

When the author writes:

"To catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms."

What he describes is not a non-algorithm. Insofar as it's a procedure that takes in certain inputs and follows a series of steps in order to achieve some goal, it just is an algorithm, albeit a somewhat simple one.
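Written out, one textbook variant of that heuristic (optical acceleration cancellation, a close cousin of the "linear optical trajectory" rule quoted above) is just a short control loop; the callables below are hypothetical stand-ins for perception and locomotion, not anything from the article:

    def catch_fly_ball(sense_elevation, step_back, step_forward, ball_in_air):
        """Keep the ball's image rising at a steady rate; no trajectory is ever computed.

        sense_elevation, step_back, step_forward, and ball_in_air are hypothetical
        callables standing in for the fielder's perception and legs.
        """
        angle = sense_elevation()
        rate = None
        while ball_in_air():
            new_angle = sense_elevation()
            new_rate = new_angle - angle
            if rate is not None:
                if new_rate > rate:      # image rising ever faster: ball will sail over, back up
                    step_back()
                elif new_rate < rate:    # image rising ever slower: ball will drop short, move in
                    step_forward()
            angle, rate = new_angle, new_rate

Inputs, a rule applied step by step, a goal: it fits the ordinary definition of an algorithm, however simple.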

More importantly, the author seems to misunderstand what is meant when researchers say that the brain 'runs' an algorithm. Of course you don't consciously crunch numbers when your visual system is chugging away, trying to analyze and sort through the...data...it's getting from the retina. But the lack of conscious computation is not evidence for the complete lack of computation.

There are all sorts of legitimate criticisms of cognitive science - and the author discusses some of these at the end of the piece without really understanding/explaining what's going on behind them. But the author says nothing convincing that it's the failure of the information processing metaphor that explains the limitations/failures of cognitive science.


In fact several people in this thread have explicitly compared human visual memory to lossy compression of images in a computer system. That's his main point, about the metaphor leading people astray.


Understood in the right way, there's nothing wrong with that comparison.

Certainly, people misuse/misunderstand the information-processing metaphor/claim. (Though the people who do this tend not to be cognitive scientists.) But that's not to say that the metaphor is complete garbage.


The vast majority of the posts here are having trouble looking beyond it. That suggests to me that the metaphor is becoming more problematic than useful, at least among people who understand digital technology but who are not cognitive scientists.


Provocatively naive - does anyone nowadays really think the brain works like a computer? A much more interesting focus is on the non-representationalist understandings of the world seen in the discussion.


"Provocatively naive - does anyone nowadays really think the brain Works like a computer? " It's the most widely accepted theory.

It's just not a computer in the way you think a computer works.


"Computer" used to be an occupation for humans.

In The New York Times on February 3, 1853, an obituary stated: "Mr. Walker was widely known as an accomplished Astronomer and a skillful Computer."

So to claim that the very thing which enables us to be a computer is not itself a computer might simply be short-sighted, or an attempt to be absurd.


A majority of the posters here seem to think the author is wrong to suggest a brain does not work like a computer.


Oh, this is a treat. Author claims to demolish "Brain is a Computer" as historical ignorance by comparing digital computers and brain. Instead, author shows own ignorance of history of computing by ignoring the relevant branch: analog computers. Most ignore them although they're critical here. So, here's a summary of how analog computers work:

http://oro.open.ac.uk/5795/1/bletchley_paper.pdf

These types of computers implement mathematical functions that process signals from the real-world in their native form in real-time. Each component implements primitive functions that act on electrical input. They're usually arranged in connected components that flow from one thing to another or have feedback loops. They have no memory but can emulate it by generating on the fly. Doesn't that sound awfully familiar? ;)
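For anyone who has never patched one, here is a digital caricature of an analog computer program (the circuit and constants are invented for illustration): integrators, summers, and gain blocks wired into a feedback loop, here solving x'' = -k*x - c*x'.

    # Digital caricature of an analog computer patch: two integrators, a summer,
    # and gain blocks wired in a feedback loop to solve x'' = -k*x - c*x'.
    dt, k, c = 0.001, 1.0, 0.2
    x, v = 1.0, 0.0                      # initial "voltages" on the integrators

    for step in range(20000):            # 20 "seconds" of simulated continuous time
        a = -k * x - c * v               # summer + gain blocks
        v += a * dt                      # first integrator
        x += v * dt                      # second integrator
        if step % 5000 == 0:
            print(round(step * dt, 1), round(x, 3))
    # There is no stored program and no memory addresses: the computation is the
    # feedback loop itself, and its "state" is whatever the integrators currently hold.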

I've been saying for years that the brain is not digital and is a computer. The past year or two of learning hardware has just solidified it more. Just look at its properties to instantly see analog effects:

http://theness.com/neurologicablog/index.php/is-the-brain-an...

There are models for general-purpose analog computers with prototypes built, analog models of brain functions, and analog implementations of neural networks. All show that the brain might actually be a general-purpose analog (or mixed) computer. Moreover, it seems to be a self-modifying analog machine with some emergent properties starting in childhood in its early phase. The complexity of this thing, and of re-creating any one brain's function, would be mind-boggling, as the author correctly noted.

So, the brain is not a digital computer. It's an analog or mixed-signal computer with massive redundancy and ability to reconfigure itself. The descriptions of inconsistencies with digital match up nicely to model approximations in an analog style. Those issues even contributed to switch to digital for better accuracy/precision. Now, they're going to have to switch back if they want to match the greatest computer ever made. :)

I'll leave you with examples of analog and neural.

http://www.artificialbrains.com/brainscales

https://pdfs.semanticscholar.org/6e84/1782fec1f1f46629ad965b...

Note: The above, my favorite as geometrically closer to brains, is a 60 million synapse system that took 294,912 cores to simulate. That's what a handful of analog chips are doing in real-time. Behold the power! :)

http://research.cs.queensu.ca/home/akl/cisc879/papers/PAPERS...

http://binds.cs.umass.edu/anna_cp.html

Note: Siegelmann writes on non-Turing models for computation focusing on analog and neural. Her lab is all over the theoretical aspects of this stuff.

http://moon.cc.huji.ac.il/oron-shagrir/papers/Brains_as_Anal...

Note: Shows how many of these things are mathematical relationships. Other references exploit that, given that mathematical relationships are exactly what analog hardware implements.

http://www.eetimes.com/document.asp?doc_id=1329591

Note: A few were interested in simulating actual brain structures with memristors. Above is the latest take on that in Russia.

So, there you all go. It's a computer. It's an analog computer. Also, ends the parallel debates about whether you can have a general-purpose, analog computer or whether it can top digital. A few, analog computers working as a team... invented... digital computers. QED. :)


What is a computer?


Just because our brain is not at all like the 'computers' we have currently created, architecturally speaking, does not mean it is not a computer. Our brains calculate, store, and retrieve information, just not to the near-perfect extent that our 'normal' computers do.


Among all the other well-articulated complaints in the comments here, I have to ask a question...

Is the result of this article the conclusion that computers are just boring, uncreative, overly logical brains? Who woulda thunk?


This guy really doesn't seem to know what he's talking about.


Humans are Turing complete. What other definition do you need?


But it can certainly compute.


tl;dr -- This article is a barefaced example of rank ignorance. The author/editors have no idea what "information" and "algorithm" mean. Instead, they seem to be dealing with those concepts as ignorant rubes.

A baby’s vision is blurry, but it pays special attention to faces

The latter phrase reflects a well established and important developmental fact. I'm very curious about the first phrase. Is this also an established fact in the science of human development? On the face of it, it seems like an extraordinary piece of research if that's true. One cannot, of course, simply ask a baby if their vision is blurry. Their visual acuity needs to be inferred, and some cleverness would be required to do this reliably and accurately.

Anyone know which study this refers to? (Or is this an example of why this article is pretty darn fluffy?)

EDIT: Arrgh! This article is ignorant fluff!

But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.

The brain can contain information. That's simply an evident fact. Does the author understand what information means?

The human nervous system also embodies algorithms. That's also a fact established by neuro-biology. The visual system is full of algorithms. I think the author doesn't understand what these words mean. All of the other words seem to be thrown in there to evoke images of machine parts that don't look like people parts. It's exactly the cargo-cult misunderstanding of the underlying principles, in mistaken deference to resemblances.

Also, anyone who's delved into Rubik's Cube solving knows that we humans can develop and embody algorithms. (The scene plays fast and loose with the term, but they do use algorithms in a mathematical sense.) This author loses a whole lot of academic credibility for writing this, and the editors of this website do as well for publishing this. This is just rank ignorance on the level of publishing a perpetual motion machine article. It's 2016. If you are publishing a factual article of interest to intellectuals, and you don't understand what information or an algorithm is, you have no more business writing, editing, and publishing such an article than if you had no idea what the periodic table means.

Misleading headlines notwithstanding, no one really has the slightest idea how the brain changes after we have learned to sing a song or recite a poem. But neither the song nor the poem has been ‘stored’ in it. The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions. When called on to perform, neither the song nor the poem is in any sense ‘retrieved’ from anywhere in the brain, any more than my finger movements are ‘retrieved’ when I tap my finger on my desk. We simply sing or recite – no retrieval necessary.

As a traditional musician who can play several hundred melodies from memory, I can say this is complete BS. He's simply playing stupid games with terminology. What I know from my experience and that of many fellow musicians is that we do have to retrieve melodies. In fact, sometimes we have to take a few minutes to make sure we have all parts of the melody retrieved. You can go out to pubs in SF and see traditional musicians sit there and do this. Sometimes the right cue needs to occur for the complete retrieval to happen. Some musicians have certain melodies they can only remember after they remember the name.


My brain computes things. So by definition, it is a computer. QED


Hmm probably a brain that wrote this, to confuse us.


Google cache for those who can't connect to the paper https://webcache.googleusercontent.com/search?q=cache:https:...

TLDR Response:

After taking a look at the paper, take a look at this post by the Google deep learning team. Take a look especially at the images.

http://googleresearch.blogspot.co.uk/2015/06/inceptionism-go...

I think most if not all of the author's arguments that the brain is not a computer, can also be applied to Google's deep learning (which obviously runs on computers).

Long response:

I think the author is very naive about how broad a computer is and what a computer can do. Deep learning is obviously running on a computer, but like the author states for the brain, any single image or fact is probably not encoded in a single place.

In addition, we have probabilistic data structures, for example a bloom filter. Would a computer running a bloom filter to recognize data it had seen before not be a computer because it did not store a complete representation of any one item?
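For readers who haven't met one, a minimal Bloom filter sketch (the size and hash choice are arbitrary); it "recognizes" items it has seen, with some false-positive risk, while storing no retrievable copy of any item:

    import hashlib

    class BloomFilter:
        def __init__(self, size=1024, hashes=3):
            self.size, self.hashes, self.bits = size, hashes, 0

        def _positions(self, item):
            # Derive k bit positions from salted SHA-256 digests of the item.
            for i in range(self.hashes):
                digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
                yield int(digest, 16) % self.size

        def add(self, item):
            for pos in self._positions(item):
                self.bits |= 1 << pos

        def probably_contains(self, item):
            # True may be a false positive; False is always correct.
            return all(self.bits & (1 << pos) for pos in self._positions(item))

    bf = BloomFilter()
    bf.add("a dollar bill I once saw")
    print(bf.probably_contains("a dollar bill I once saw"))   # True
    print(bf.probably_contains("a bill I never saw"))         # almost certainly False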

What we are seeing now, is that as computers go beyond just storing and retrieving data, to actually doing stuff like speech/image/pattern recognition, the representation is becoming more and more like the brain representation.

In addition, the author ignores that the brain did indeed develop a way to encode information exactly. If the author would like more information about this, I suggest they look up "drawing" and "writing" in wikipedia. With drawing and especially with writing, the brain developed a way to encode information exactly that could later be retrieved. In addition, it could be retrieved by other brains (if they know the language).

Also, the author proposes straw man arguments for the other side. I am somewhat familiar with both computers and brains (I trained in neurosurgery, but now work as a programmer), and I never met a professional who claimed that memories are stored in individual neurons! I was always taught that memories are encoded in the connections between neurons - which is very similar to how neural networks encode information (as the strength of connections between elements of the network).

It is also interesting that he quotes Kandel. When I actually read Principles of Neural Science (by Kandel and Schwartz - http://www.amazon.com/Principles-Neural-Science-Eric-Kandel/...), I got the distinct impression that Kandel actually viewed the brain as an information processor, with lower level information being processed into higher level representations (see for example the chapter on the visual system).

So overall, the author takes a very limited view of both computers and brain and concludes (I think falsely) that the brain is not a computer.


I do not think that is his conclusion, though he is certainly at fault for giving that impression. I think he is actually saying that it is misleading to compare a brain too closely to a digital computer.

It is trivially true that the brain processes information - that is not much of a conclusion, it is a starting point, and it certainly does not mean that it is like a digital computer.


> I never met a professional who claimed that memories are stored in individual neurons!

http://www.extremetech.com/extreme/123485-mit-discovers-the-...


I've been going through my pocket stack today and came along this article:

http://www.nytimes.com/2015/06/28/opinion/sunday/face-it-you...

> "Face It, Your Brain Is a Computer"

I think it fits as a counterpoint.


Interestingly, the author of that piece is the editor of "The Future of the Brain", a selection of which is cited in the original article.


This article is poorly thought out clickbait. Few cognitive scientists think about cognition the way that is being suggested, and the suggested replacement is next to useless.


I think that the brain does something beyond or in addition to information processing because I don't see how you can get qualia or conscious experiences out of information processing alone. However, to claim that information processing isn't at least an aspect of what the brain does is absurd. If you agree that sorting a list constitutes information processing, then since a person equipped with pencil and paper (or without, if the example is short enough) can sort a list, it's clear that the person's brain has processed information.
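To make the example concrete, the procedure a person with pencil and paper tends to follow is essentially insertion sort; a minimal sketch (the example list is arbitrary):

    def insertion_sort(items):
        """Sort the way a person with pencil and paper tends to: take each item
        and slot it into its place among the ones already ordered."""
        ordered = []
        for item in items:
            i = 0
            while i < len(ordered) and ordered[i] <= item:
                i += 1
            ordered.insert(i, item)
        return ordered

    print(insertion_sort([42, 7, 19, 3, 25]))   # [3, 7, 19, 25, 42]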

I find it ironic that one of the things the author claims humans don't have is memories. Memory is a concept that arose to describe an aspect of human experience thousands of years before computers existed. Using the term "memory" to describe information storage in computers is indeed an analogy but it's an analogy in the opposite direction from what the author claims.



