
Its relevance in philosophy/epistemology/science/computing has been inflated out of proportion by the confused arguments of Hofstadter, Penrose, etc. Just because it's difficult to understand doesn't mean that it will tell you the meaning of life. Of course it's a mathematical theorem of profound importance, but it has absolutely nothing to do with the limits of rational thought.



I agree with your sentiment. Gödel's work applies only to formal axiomatic systems of sufficient strength. There are formal systems weak enough that every statement in them is decidable (every true statement can be proved; Presburger arithmetic is a classic example), and there are even theories that prove their own consistency. And if your theory makes no claim to an axiomatic basis, then Gödel really does not have jurisdiction.

But if a person accepts the Church-Turing Thesis, then the human brain is not doing any computation that can't be modeled by a Turing Machine. In fact, the brain is at least a Turing Machine, so Gödel's limitations will apply to it.

If we accept the Church-Turing Thesis and replace the brain with a Turing machine, I argue that the mind is a program that runs on that Turing machine. The program embodies a particular formal system, one powerful enough to encode mathematical statements and to prove statements about itself. From this argument one can infer that there are some mathematical truths that some minds will not find, and that, if each human mind represents a different formal system, it is in their best interest to work together.

Disclaimer - The above is me thinking out loud not something from a peer reviewed paper
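
To make the "encode mathematical statements" step concrete, here is a toy Gödel-numbering sketch in Python. The symbol table and the encoding scheme are arbitrary choices of mine, purely for illustration: the point is just that every formula becomes a single integer, so a system that talks about numbers can indirectly talk about its own formulas.

  # Toy Godel numbering: pack a formula into one integer via prime exponents,
  # so statements *about* formulas become statements about numbers.
  # The symbol codes below are arbitrary, chosen only for this sketch.
  SYMBOLS = {'0': 1, 'S': 2, '+': 3, '*': 4, '=': 5, '(': 6, ')': 7, 'x': 8}
  DECODE = {v: k for k, v in SYMBOLS.items()}

  def nth_prime(k):
      """Return the k-th prime (1-indexed); slow trial division, fine for a toy."""
      count, n = 0, 1
      while count < k:
          n += 1
          if all(n % d for d in range(2, int(n ** 0.5) + 1)):
              count += 1
      return n

  def godel_number(formula):
      """Encode e.g. 'S0+S0=SS0' as a single integer."""
      g = 1
      for i, ch in enumerate(formula):
          g *= nth_prime(i + 1) ** SYMBOLS[ch]
      return g

  def decode(g):
      """Recover the formula from its Godel number by factoring."""
      out, i = [], 1
      while g > 1:
          p, e = nth_prime(i), 0
          while g % p == 0:
              g, e = g // p, e + 1
          out.append(DECODE[e])
          i += 1
      return ''.join(out)

  n = godel_number('S0=S0')
  print(n, decode(n))   # one big integer, then 'S0=S0' back again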


It's a fallacy to think of a human brain as a distinct entity. If you are viewing the human brain as "just" a bunch of matter interacting in interesting ways to produce what we call consciousness (as opposed to viewing consciousness as something separate from the physical processes of the brain), then there is nothing really distinguishing your brain from the rest of you, and indeed from the rest of the universe. (Practically speaking, a good bit of decision-making happens in neurons outside of your brain.) So if you want to argue that the mind is equivalent to (or can be simulated by) a Turing machine, then you really want to argue that the whole universe could be simulated on a Turing machine.

Which is not necessarily a bad position to take. http://xkcd.com/505/


I have the vague hope that this insight about the nature of the brain being a type of computing device might be kind of obvious to us as IT and information science people. It is, however, sadly not obvious to mathematicians and physicists as the article itself quotes:

  Gödel's Theorem has been used to argue that a computer can never be as smart as 
  a human being because the extent of its knowledge is limited by a fixed set of 
  axioms, whereas people can discover unexpected truths
...which to me is an utterly surprising non-sequitur.


Well, the paragraph you quoted is indeed a non-sequitur: clearly you can program computers to discover unexpected truths using their 'axioms', code their 'axioms' to be as flexible and fragile as ours, and so on. But the assertion that the brain is a computing device in the sense we "information science people" use the term is far from obvious. Obviously the brain computes stuff, but we tend to be rather specific: when we say computing device we think of computers, which might or might not be able to simulate human intelligence; hence the strong vs. weak AI debate.

Trivially you can, with perfect information and sufficient technology, create an electronic brain by modelling it neuron by neuron. The catch is, you would only prove that humanOS runs on silicon as well as it does on carbon.
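
For what it's worth, here is roughly what one cell of that "neuron by neuron" model could look like: a toy leaky integrate-and-fire neuron in Python. Both the model and all its constants are simplifications I'm choosing purely for illustration; real neuron-level simulation uses far richer models and vastly more of them.

  # Toy leaky integrate-and-fire neuron: integrate input current, leak toward
  # rest, spike and reset when the threshold is crossed. Constants illustrative.
  def simulate_lif(input_current, seconds=1.0, dt=0.001, tau=0.02,
                   v_rest=-0.065, v_thresh=-0.050, v_reset=-0.065, r=5e7):
      """Return spike times (s) for a constant input current (A)."""
      v, spikes = v_rest, []
      for step in range(int(seconds / dt)):
          dv = (-(v - v_rest) + r * input_current) * dt / tau
          v += dv
          if v >= v_thresh:          # threshold crossed: emit a spike...
              spikes.append(step * dt)
              v = v_reset            # ...and reset the membrane potential
      return spikes

  print(len(simulate_lif(5e-10)), "spikes in one second")

Scaling something like this up to tens of billions of interconnected units, plus synapses, plasticity and chemistry, is where the "perfect information and sufficient technology" caveat does all the work.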

Having a powerful enough computing device does not imply it can compute anything a different type of computing device can. We have Turing-completeness, which indeed says this is valid for a subset of our current programming languages and hardware, but Turing-completeness does not trivially apply to our brain.

As an analogy, a ruler is a drawing device, but you can't draw a circle with it as you can with a compass. You can somewhat simulate drawing a circle by taking it very slowly and using a clever algorithm, but it will never be perfect, or else it would require infinite, or virtually infinite, time and space to do so.
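
To put a number on that: approximating a unit circle with n straight ruler strokes (an inscribed n-gon), the worst-case gap to the true circle shrinks as n grows but is never zero for any finite n. A quick sketch, purely illustrative:

  import math

  def max_gap(n):
      """Largest distance between the unit circle and an inscribed n-gon.
      The gap peaks at each chord's midpoint: 1 - cos(pi / n)."""
      return 1 - math.cos(math.pi / n)

  for n in (6, 60, 600, 6000):
      print(n, "segments -> max gap", max_gap(n))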


> I have the vague hope that this insight about the nature of the brain being a type of computing device might be kind of obvious to us as IT and information science people.

Ever stopped to think that the very "obviousness" of it might just be a product of professional bias as IT persons?


I actually do take that position.


> But if a person accepts the Church Turing Thesis, then the human brain is not doing any computation that can't be modeled by a Turing Machine. In fact, the brain is at least a Turing Machine. Gödel's limitations will apply to it.

That's not the case. If one accepts the Church-Turing Thesis, then whatever effectively calculable functions the brain computes can also be computed by a Turing Machine. The brain is at least a Turing Machine, but may be greatly more than one, and Gödel's limitations will only apply to that portion doing calculations. The brain may very well be doing a great deal more than simple computation.

Thus, when you write "I argue that the mind is a program that runs on that Turing machine", you're leaping off into wild speculation.


Excellent point, and I suspected someone would call me out on this. Notice that I also said "at least a Turing Machine". As I say in response to another comment in this thread, implicit in my statement is the belief that the universe is calculable on a Turing Machine. I do not believe this is wild speculation. These ideas are not simple, for me at least, so I will be careful:

If the universe is not calculable on a Turing machine, then some physical processes, including those going on in the brain, are not computations but can only be expressed using superrecursive algorithms. If those processes in the brain were computations, then the human brain would be a hypercomputer, and I do not believe in the existence of the latter. This opens the possibility that even if the brain operates via non-computable means, its behaviour could still be fully captured by a Turing Machine. I also think the theory that the universe has non-computable things going on, and that the brain harnesses them in a non-algorithmic way, is more complex than the theory that the universe is merely Turing-equivalent and so is the human brain.

My basis for this belief is the unrelated fact that there are some strict limitations in reality: Finite Speed Limit, 2nd Law, Maximum Force, Maximum Information per square meter, Quantum Indeterminacy; Computational Indeterminacy of various kinds: Diophantine, Church, Gödel, Turing, Chaitin. Also the prudent belief that P <> NP and, more importantly, the lack of any evidence of Nature solving NP problems in polynomial time. Also: No Free Lunch in Search and its counter (okay, no free lunch, but the universe has structure exploitable by Turing machines; see M. Hutter). To me, saying the universe is just a Turing machine fits this pattern.

Other patterns are the various links which occur between physics, topology, logic and computation; the unifying power of category theory (e.g. coalgebras/algebras: objects---analysis as tagged unions---algebra), the link between physical and information entropy, the possibility of a Holographic Principle, the possibility of a discrete theory of quantum gravity, the relationship between a complex-valued probability theory and Quantum Mechanics, and the informational nature of QM. To me all these are very suggestive of a simple underlying nature which is informational, and that digital physics may not be correct but is in the right direction.


The false conclusions philosophers, computer scientists and many others draw due to misunderstanding, or misapplying, the Church-Turing thesis are every bit as problematic as those false conclusions that result from misunderstanding or misapplying the incompleteness theorems. See http://plato.stanford.edu/entries/church-turing/

In both cases the problem is insufficiently respecting the rigorous boundaries on what the theorems actually say and apply to. Neither theorem has anything to say about consciousness and the human mind, unless a lot of currently unproven preconditions, some of which seem unlikely, get proven first.


I haven't read Hofstadter's or the others' arguments, but this comment got me thinking. If the incompleteness theorem is about the limitations of an axiomatic system, why wouldn't it apply to rational thought? If (and it's by no means certain) rational thought and the human brain have certain rules built in to process, interpret, and act on information from the outside world, then couldn't these hardwired rules be considered axioms? These axioms are clearly powerful enough to express the integers, and so one would conclude that there are certain propositions that are true but cannot be proven by the human brain. Of course, the brain and its wiring are still a matter of research, and the fact that the brain is a dynamic system with neural connections in constant flux means that neither the system nor these "axioms" are static.

I don't know, I'm just thinking out loud....


Penrose's The Emperor's New Mind addresses this idea. Basically, Penrose assumes that:

1. Gödel's Incompleteness Theorems are equivalent to the halting problem (provable; see the sketch after this list)

2. If the human mind is deterministic, it can be modeled with a deterministic algorithm (provable).

3. The human mind seems to be capable of proving arbitrary things about all algorithms (debatable; I'm extremely skeptical).

4. Therefore, the human mind is not bound by the halting theorem, and therefore the human mind is not deterministic.
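
To make point 1 concrete, the halting-problem side of that equivalence is the usual diagonalization. A Python sketch, where `halts` is a hypothetical decider assumed only for the sake of contradiction, not a real function:

  # Assume, for contradiction, that a perfect decider existed:
  def halts(program_source, input_data):
      """Hypothetical: would return True iff the program halts on the input."""
      raise NotImplementedError("assumed to exist, for the sake of argument")

  def diagonal(program_source):
      # Do the opposite of whatever the decider predicts about a program
      # run on its own source code.
      if halts(program_source, program_source):
          while True:        # predicted to halt -> loop forever
              pass
      else:
          return             # predicted to loop -> halt immediately

  # Feeding diagonal its own source contradicts either answer halts() could
  # give, so no correct halts() can exist. Via Godel numbering, the same
  # diagonal trick yields true-but-unprovable sentences of arithmetic.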

He then offers several speculations. First, that the human mind is driven by quantum mechanics. He explains this more in Shadows of the Mind, the "sequel" to this book, and goes on to postulate specifically that "microtubules" allow quantum mechanics to have far-reaching effects that are indistinguishable from consciousness. Other people, most notably and substantially Max Tegmark, disagree[1], arguing, "we find that the decoherence timescales (∼10^−13 to 10^−20 seconds) are typically much shorter than the relevant dynamical timescales (∼10^−3 to 10^−1 seconds), both for regular neuron firing and for kink-like polarization excitations in microtubules. This conclusion disagrees with suggestions by Penrose and others that the brain acts as a quantum computer, and that quantum coherence is related to consciousness in a fundamental way." Since then, quantum effects, especially quantum coherence, appear to be crucial at the molecular level in processes like photosynthesis[2], suggesting that, perhaps, quantum mechanics may play a role.

However, even if quantum mechanics does play a role in human consciousness, I don't see how trading a deterministic brain for a random one is an improvement. Personally, I don't think quantum mechanics plays so crucial a role that our brains are somehow not bound by logical axioms; I still believe there's a fundamental "algorithm" that drives the biological brain, though it may be difficult to conceive of, and I tend to agree with Tegmark on ideas of consciousness. Still, I'm open to any evidence on either side of the table, but I think we're a few decades away from key discoveries about the way our brains work that will shed any serious light on the subject.

-------------------------------------------------------------

[1]: http://arxiv.org/abs/quant-ph/9907009

[2]: http://www.nature.com/nature/journal/v446/n7137/abs/nature05...

Extra reading: the Wikipedia article on Orch-OR: http://en.wikipedia.org/wiki/Orch-OR


Thanks for the follow-up!


Well...I think about Godel's theorem as the mathematical implementation of Kant's ideas about the limits of reason based upon human experience, i.e. there are some truths which are inaccessible because time and space are preconditions of all human experience.

Not to dwell on arguments about whether or not time and space actually exist independently of human experience, what Kant was getting at is that the way in which humans experience the world limits our ability to draw conclusions to a particular subset of all truths.

If Godel's theorem is true, then from a Kantian perspective, mathematics no longer enjoys a uniquely privileged place in regards to human rationality. That's pretty important philosophically - at least to some people.

Positivists may take a different view.


I've just spent about an hour trying to explain how the comparison between Godel's incompleteness theorem and the ideas of Kant is flawed, but I couldn't come up with anything, because they have nothing to do with each other. It's like trying to explain how the number 2 is different from a rhinoceros; there's no explanation that would satisfy anyone who already believes that the number 2 and a rhino are comparable.


That's close to Jacques Lacan's famous argument that there is a philosophical equivalence between the square root of -1 and an erect penis.


Godel doesn't talk about space, time, human experience, or truths that exist independently of human experience. I fail to see how he's relevant.


He does. Learn more about his biography and his interest in mysticism and philosophy: http://www.amazon.com/G%C3%B6del-Logic-John-L-Casti/dp/07382...


Perhaps he does, but I don't think he does so in the proofs of his incompleteness theorems.


"It has absolutely nothing to do with the limits of rational thought."

One implication of GIT is that there are true sentences that cannot be proved within any given (consistent, sufficiently strong) formal system.

I'm sorry, but I can't manage to see how it "has absolutely nothing to do with the limits of rational thought."

It saddens me that I am the first one that had to point this out on this comment that is over 15 hours old. On HN. :-(


But computers can only work with recursively enumerable axiom systems. Humans do not have this limitation. The incompleteness result shows that the natural numbers as defined by the second-order Peano Axioms are different from the natural numbers defined by any first-order system. Doesn't this have to do with thought and rationality?


>But computers can only work with recursively enumerable axiom systems. Humans do not have this limitation.

I don't think this is true, at least not in the way you put it. I, for one, surely cannot work with the True Arithmetic system, because I cannot distinguish axioms from non-axioms. Maybe you could expand on it a bit?


There is no effective (computable) procedure for determining when a statement is an axiom of such a system. However, sometimes humans can make such a determination. Fundamentally, a computer wouldn't be able to, since there is no computable way of making such a determination. So the computer can't be programmed to work with, say, the second-order Peano Axioms. Humans can and have worked with the second-order Peano Axioms.
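
To illustrate what "work with a recursively enumerable axiom system" means mechanically, here is a toy sketch in Python using Hofstadter's MIU system (my choice of example, not anything from the thread): axiom 'MI' plus four rewrite rules. Because axiomhood and rule application are computable, a program can enumerate theorems forever; my claim is that no analogous enumerator can be written for something like True Arithmetic, where there is no computable test for what counts as an axiom.

  # Enumerating theorems of a recursively enumerable system: the MIU system.
  from collections import deque

  def successors(s):
      out = set()
      if s.endswith('I'):
          out.add(s + 'U')                    # rule 1: xI  -> xIU
      if s.startswith('M'):
          out.add('M' + s[1:] * 2)            # rule 2: Mx  -> Mxx
      for i in range(len(s) - 2):
          if s[i:i+3] == 'III':
              out.add(s[:i] + 'U' + s[i+3:])  # rule 3: III -> U
      for i in range(len(s) - 1):
          if s[i:i+2] == 'UU':
              out.add(s[:i] + s[i+2:])        # rule 4: UU  -> (deleted)
      return out

  def enumerate_theorems(limit=20):
      seen, queue, theorems = {'MI'}, deque(['MI']), []
      while queue and len(theorems) < limit:
          t = queue.popleft()
          theorems.append(t)
          for nxt in successors(t):
              if nxt not in seen:
                  seen.add(nxt)
                  queue.append(nxt)
      return theorems

  print(enumerate_theorems())   # first 20 theorems, starting with 'MI'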

I would like to know the flaw in my reasoning.


Basically, your argument is known to philosophers as the Lucas-Penrose argument, and it has been discussed to death. Most philosophers and mathematicians consider it invalid. There are lots of references in the Wikipedia article[1].

[1] - http://en.wikipedia.org/wiki/Orch-OR#The_Penrose.E2.80.93Luc...



