Gödel's Incompleteness Theorem explained (miskatonic.org)
111 points by simonbrown on March 24, 2012 | 58 comments



It should be noted that Godel gave two Incompleteness Theorems.

The first says that, in a consistent system based on a list of axioms (and strong enough to do basic arithmetic), there will always be true statements that aren't provable. (Remember Mr. Spock confusing a computer by telling it "I am lying"? "This statement can't be proved" behaves similarly -- it's true, but you can't prove it, because if you could prove it, it would be false, and the system would be inconsistent.) In other words, there is always truth the system can't account for.

The second theorem builds on the first. It says a consistent system can't prove its own consistency: if a system proves the claim "this system is consistent", then it is necessarily inconsistent. (In essence, you can construct a more complex version of the "I am lying" statement in any system that includes a claim of "I am telling the truth".) In other words, the only way to prove something consistent is from outside of it.
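
For readers who want the precise shape of both results, here is a hedged sketch in modern notation (T is any consistent, effectively axiomatized theory containing enough arithmetic, and Prov_T is its provability predicate):

  % First theorem: the diagonal lemma yields a sentence G that
  % asserts its own unprovability. If T is consistent, T proves
  % neither G nor its negation.
  T \vdash G \leftrightarrow \neg\mathrm{Prov}_T(\ulcorner G \urcorner)

  % Second theorem: T's own consistency statement is one of the
  % sentences T cannot prove.
  \mathrm{Con}(T) := \neg\mathrm{Prov}_T(\ulcorner 0 = 1 \urcorner),
  \qquad T \nvdash \mathrm{Con}(T)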


Interestingly, Gödel announced the first incompleteness theorem at the Königsberg conference held 5-7 September 1930. At that time, he did not have the second. [1]

Von Neumann was in attendance, and on November 20th he wrote a letter to Gödel announcing his own discovery of the second incompleteness theorem. As it turns out, Gödel had just submitted a paper for publication on November 17th containing the proof of the second incompleteness theorem. On finding out he had just been scooped, von Neumann replied:

"As you have established the theorem on the unprovability of consistency as a natural continuation and deepening of your earlier results, I clearly won't publish on this subject." [2]

[1] http://goo.gl/uJH9Q [2] http://goo.gl/AXfQV

(Links are very long, hence the shortener. They take you to Google Books.)


> "This statement can't be proved"

This statement (and Godel's mathematical equivalent) does not assert anything to be proved; it is contentless, which is why logical, axiomatic systems choke on it, essentially due to recursion or self-reference. Logically, there are only two fundamental ways to err: by contradiction and by circular reasoning.

> In other words, the only way to prove something consistent is from outside of it.

This is what I take from Godel's theorems, but it is a poor formulation of the idea. A better way to say it: proof presupposes consistency, and more specifically the law of identity, which is a metaphysical law that has to be validated, not proved (since proof depends on it).


A "strange loop", perhaps.


Its relevance in philosophy/epistemology/science/computing has been inflated out of proportion by the confused arguments of Hofstadter, Penrose, etc. Just because it's difficult to understand doesn't mean that it will tell you the meaning of life. Of course it's a mathematical theorem of profound importance, but it has absolutely nothing to do with the limits of rational thought.


I agree with your sentiment. Gödel's work applies only to formal axiomatic systems of sufficient strength. There are formal systems weak enough to be complete and decidable: every statement is decidable, and every true statement can be proved true. And if your theory makes no claim to an axiomatic basis, then Gödel really has no jurisdiction.

But if a person accepts the Church Turing Thesis, then the human brain is not doing any computation that can't be modeled by a Turing Machine. In fact, the brain is at least a Turing Machine. Gödel's limitations will apply to it.

If we accept the Church Turing Thesis and replace the brain with a Turing machine, I argue that the mind is a program that runs on that Turing machine. The program embodies a particular formal system sufficient to encode mathematical statements and leverage itself to prove statements. From this argument one can infer that there are some mathematical truths that some minds will not find. And that if each human mind represents a different formal system, it is in their best interest to work together.

Disclaimer - The above is me thinking out loud, not something from a peer-reviewed paper.


It's a fallacy to think of a human brain as a distinct entity. If you are viewing the human brain as "just" a bunch of matter interacting in interesting ways to produce what we call consciousness (as opposed to viewing consciousness as something separate from the physical processes of the brain), then there is nothing really distinguishing your brain from the rest of you, and indeed from the rest of the universe. (Practically speaking, a good bit of decision-making happens in neurons outside of your brain.) So if you want to argue that the mind is equivalent to (or can be simulated by) a Turing machine, then you really want to argue that the whole universe could be simulated on a Turing machine.

Which is not necessarily a bad position to take. http://xkcd.com/505/


I have the vague hope that this insight about the nature of the brain being a type of computing device might be kind of obvious to us as IT and information science people. It is, however, sadly not obvious to mathematicians and physicists as the article itself quotes:

  Gödel's Theorem has been used to argue that a computer can never be as smart as 
  a human being because the extent of its knowledge is limited by a fixed set of 
  axioms, whereas people can discover unexpected truths
...which to me is an utterly surprising non-sequitur.


Well, the paragraph you quoted is indeed a non sequitur; clearly you can program computers to discover unexpected truths using their 'axioms', code their 'axioms' to be as flexible and fragile as ours, and so on. But the assertion that the brain is a computing device, in the sense we "information science people" mean, is far from obvious. Obviously the brain computes stuff, but we tend to be rather specific: we think of computers when we say computing device, and those might or might not be able to simulate human intelligence, hence the strong vs. weak AI debate.

Trivially you can, with perfect information and sufficient technology, model it neuron by neuron and create an electronic brain. The catch is, you would only prove that humanOS runs on silicon as well as it does on carbon.

Having a powerful enough computing device does not imply it can compute anything a different type of computing device can. Turing completeness does tell us this holds for a subset of our current programming languages and hardware, but Turing completeness does not trivially apply to our brain.

As an analogy, a ruler is a drawing device, but you can't draw a circle with it as you can with a compass. You can somewhat simulate drawing a circle by taking it very slowly with a clever algorithm, but it will never be perfect, or else it will require infinite or virtually infinite time/space.


> I have the vague hope that this insight about the nature of the brain being a type of computing device might be kind of obvious to us as IT and information science people.

Ever stopped to think that the very "obviousness" of it might just be a product of professional bias as IT persons?


I actually do take that position.


> But if a person accepts the Church Turing Thesis, then the human brain is not doing any computation that can't be modeled by a Turing Machine. In fact, the brain is at least a Turing Machine. Gödel's limitations will apply to it.

That's not the case. If one accepts the Church Turing Thesis, then the effectively calculable functions the brain computes can be performed on a Turing Machine. The brain is at least a Turing Machine, but may be greatly more than one, and Gödel's limitations will only apply to the portion doing calculations. The brain may very well be doing a great deal more than simple computation.

Thus, when you write "I argue that the mind is a program that runs on that Turing machine", you're leaping off into wild speculation.


Excellent point, and I suspected someone would call me out on this. Notice that I also said at least a Turing Machine. As I respond to another comment in this thread, implicit in my statement is the belief that the universe is calculable on a Turing Machine. I do not believe this is wild speculation. These ideas are not simple, for me at least, so I will be careful:

If the universe is not calculable on a Turing machine, then some physical processes, including those going on in the brain, are not computations and can only be expressed using superrecursive algorithms. If those brain processes were computations, the human brain would be a hypercomputer, and I do not believe in the existence of the latter. This opens the possibility that even if the brain operates via non-computable means, its behaviour could be fully captured by a Turing Machine. I also think the theory that the universe has non-computable things going on, and that the brain harnesses them in a non-algorithmic way, is more complex than the theory that the universe is merely Turing equivalent and so is the human brain.

My basis for this belief is the otherwise unrelated fact that reality has some strict limitations: a finite speed limit, the 2nd Law, a maximum force, maximum information per square meter, quantum indeterminacy, and computational undecidability in its various facets: Diophantine, Church, Gödel, Turing, Chaitin. Add the prudent belief that P ≠ NP and, more importantly, the lack of any evidence of Nature solving NP problems in polynomial time. Also: No Free Lunch in search and its counter (okay, no free lunch, but the universe has structure exploitable by Turing machines; see M. Hutter). To me, saying the universe is just a Turing machine fits this pattern.

Other patterns are the various links that occur between physics, topology, logic and computation; the unifying power of category theory (e.g. coalgebras/algebras: objects---analysis as tagged unions---algebra); the link between physical and information entropy; the possibility of a Holographic Principle; the possibility of a discrete theory of quantum gravity; the relationship between a complex probability theory and Quantum Mechanics; and the informational nature of QM. To me all these are very suggestive of a simple underlying nature which is informational, and that digital physics may not be correct but is in the right direction.


The false conclusions philosophers, computer scientists and many others draw due to misunderstanding, or misapplying, the Church-Turing thesis are every bit as problematic as those false conclusions that result from misunderstanding or misapplying the incompleteness theorems. See http://plato.stanford.edu/entries/church-turing/

In both cases the problem is insufficiently respecting the rigorous boundaries on what the theorems actually say and apply to. Neither theorem has anything to say about consciousness and the human mind, unless a lot of currently unproven preconditions, some of which seem unlikely, get proven first.


I haven't read Hofstadter's or the others' arguments, but this comment got me thinking. If the incompleteness theorem is about the limitations of an axiomatic system, why wouldn't it apply to rational thought? If (and it's by no means certain) rational thought and the human brain have certain rules built in to process, interpret, and act on information from the outside world, then couldn't these hardwired rules be considered axioms? These axioms are clearly powerful enough to express the integers, and so one would conclude that there are certain propositions that are true but cannot be proven by the human brain. Of course, the brain and its wiring are still a matter of research, and the fact that the brain is a dynamic system with neural connections in constant flux means that the system is not static, nor are these "axioms".

I don't know, I'm just thinking out loud....


Penrose's The Emperor's New Mind addresses this idea. Basically, Penrose assumes that:

1. Gödel's Incompleteness Theorems are equivalent to the halting problem (provable; the classic diagonal argument is sketched just after this list)

2. If the human mind is deterministic, it can be modeled with a deterministic algorithm (provable).

3. The human mind seems to be capable of proving arbitrary things about all algorithms (debatable; I'm extremely skeptical).

4. Therefore, the human mind is not bound by the halting theorem, and therefore the human mind is not deterministic.
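
For anyone who hasn't seen it, the diagonal argument behind the halting theorem (point 1) fits in a few lines. This is a minimal sketch, not anything from Penrose; halts is a hypothetical oracle and the names are illustrative:

  # Hypothetical oracle (assumed, not real): True iff running
  # program `source` on input `arg` eventually halts.
  def halts(source: str, arg: str) -> bool:
      raise NotImplementedError("the point is that this cannot exist")

  def paradox(source: str) -> None:
      # Do the opposite of whatever the oracle predicts about
      # running `source` on its own text.
      if halts(source, source):
          while True:
              pass  # oracle said "halts", so loop forever
      # oracle said "loops forever", so halt immediately

  # Feeding `paradox` its own source contradicts the oracle's answer
  # either way, so no total, computable `halts` can exist.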

He then offers several possible speculations. First, that the human mind is driven by quantum mechanics. He explains this more in Shadows of the Mind, the "sequel" to this book, and goes on to postulate specifically that "microtubules" allow quantum mechanics to have far-reaching effects that are indistinguishable from consciousness. Other people, most notably and substantially Max Tegmark, disagree[1], arguing, "we find that the decoherence timescales (∼10^−13 to 10^−20 seconds) are typically much shorter than the relevant dynamical timescales (∼10^−3 to 10^−1 seconds), both for regular neuron firing and for kink-like polarization excitations in microtubules. This conclusion disagrees with suggestions by Penrose and others that the brain acts as a quantum computer, and that quantum coherence is related to consciousness in a fundamental way." Since then, quantum effects, especially quantum teleportation, appear to be crucial at the molecular level in processes like photosynthesis[2], suggesting that, perhaps, quantum mechanics may play a role.

However, even if quantum mechanics does play a role in human consciousness, I don't see how trading a deterministic brain for a random one is an improvement. Personally, I don't think quantum mechanics provides a crucial level to the extent that our brains are somehow not bound by logical axioms; I still believe there's a fundamental "algorithm" that drives the biological brain, though it may be difficult to conceive of, and I tend to agree with Tegmark's ideas on consciousness. Still, I'm open to any evidence on either side of the table, but I think we're a few decades away from key discoveries about the way our brains work that will shed any serious light on the subject.

-------------------------------------------------------------

[1]: http://arxiv.org/abs/quant-ph/9907009

[2]: http://www.nature.com/nature/journal/v446/n7137/abs/nature05...

Extra reading: see the Wikipedia article on Orch-OR: http://en.wikipedia.org/wiki/Orch-OR


Thanks for the follow-up!


Well...I think about Godel's theorem as the mathematical implementation of Kant's ideas about the limits of reason based upon human experience, i.e. there are some truths which are inaccessible because time and space are preconditions of all human experience.

Not to dwell on arguments about whether or not time and space actually exist independently of human experience, what Kant was getting at is that the way in which humans experience the world limits our ability to draw conclusions to a particular subset of all truths.

If Godel's theorem is true, then from a Kantian perspective, mathematics no longer enjoys a uniquely privileged place in regards to human rationality. That's pretty important philosophically - at least to some people.

Positivists may take a different view.


I've just spent about an hour trying to explain how the comparison between Godel's incompleteness theorem and the ideas of Kant is flawed, but I couldn't come up with anything, because they have nothing to do with each other. It's like trying to explain how the number 2 is different from a rhinoceros; there's no explanation that would satisfy anyone who already believes that the number 2 and a rhino are comparable.


That's close to Jacques Lacan's famous argument that there is a philosophical equivalence between the square root of -1 and an erect penis.


Godel doesn't talk about space, time, human experience, or truths that exist independently of human experience. I fail to see how he's relevant.


He does. Learn more about his biography and his interest in mysticism and philosophy: http://www.amazon.com/G%C3%B6del-Logic-John-L-Casti/dp/07382...


Perhaps he does, but I don't think he does so in the proofs of his incompleteness theorems.


"It has absolutely nothing to do with the limits of rational thought."

One implication of GIT is that there are true sentences that cannot be proved.

I'm sorry, but I can't manage to see how "It has absolutely nothing to do with the limits of rational thought."

It saddens me that I am the first one that had to point this out on this comment that is over 15 hours old. On HN. :-(


But computers can only work with recursively enumerable axiom systems. Humans do not have this limitation. The incompleteness result shows that the natural numbers as defined by the second-order Peano Axioms are different from the natural numbers defined by any first-order system. Doesn't this have to do with thought and rationality?


> But computers can only work with recursively enumerable axiom systems. Humans do not have this limitation.

I don't think this is true, at least not in the way you put it. I for one surely cannot work with the True Arithmetic system, because I cannot distinguish axioms from non-axioms. Maybe you could expand it a bit?


There is no effective (computable) procedure for determining when a statement is an axiom. However, sometimes humans can make such a determination. Fundamentally a computer wouldn't be able to, since there is no computable way of making such a determination. So the computer can't be programmed to work with, say, the second-order Peano Axioms. Humans can and have worked with the second-order Peano Axioms.

I would like to know the flaw in my reasoning.


Basically, your argument is known to philosophers as the Lucas-Penrose argument, and has been discussed to death. Most philosophers and mathematicians consider it invalid. There are lots of references in the Wikipedia article[1].

[1] - http://en.wikipedia.org/wiki/Orch-OR#The_Penrose.E2.80.93Luc...


Think of it kind of like the Sieve of Eratosthenes, but for provability instead of primeness. You start with one axiom and prove everything you can based on that. Then you pick one of the statements you couldn't prove and add it as a second axiom (thereby expanding your axiomatic system), then prove everything that you can with those two axioms. Then pick another unproven statement as your third axiom and repeat. Godel's incompleteness theorem is equivalent to saying that you will never run out of axioms, and that you will be able to continue this process indefinitely (just like there are an infinite number of primes).

It's not an exact comparison, but it gives you the idea.
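
For reference, here is the sieve itself in plain Python; the "take the smallest unaccounted-for item and propagate its consequences" loop is the part the analogy borrows:

  def primes_up_to(n: int) -> list[int]:
      """Sieve of Eratosthenes: repeatedly pick the next surviving
      number and cross out everything it generates, much like adding
      an unprovable statement as an axiom and deriving consequences."""
      is_candidate = [True] * (n + 1)
      is_candidate[0:2] = [False, False]
      for p in range(2, int(n ** 0.5) + 1):
          if is_candidate[p]:
              for multiple in range(p * p, n + 1, p):
                  is_candidate[multiple] = False
      return [i for i in range(2, n + 1) if is_candidate[i]]

  print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]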


I think your comparison misses an important point. If you pick in every step a sentence that is non-provable from your current axioms, but not contradictory with them either, after infinitely many steps[1] you will have picked all of them, and get a system called True Arithmetic, in which every true sentence is provable, essentially by definition. The reason why Goedel's incompleteness theorem does not work here is that your set of axioms is not recursively enumerable, and this is a necessary assumption of GIT.

[1] - If "after infinitely many steps" sounds fishy to anyone, please keep in mind that you start with an infinite list of axioms anyway -- first order Peano arithmetic theory has an induction axiom schema Ind that basically says that for every sentence f, Ind(f) (the sentence you get by substituting every occurrence of a single free variable in Ind with f) is an axiom of PA.


The "infinitely many steps" does sound fishy to me, because I wonder whether the number of axioms in True Arithmetic is countably infinite or uncountably so, and whether it makes a difference (I think it does).

But yes, the comparison is an analogy, certainly not correct at every level.


The set of all statements is easily seen to be countable, so the subset of true statements, being infinite, is countably infinite as well.

It does not make any difference, though, since what matters here is, as I said, recursive enumerability of axioms. Anyway, transfinite induction lets one use "pick next element" arguments even on uncountable sets.
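
To make the countability claim above concrete: enumerate all finite strings over your language's alphabet in length-then-lexicographic order, and every statement appears at some finite position. A toy sketch in Python (the two-symbol alphabet is just for illustration):

  from itertools import count, product

  def all_strings(alphabet: str):
      """Yield every finite string over `alphabet`, shortest first.
      Any fixed statement shows up after finitely many steps, which
      is exactly what countability means."""
      for length in count(1):
          for chars in product(alphabet, repeat=length):
              yield "".join(chars)

  gen = all_strings("01")
  print([next(gen) for _ in range(6)])  # ['0', '1', '00', '01', '10', '11']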


An interesting comparison. Just one important difference, though, if I understand correctly: in Gödel's world you will sooner or later ruin your axiomatic system by adding an axiom that is inconsistent with one or several of your earlier axioms. There's no way to know (from within the system) when that happens, except perhaps that it gets a whole lot easier to prove weird stuff. :)


I guess I should have said that you pick as your new axiom a statement that you couldn't prove and also couldn't disprove. Though I'm not sure if even that is the same as logical independence, which is what you really want.


> Gödel showed that provability is a weaker notion than truth

This, to me, is a great summation and one of the most important conclusions we can draw. When you reflect on it, it's easy to see how Godel's theorems reach beyond computer science and into philosophy, ethics and religion.


Not that easy, because it only applies to truths represented in a system of symbols. Very abstract stuff, and in fact trivial to overcome: use two sets of symbols.

The theorem was important at the time because it ended one pursuit (one single-level system to rule them all), but its actual impact on anything of importance in the wider world of, well, anything, is vastly overrated.


I believe you would need to establish some sort of equivalence between arithmetic on the set of natural numbers and philosophy, ethics, and religion in order to apply Godel's incompleteness theorem to any of them, since Godel's theorem applies to sets of axioms used to prove things about the natural numbers.


If we use a UTM as a metaphor for the axioms themselves, the UTM must run itself, essentially describing an infinite loop over the finite program. That raises another point to consider: with any axiomatic system, is it possible to describe a UTM that you can guarantee halts in finite time? Probably not necessarily.

It seems to be a solid notion that if a paradox can be constructed, then the space is unverifiable, but in this case it feels like the paradox results only from a poorly specified question (the correct answer exists, and is an acknowledgement that the input is meaningless).

The important part to take away is that Gödel himself exists in a logical space where he is able to understand and deal with the absurdity of his paradox, which is outside the scope of the UTM. Every logical system containing a paradox is a subset of a logical system wherein the absurdity of the paradox is understood. So, perhaps a logical system which explicitly recognises paradoxes as absurd can be complete. Like NaN in IEEE 754, or int|null in dynamic languages.

The result clearly only applies to logical spaces where the paradox is translatable, so I'd be interested to see if a version exists for only basic arithmetic.

A similarly interesting related discussion is the total, abject, and infinite unavailability of a "quadratic formula" for polynomials of degree 5 or greater.


> the UTM must run itself, essentially describing an infinite loop over the finite program,

Exactly. I came up with this counterargument not so long ago. It's why neither Gödel's theorems nor the Halting problem ever really impressed me. I don't doubt their validity, but the conclusion that no system can be complete is only true for very stringent definitions of "complete". I still believe the Halting problem can be solved in some sense of the word solve. But perhaps I'm being too pragmatic.

> So, perhaps a logical system which explicitly recognises paradoxes as absurd can be complete.

Gödel explicitly counters this with his second incompleteness theorem, which says that no consistent system can provide a proof of its own consistency. In other (but equivalent) words: I might be confident that my mind works correctly, but there's technically no way to be completely sure. In fact, if I were sure of it, my mind wouldn't be working fine.


> I still believe the Halting problem can be solved in some sense of the word solve. But perhaps I'm being too pragmatic.

While it's true that some forms of functions can be shown either to halt or not, some can't. For example: ask the user for a boolean, and run a while loop based on that value. You can't say whether this will halt or not, because it contains unspecified inputs.

Then consider the Collatz Conjecture. We can't prove that any number other than 1 actually reaches 1 without simulating the process. Since we don't even know whether the next step will reach the destination, we can't make any decision about when it will halt. If we can't even decide it for such a simple function, then I don't think we can 'solve' it, even for a general 'solve'.
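
For the curious, the Collatz process is a couple of lines to simulate, and running it tells you nothing about inputs you haven't tried, which is exactly the point:

  def collatz_steps(n: int) -> int:
      """Follow the Collatz process until reaching 1 (if ever) and
      return the number of steps. Nothing here proves termination;
      we only find out by running it."""
      steps = 0
      while n != 1:
          n = 3 * n + 1 if n % 2 else n // 2
          steps += 1
      return steps

  print(collatz_steps(27))  # 111 (no known proof it halts for every n)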


> For example: ask the user for a boolean, and run a while loop based on that value. You can't say whether this will halt or not, because it contains unspecified inputs.

I don't think that's an issue. The Halting Problem asks whether a program will halt when supplied with a given input. Nothing is stopping us from providing a separate proof, for every single program and every possible input, that it'll stop (or that it won't).


Either I'm misunderstanding your ideas or you've misunderstood the nature of the problem.

If you're being pragmatic, rest assured that for any machine/program we can construct in this universe, we can decide the halting problem, as it is bound to be finite state, i.e., it has a bounded number of states it can be in before it must repeat itself. Less technically: there is practically no infinite search space, even if theoretically there is; the memory of a physical computer can only count up to a certain number, even if very large.

The halting problem, or undecidability in general, is the application of a very basic intuition. If I give you infinite space and ask you to find me a certain something in that space, there is no way I can be sure whether or not you will succeed. Diophantine equations, for example: solving them requires searching infinitely through the integers, and this is analogous to the halting problem.

Now in which sense of the word 'solve' could you possibly believe these can be solved? Yes - our problems and machines are finite, so in that sense maybe.
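
In that finite sense the decision procedure is genuinely simple: run the machine and watch for a repeated state. A sketch, where step and is_halted are stand-ins for a deterministic machine's transition function and halt test (illustrative names, not a real API):

  def halts_finite_state(step, state, is_halted) -> bool:
      """Decide halting for a deterministic process with finitely
      many states: if a state ever repeats, it must cycle forever."""
      seen = set()
      while not is_halted(state):
          if state in seen:
              return False  # revisited a state, so it loops forever
          seen.add(state)
          state = step(state)
      return True

  # Counter mod 5 that halts at 3: visits 0, 2, 4, 1, 3 and stops.
  print(halts_finite_state(lambda s: (s + 2) % 5, 0, lambda s: s == 3))  # True
  # Counter mod 4 that can never reach 3: cycles 0, 2, 0, ...
  print(halts_finite_state(lambda s: (s + 2) % 4, 0, lambda s: s == 3))  # False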


What I mean is that while the proof of the Halting Problem is completely valid, I find it rather "weak", as it uses self-reference. It doesn't tell you anything about how things go for other programs. The only thing you know is that no single procedure decides halting for all programs and inputs. What would happen if we restricted a program's input to anything that doesn't include that program's entire source code?

It's quite similar to Cantor's diagonal argument. Both proofs are completely correct and use very similar reasoning. The difference is that I'm not interested in almost enumerating the real numbers (I can do that with the rational numbers anyway), but I would be interested in almost solving the Halting Problem.


The BBC had a 45-minute radio panel with Marcus du Sautoy and some other mathematicians going through Godel's theorems: http://www.bbc.co.uk/programmes/b00dshx3 (I believe people outside of the UK can listen to it... if not, sorry! For some reason the 'listen' image does not appear for me, but it is to the left of '(45 minutes)'.)

(And if you like it, the whole set of math and science episodes of In Our Time is similarly excellent.)



I of course have no proof, but I sometimes get a feeling that Gödel does have something to do with the limits of rational thought.

To me it seems humans have found a way to cope with Gödel's incompleteness: we accept that some of our axiomatic systems are inconsistent and adapt by simply discounting the value of proofs.

It's difficult to explain, but let me give an example. Among engineers you can often reason along very long logical chains and have your conclusions accepted. Among (some) business people you can't. They seem to refuse the validity of logic itself, being nervous about trusting logical conclusions. I think that's because they know their axiomatic systems are self-inconsistent. :)

By observing business people I have come to the conclusion that the best you can do in a self-inconsistent axiomatic system is to look for "nuggets of truth" and never wander too far from them. You can hear people say stuff like "limiting tweets to 140 characters brings out creativity - people are less afraid to create when they are constrained" and similar. If you accept that as truth then you can wander a short distance from it and reason about business opportunities, but you can't combine that nugget with some other nugget and be sure to come to a correct conclusion.

Perhaps it is so that the likelihood of "proving" an untrue statement in a self-inconsistent system increases with the number of axioms you base your proof on. That's perhaps why many people are very skeptical of "proofs" that seem to involve a lot of axioms? :)


Surely a simpler explanation for that is that business people don't usually work with crisp, binary data. Instead they're (implicitly) doing some kind of statistical inference on noisy data.

Propositional logic might be an acceptable approximation to this kind of inference in certain narrow situations, but the approximation breaks down more quickly as the chain of inference gets longer.


That could certainly be a simpler (and therefore better) explanation. :) When your "axioms" are fuzzy so to speak, and not real axioms, it's of course dangerous to draw conclusions from several of them (because the risk of one of them being false increases exponentially).

But I still have a feeling there's something there... :) For example, I remember reading here on Hacker News about a CS professor who had found a way to predict performance in entry-level programming courses: http://www.eis.mdx.ac.uk/research/PhDArea/saeed/. Basically, they just tested their students' ability to form a self-consistent model of how programming works. If they could, they did well in the course. If they couldn't, they did poorly, and it was very difficult to help them.

That experiment leads me to believe that about 50% of the population is in the habit of constructing self-inconsistent systems of hypotheses (provisional axioms, you could say). :) If that's true I'm not sure yours is a simpler explanation... ;)


> The implication is that all logical systems of any complexity are, by definition, incomplete; each of them contains, at any given time, more true statements than it can possibly prove according to its own defining set of rules.

Here is an even simpler explanation:

A description of a thing is not the thing itself; it's just a description that allows you to interact with that thing and place it into a framework.

And an even more accurate description of a thing is still not the thing itself, it's just a more accurate description.

Add more layers, and you still have just a description, and never the thing itself.

It's like an onion, and each layer gets more distant from the core.

Then those layers begin to interact with other descriptions of other things. So more layers are added to explain those interactions.

It goes on and on until you've simulated the universe.

The problem is it's exponential, and even if it were not, you're still stuck with just a description, and not the thing itself.

Some people will claim otherwise here, so just ask them if a description of a thing is the same as the thing itself and go back to the start of all this.


If I'm not mistaken, one way of seeing GIT is just as a reductio ad absurdum on the idea that truth is just that which is proved.

Perhaps you should consider that talking about ontology muddies the waters here; this explanation might have some analogical value, but it isn't 'simpler'.


Seems like a silly paradox. Why is it important?

Also it seems very similar to this famous paradox (from 600 BC, btw): http://en.wikipedia.org/wiki/Epimenides_paradox


Constructing the paradox is just the means of proving the theorem. The paradox itself is not particularly important. Also, the paradox in Godel's theorem is "This statement is unprovable" not "This statement is false".


While it might not be the most technically accurate, the ways in which David Foster Wallace touches on Gödel in Everything and More are worth a look for interested parties.


The referenced page doesn't deliver on the hype of the HN headline; it merely offers:

"... some selections that will help you start to understand it."


The greatest explanation I've ever seen for the Incompleteness Theorems comes from Palle Yourgrau in his book A World Without Time. He gives an excellent introduction to his explanation: "To appreciate Godel's theorem is your birthright; let no one, including the mathematical police, deprive you of what you have a right to enjoy."


Who in the world are the 'mathematical police'?


A rhetorical and shadowy group who discourage ordinary people from reading about or becoming interested in mathematics. A lot of popular science authors invoke this underground organisation, so it must exist.


With regard to the "Rucker, Infinity and the Mind" excerpt that says:

> The proof of Gödel's Incompleteness Theorem is so simple, and so sneaky, that it is almost embarrassing to relate. His basic procedure is as follows: (...)

Actually, Godel's theorem is not that simple at all. It involves lots of hard math. And it's not about some hazy "Universal Truth Machine"; it's about a specific axiomatic mathematical system with certain specific properties.

The "simple" thing that RI&TM describes is a variation of the Liar's paradox. Which is somewhat like what Godel used, but he did not use it in a simplistic way, not at that level of coarseness, and surely not "embarrassingly simple to relate".



