Kurt Gödel and the Mechanization of Mathematics (the-tls.co.uk)
155 points by vo2maxer on Dec 17, 2019 | 132 comments



One of the most interesting things about Gödel's Incompleteness theorem must surely be how _little_ effect it has had on the day to day practice of mathematical research. It's one of the most profound mathematical results in the last 100 years, and yet outside foundational fields like axiomatic set theory (Cohen's work on the Continuum Hypothesis comes to mind) it's quite possible to completely ignore the existence of undecidable statements. Gödel's work had a deep impact on mathematicians' _philosophical_ views of what they do, but that didn't carry down into the actual day to day business of proving theorems.

Isn't that strange? Wouldn't you think that undecidable propositions would be scattered through mathematics and turn up at curious and interesting moments in different fields? But apparently that's not generally the case. The Goldbach conjecture (Knuth) and of course the Riemann Hypothesis have been mentioned as possible candidates (but you might have said the same thing about Fermat's last theorem 30 years ago). If we did find a first-rank conjecture in number theory or some other area that's undecidable in ZFC (or whatever system) then we would at last be stepping through the door that Gödel opened for us.


Just like the conservation of energy has little effect on day to day practice of physics, besides blocking off an entire unbounded fruitless field of research into free energy.

In other words, impossibility results make it possible to have a practical discipline where people are focused on feasible approaches instead of spending all their time and money researching pie in the sky technologies that can never work. It's like when alchemy turned into chemistry and astrology turned into astronomy, because the fields became restricted to what is realistic.

Arguably the same still has to occur in other disciplines cough AI cough.


Actually, conservation of energy has enormous effect on the day-to-day practice of physics.....


AI has a ton of impossibility results. The "no free lunch theorems" are probably the best known. People now actively think in terms of designing assumptions, priors, or inductive biases to make their systems work better on the data they actually care about, because there's typically no solution that is always superior without tradeoffs.

https://en.wikipedia.org/wiki/No_free_lunch_in_search_and_op...


With respect to Intelligent Systems, there are two important theoretical results.

According to [Shamir, et al. 2019] “given any two classes C1 and C2, along with any point x∈C1, and our goal is to find some nearby y which is inside C2. Our ability to do so suggests that all the classes defined by neural networks are intertwined in a fractal-like way so that any point in any class is simultaneously close to all the boundaries with all the other classes.” Their results suggest that gradient classifiers (aka "Deep Learning") could remain forever fragile in ways that cannot be remedied by larger training sets.
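
To make the kind of attack concrete: below is a generic gradient-based sketch of my own (not the construction from [Shamir et al. 2019]). It assumes "model" is any differentiable PyTorch classifier that maps a batched input to class logits, and it searches for a nearby point that the model places in a chosen target class.

    import torch
    import torch.nn.functional as F

    def find_nearby_in_class(model, x, target_class, eps=0.1, steps=20, lr=0.01):
        # Search, by gradient descent, for a point y within an eps-ball of x
        # that the classifier assigns to target_class.
        y = x.clone().detach().requires_grad_(True)
        target = torch.tensor([target_class])
        opt = torch.optim.SGD([y], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = F.cross_entropy(model(y.unsqueeze(0)), target)  # push y toward target_class
            loss.backward()
            opt.step()
            with torch.no_grad():
                y.copy_(x + (y - x).clamp(-eps, eps))  # keep y close to the original x
        return y.detach()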

Also important are the results on Inference Robust Logic (e.g., speed bumps) in the following:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3428114


This seems to be the case with all the widely used ML paradigms: NNs, decision forests, and SVMs. They're favored for their high-accuracy models, which seem to come at the cost of massive overfitting and hence fragility.

I used a human in the loop approach for comparison, and those models, while not as accurate, seem much more robust. I think humans are better at identifying the big picture patterns, whereas ML approaches will microoptimize their way to a fragile approximation of the big picture pattern.

Practically, besides likelihood of breakdown, this means these overfit ML models are very susceptible to hacking. I predict once ML becomes more commercially widespread, we'll see a big increase in ML hacking.


I would counter that Gödel's Incompleteness theorem has had the greatest effect on the practice of mathematics by giving rise to digital computing, which has revolutionised mathematics - all as a side effect of Turing's and Church's proofs of Gödel's theorem.

Turing's proof of Godel's incompleteness devised the Turing machine as a metaphorical device to demonstrate incompleteness - this then led to the whole of computing.

And for Church's proof of Godel's incompleteness he devised the lambda calculus which founded functional programming, McCarthy's LISP, and the Y-combinator of algebraic recursion.


> it's quite possible to completely ignore the existence of undecidable statements.

It's not being ignored, it's just that once you know about the theorem, there are whole classes of problems that are no longer being worked on because they're impossible to solve.

It's like once you know about relativity, you stop trying to learn about the properties of the aether.


> there are whole classes of problems that are no longer being worked on because they're impossible to solve.

Genuinely curious - what problems? (Outside special things in Axiomatic Set Theory like the Continuum Hypothesis.) E.g. are there topologists not working on certain problems in Topology because they're known to be undecidable?


Some computer science problems are basically reformulations of incompleteness theorems (which makes proving incompleteness even easier, at least for someone familiar with programming). E.g. see "Is there any concrete relation between Gödel's incompleteness theorem, the halting problem and universal Turing machines?" https://cs.stackexchange.com/questions/419/is-there-any-conc...
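
For a concrete feel of that relation, here is a minimal sketch (my own illustration; enumerate_proofs and conclusion are hypothetical helpers, not from any real library) of how provability questions become halting questions:

    def search_for_proof(system, statement):
        # Proofs are finite strings, so they can be enumerated and checked
        # one by one. This loop halts iff `statement` is provable in `system`.
        for proof in enumerate_proofs(system):    # hypothetical enumerator
            if conclusion(proof) == statement:    # hypothetical proof checker
                return proof
        # runs forever when no proof exists

    # If PA decided (proved or refuted) every statement of the form
    # "machine M halts on input x", then running search_for_proof on such a
    # statement and on its negation in parallel would decide the halting
    # problem. Since that is impossible, some true statements of this form
    # are unprovable in PA (assuming PA proves only true such statements).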


[Turing 1936] pointed out that the theorem of computational undecidability of the halting problem is rather different than Gödel's theorem of the inferential undecidability of I'mUnprovable. In strongly-typed theories, computational undecidability of the halting problem can be used to prove inferential undecidability but not vice versa because the proposition I'mUnprovable does not exist.


I don't think it's too surprising. Mathematicians aren't in the business of searching around for all true theorems. They work on problems of interest. It seems much more likely that you fail to progress for all the usual reasons (it's hard) rather than because it's undecidable.

It's like the halting problem. It doesn't prevent programmers from automating stuff and reasoning about programs.


Yes and no. The analogy with the halting problem is a good one, and I completely agree that mathematicians are guided by what's interesting, not just by enumerating propositions. But they also advance many conjectures that are considered interesting open propositions to prove or disprove. Very very few of these interesting conjectures have turned out to be undecidable (the Continuum Hypothesis being, again, a notable special exception). And I guess that's my point: it's a curious fact that undecidability and "interestingness" don't seem to intersect very much, as far as we can tell. I don't think that's something that self-evidently had to be the case, but it's where we are after close on 100 years of living with what Gödel showed us.


In strongly-typed higher-order theories, the Continuum Hypothesis has _not_ been proved to be inferentially undecidable because Cohen's results do not apply to strongly-typed higher-order theories.


https://en.m.wikipedia.org/wiki/Goodstein%27s_theorem

Not provable in Peano Arithmetic.

https://en.m.wikipedia.org/wiki/Continuum_hypothesis

Not provable in ZFC.

There are a few more.


Goodstein's theorem is easily provable in the [Dedekind 1888] theory of the Natural Numbers.

Cohen's incompleteness results do not hold for a theory of sets that axiomatizes sets up to a unique isomorphism. See the following:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3457802



What browser are you using?

The link below works for me in FireFox:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3457802


GNU Wget 1.20.1 built on linux-gnu.

That URL produces an HTML page containing a link to "Delivery.cfm/SSRN_ID3488706_code2561841.pdf?abstractid=3457802&mirid=1" with text "Download This Paper", which I assume is supposed to be the direct link. However https://papers.ssrn.com/sol3/Delivery.cfm/SSRN_ID3488706_cod... 302 redirects back to https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3457802 .


The following link also works for me in Chrome on Windows:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3457802


That is the same link. It produces an HTML page containing an abstract of the paper (not the paper itself) and a link that purports to be to the full paper, but actually 302 redirects back to your (abstract) link.


But the thing is you are just trading off computability. Aren't you?

> The theory Ordinals is algorithmically inexhaustible


Excellent question!

Proof checking in the theory Ordinals is algorithmically checkable although theorems are not computationally enumerable because instances of strongly-typed induction axiom are uncountable.


Thank you! Can't believe I had a conversation with Prof. Carl Hewitt here! Love the actor calculus btw!


You are very welcome!

Cheers, Carl


I don't think Heisenberg's 'uncertainty principle' has had much impact on day-to-day physics either. But the idea of built-in limitations may have helped to make the humiliation of mere probabilities more palatable. As did the idea that the observer is part of the equation.

"What we observe is not nature itself, but nature exposed to our method of questioning ."


Physical indeterminacy is extremely important in modern day many-core computation because of essential reliance on arbiters with physical indeterminacy. See the following for an axiomatization:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3459566


Stephen Wolfram has an interesting article about Gödel and the implications of his work for the 21st century.

https://writings.stephenwolfram.com/2006/05/kurt-godels-100t...


> One of the most interesting things about Gödel's Incompleteness theorem must surely be how _little_ effect it has had on the day to day practice of mathematical research.

This seems like post-hoc reasoning. Imagine the state of mathematics if his theorems had not been true and ZFC (or some extension) had been proven complete and consistent. Mathematics could have been the first entire field of knowledge to automate itself out of a job!


It is quite easy to prove consistent the theory ZFC and even a much more powerful theory of the ordinals. See the following:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3457802


Just like the problem of induction didn't stop scientists from doing science?


The undecidability of the halting problem is directly related to the existence of undecidable statements, and several important problems are uncomputable. (The word problem for groups, for example.)


Actually, because of the nonexistence of Gödel’s proposition I'mUnprovable in strongly-typed theories, the computational undecidability of the halting problem can be used to prove inferential undecidability (incompleteness).


(total novice question here) Isn't it that because you can approach these mathematical questions from a variety of axiomatic systems?


I am not much of an expert either, but as I understand it Zermelo-Fraenkel set theory is the most common foundation of mathematics and it is stronger than Peano arithmetic, so I think Gödel's results hold in general.


Gödel's proof of inferential undecidability (incompleteness) does not work in strongly typed theories because his proposition I'mUnprovable does not exist.


It's possible that P=NP? is undecidable.


Yes! And 4 out of the 79 theorists from this survey[1] think that it is.

[1] https://www.cs.umd.edu/~gasarch/papers/poll.pdf#page=3


I have noticed that to get day-to-day mathematical work done of any kind, you basically have to at least pretend to be a mathematical Platonist. The mathematical objects are things "out there" to be discovered or manipulated. Everybody seems to do this, whatever their professed philosophical views on math.


> The First Incompleteness Theorem tells us that assuming PA is consistent, one can formulate propositions that cannot be proved (nor disproved) on the basis of PA, but which are nevertheless true.

As far as I know, that last clause was never a claim made by Godel. Specifically the claim that these propositions are nevertheless true. This follows quite simply from the assumption that either a proposition is true, or its negation is true (i.e. the law of the excluded middle). But I recall reading that Godel, in his proofs, went to great lengths to avoid the law of the excluded middle over infinite sets.

Indeed, (again AFAIK) Gödel's first incompleteness theorem still holds in constructive mathematics (roughly, where the law of the excluded middle cannot be used). As such, I think the addition that these propositions are 'nevertheless true' actually hides a lot of complexity. Because there are philosophical positions in which these independent propositions are neither true nor false.

And really, this is why we call them independent. The truthiness of these propositions is not determined by their axiom systems. It's a rather big philosophical stance to say that the 'truth' of these propositions can be determined outside of the axiomatic systems. It seems to me that such a claim presupposes an objective (as in, not dependent on an axiomatic system) definition of what is true.


There is an objective definition of truth in any particular model. For any particular model M of, say, Peano arithmetic, there is an objective, unambiguous, perfectly well-defined notion of truth in that model (of course, the definition of truth cannot itself be formalized within the model).


> There is an objective definition of truth in any particular model.

"in any particular model" - this is exactly parent's point.

From Wikipedia, a restatement of the 1st incompleteness theorem:

> there are statements of the language of F which can neither be proved nor disproved in F

The truth of a proposition P, within F, means "P is an axiom of F, or can be proved in F".

Saying "there are statements of the language of F which can neither be proved nor disproved in F... but the statements are true nevertheless" is appealing to some 'objective notion' of truth that transcends F.


>The truth of a proposition P, within F, means "P is an axiom of F, or can be proved in F".

No, truth and proof are NOT the same thing. The system F needn't have any notion of truth. For example, Peano arithmetic has no notion of truth. Peano arithmetic has notions of zero, successor, addition, and multiplication--none of truth.

Almost all competent mathematicians would agree that there exist statements in the language of PA which are neither provable nor disprovable in PA but which are nevertheless true. Here, "true" means "true in the standard model of PA", not "true according to PA"---the latter isn't even meaningful because there is no such thing as "truth according to PA".


Out of curiosity, could you give some examples of such theorems?


1. The original example would be Goedel's own example, namely, the statement (formulated in the language of PA) that PA is consistent, i.e., that PA does not prove 1=0.

Less contrived examples are:

2. Goodstein's Theorem https://en.wikipedia.org/wiki/Goodstein%27s_theorem

3. A theorem stating that every computable strategy is a winning strategy in the hydra game (see https://faculty.baruch.cuny.edu/lkirby/accessible_independen...)

4. The strengthened finite Ramsey theorem, see https://en.wikipedia.org/wiki/Paris%E2%80%93Harrington_theor...
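
For anyone who wants to see example 2 concretely, here is a minimal sketch (my own illustration) of the sequence Goodstein's theorem talks about: write the current value in hereditary base-b notation, bump every b to b+1, subtract one, and repeat. The theorem says every such sequence eventually reaches 0, and that fact is unprovable in PA.

    def bump(n, b):
        # Rewrite n in hereditary base-b notation, then replace every b by b+1.
        if n == 0:
            return 0
        total, exponent = 0, 0
        while n > 0:
            n, digit = divmod(n, b)
            if digit:
                total += digit * (b + 1) ** bump(exponent, b)
            exponent += 1
        return total

    def goodstein(m, steps=8):
        # First few terms of the Goodstein sequence starting at m.
        seq, base = [m], 2
        for _ in range(steps):
            if m == 0:
                break
            m = bump(m, base) - 1
            base += 1
            seq.append(m)
        return seq

    print(goodstein(4))   # [4, 26, 41, 60, 83, 109, 139, 173, 211]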


Thanks!


Philosophical arguments aside, Gödel himself was absolutely, even evangelically, a mathematical realist. He believed any meaningful statement about mathematics had an objective truth value, axiomatic systems be damned.

https://plato.stanford.edu/entries/goedel/#GodRea


> It's a rather big philosophical stance to say that the 'truth' of these propositions can be determined outside of the axiomatic systems.

I don’t think so. ZFC is either consistent or it isn’t. That is, a Turing machine enumerating all theorems of ZFC either produces 0=1 or it doesn’t. This is, however, independent of ZFC (if the latter is consistent) by the second incompleteness theorem. See https://mathoverflow.net/a/332266, in particular Nik Weaver’s comment:

> I think the strongest argument in favor of the definiteness of ℕ, and against the idea that PA, or any other axiomatization, is the be-all end-all, is the evident fact that we have an open-ended ability to go beyond any computable set of axioms, for instance by affirming their consistency. If you accept PA you should accept Con(PA), and the process doesn't stop there: you can then accept Con(PA + Con(PA)), and so on. This goes on to transfinite levels. If our understanding of ℕ really were fully captured by some particular set of axioms then we would not feel we had a right to strengthen those axioms in any special way; the fact that we do feel we have this right shows that our understanding is not captured by any particular set of axioms.


I'm not 100% sure this perspective works, but could one take the position that this is because of a disconnect between model and reality?

You look at some big list of symbols and say "this formula (model) corresponds to whether a Turing machine enumerating all theorems of ZFC produces 0=1 (reality)". But does it really?


In a way. Consider a Turing machine that enumerates all proofs in ZFC and halts iff it finds a proof of 0=1 (i.e. a contradiction). The statement that this Turing machine halts (like the statement that any given Turing machine halts) is what’s called a Sigma_1 statement, which lies at the bottom of the arithmetic hierarchy (right above bounded-quantifier statements). ZFC + ~Con(ZFC) is consistent but Sigma_1-unsound and therefore omega-inconsistent [1].

[1] https://en.wikipedia.org/wiki/Ω-consistent_theory


What Gödel does is the following: he constructs a sequence of symbols which is not decidable (i.e. there is no “proof” of it or of its negation).

Then (and this is super important) he explains that IN THE STANDARD INTERPRETATION that sequence of symbols can be translated to “the sequence of symbols corresponding to the number XXXXX is a statement which has no proof” (and has to be true) and that very sequence corresponds to that number XXXXX.

But those two things (the construction and the interpretation) are totally separated, and he goes to great lengths to keep them so.


Question for you. The first part...how does he ‘construct a sequence of symbols which is not decidable’ to begin with?


That is the summary, not "the first part". The how is the second paragraph.


> formalism: the idea that mathematics can be reconstructed in a content-free manner, or more strongly put, that mathematics itself is nothing but “a formal game of symbols”.

This is a common misrepresentation of Hilbert's Formalism. It doesn't state that no mathematical sentence has content beyond its mere symbols, but that not every mathematical sentence has content beyond its mere symbols.

Formalism was meant as a compromise between the two dueling philosophies of mathematics of the time: Russell's Logicism, which essentially said that all of mathematics can be derived from some self-evident, obviously true, logical axioms, and Brouwer's Intuitionism, which essentially said that the truth of mathematics stems from empirical confirmation that breaks down when infinities are involved, and therefore some logical axioms cannot be taken to be true for infinities. Hilbert tried to accept Brouwer's finitist philosophical position without rejecting any useful axioms. Formalism, then, separates mathematical sentences into two kinds: "real" -- those dealing with finite quantities that can be tied to physical observation and prediction, and "ideal" -- those dealing with infinities. Instead of rejecting the latter, Formalism says that as long as they don't introduce inconsistencies that invalidate the former, one could view them -- the ideal sentences -- as having no content beyond symbols, i.e. as not referring to anything "real".


Brouwer was absolutely not a finitist! Indeed, he completely rejected the idea that you could use formal logic to bound what mathematics could talk about.

A better (but still not entirely accurate) description would be that Brouwer had an epistemic conception of mathematical truth: mathematicians become able to assert mathematical truths as they engage in the process of constructing mathematical objects. This is indeed a constructive process, but it isn't finitistic -- for Brouwer mathematical reasoning is open-ended rather than a closed set of rules.

So much like Goedel (who disagreed with Brouwer in many other respects), Brouwer was totally committed to the idea that mathematical statements actually mean things. And because they carry meaning, the mathematical process and the properties of the mathematical objects created by a mathematician take precedence over the rules of logic.

This is why Brouwer rejected classical logic -- not because of any finitistic reason, but because (for example) classical logic permits disassembling the continuum into its set of points. Since (for Brouwer) the point of a continuum is that it is continuous, it followed that any logical principles (such as classical logic plus set theory) that permitted you to break up the continuum were fallacious. This is why he spent a lot of time on his so called "strong counterexamples", such as his proof that intuitionistically all total functions on the reals are continuous.

Notice that the direction of reasoning here is exactly the opposite of what a finitist would accept: Brouwer used the properties of the continuum to justify various finitary logical rules, rather than the other way around. The continuum was real, and logic just a façon-de-parler.

I find this kind of thing really interesting because it illustrates that the pretheoretical idea of truth can be formalized in many different ways, and that the various formalizations are not all compatible with each other. We are used to informal ideas like space having many different mathematical realizations -- metric spaces, topological spaces, vector spaces, etc -- but it is a little disconcerting (and a lot fascinating) to realize that logic and truth exhibit the same plural nature.


Hey Neel! Brouwer was a finitist in the sense that he did not accept infinitist axioms. His epistemology indeed was one of "mathematicians become able to assert mathematical truths as they engage in the process of constructing mathematical objects," but the justification was empiricism. I.e., his concept of establishing truth was empirical, and therefore finitist. Here he is in "On the significance of the excluded middle in mathematics", 1923 (in Heijenoort):

> Within a specific finite "main system" we can always test (that is, either prove or reduce to absurdity) properties of systems, that is, test whether systems can be mapped, with prescribed correspondences between elements, into other systems; for the mapping determined by the property in question can in any case be performed in only a finite number of ways, and each of these can be undertaken by itself and pursued either to its conclusion or to a point of inhibition [here is his mention of induction that I provide below].

> ... On the basis of the testability just mentioned, there hold, for properties conceived within a specific finite main system, the principle of excluded middle

> ... It is a natural phenomenon, now, that numerous objects and mechanisms of the world of perception, considered in relation to extended complexes of facts and events, can be mastered if we think of them as (possibly partly unknown) finite discrete systems that for specific known parts are bound by specific laws of temporal concatenation. Hence the laws of theoretical logic, including the principle of excluded middle, are applicable to these objects and mechanisms in relation to the respective complexes of facts and events, even though here a complete empirical corroboration of the inferences drawn is usually materially excluded a priori and there cannot be any question of even a partial corroboration in the case of (juridical and other) inferences about the past.

Note his use of "testability", "natural phenomenon" and "empirical corroboration." So the source of truth is nature by way of empiricism, and that demands finitism. Even his reasoning about the continuum was finitist: no determination on order/separation can be made after examining a finite prefix of the decimal expansion of a real number (see ibid §2). If he were alive today, I have little doubt he would have been an ultra-finitist -- not any constructivist! -- because, as it turns out, there is no empirical justification (even in principle) to accept infinite induction. The very arguments he uses to reject LEM must also reject unbounded induction: it is not verified by nature.

(Here's what he says on induction:

> Here the principle of mathematical induction often furnishes the means of carrying out such tests without individual consideration of every element involved in the mapping or of every possible way in which the mapping can be performed; consequently the test even for systems with a very large number of elements can at times be performed relatively rapidly.

After the discovery of computational complexity he would have had no choice but to be an ultra-finitist.)

As to the notion of truth, I completely agree, but here Alan Turing came up with a completely different perspective. Whereas both Brouwer and Russell were looking for Truth in mathematics (and Hilbert, too, although he was content with Truth in the contentual "real propositions" and the finitist rules of inference), Turing rejected that notion altogether. To him, mathematics was a means whose intrinsic truth is irrelevant. It is valuable inasmuch as it is useful, not because it is true (see https://mdetlefsen.nd.edu/assets/201037/jf.turing.pdf).


You are being a bit too quick to equate finitism and constructivism. Hilbert was a finitist, but not a constructivist, and Brouwer was a constructivist, but he was not a finitist.

Brouwer's analysis of the continuum depends heavily upon the notion of a "free choice sequence" (aka a lawless sequence). The idea is that a creating subject -- i.e., you or me -- is free to make an unbounded sequence of choices to produce a sequence. At any finite time, you can only have made a finite number of arbitrary choices, but if you wait long enough you can make any number of arbitrary choices. Because these choices are arbitrary, free choice sequences cannot be described as being generated by any finite rule.

This is a constructive process, if you imagine (for example) generating a lawless sequence by throwing dice or from the clicks of a Geiger counter. But it's not a finitist one, because there is no finite data from which a choice sequence can be computed. This let Brouwer retain the idea that a mathematician can only have considered a finite amount of information at any point, without having to make any identification of the real numbers with computable functions.

In terms of complexity, there's an interesting analogy between choice sequences and the question of whether randomization gives you extra power -- the Miller-Rabin primality test lends immense force to the intuition that RP is stronger than P, but we might be centuries from proving it.

Even if we set aside randomness, though, an ultra-finitist/complexity-aware-logic is something that basically every logician I've ever talked to has agreed should exist, except we basically don't know how to do it. It's something that pretty clearly demands more creative insight than technical grunt, and unfortunately imaginative leaps don't come on demand. :/


We need to separate Brouwer's construction of the choice sequences from his philosophical justification for them; in the transition between the two there were just things he couldn't have known. His entire conception of choice sequences was because the mathematician's mind is finite. He couldn't have based his choice sequences on computation because the notion didn't exist yet, but if it had, he wouldn't have needed "lawless sequences". There is just nothing that requires them either in his chosen axioms or in his philosophy -- their salient feature is that one cannot predict their behavior, something that, after Turing, we know is true even for "lawful sequences" but Brouwer didn't. I, therefore, do not think it is correct to say that he wasn't a finitist because he relied on randomness. As far as he knew, randomness was consistent with a finite mind, the finiteness is what mattered to him, and anyway, he cared about undecidability, not randomness, but didn't know the former can exist without the latter.

What is essential to his philosophy is that the only justification for the axioms can come from nature by way of empiricism, and is therefore finite. If you accept Russell's, Brouwer's and Hilbert's belief that mathematics contains Truth, and if you then accept Brouwer's belief that the only source of truth is nature by way of empiricism, you have no choice today but to reject both LEM and induction. It is perfectly fine to work in constructive mathematics, and, in the Turingian sense it is of value, but it is no longer justified by Brouwer's philosophy of mathematical truth. Brouwer would have had to reject today's constructive mathematics on the same grounds he rejected classical mathematics: its axioms are unjustified by nature and empirical observation, even in principle. In fact, I don't think there is any philosophy of mathematics that could distinguish between constructivism and classical mathematics on any reasonable philosophical grounds today.

I would say that Brouwer's philosophy is (with today's knowledge) constructivist and ultra-finitist. BTW, Brouwer was not opposed to formalism as much as he was to logicism, and he even tried to propose a way to reconcile the two ("Intuitionistic reflections on formalism", 1927).


In modern terms, you can identify sequences and automata: a sequence is a current state, plus a transition function giving you the next element of the sequence and the next state. However, if every function is tracked by a computable process, then the transition function must also be tracked by a computable process, and so every infinite sequence must be generated by a computable process.

This identification is precisely what choice sequences are used to rule out. If a sequence is genuinely arbitrary, then you cannot describe it up front with a finite amount of data and a finite rule for updating that data. Instead, you have to imagine a mathematician adding a new axiom at each time step about the next element of a choice sequence, and requiring that any constructions they do works uniformly regardless of what axioms may be chosen in the future.

In this picture, the total amount of axiomatic data is actually going up over time, and this is actually important for the results Brouwer proved. In particular, without the axioms for choice sequences, you cannot derive the strong counterexamples, because you need classically-false axioms to prove classically-false theorems (like the continuity of all real-valued functions).


> This identification is precisely what choice sequences are used to rule out.

But you don't need to rule them out -- Brouwer certainly didn't. He just wanted to rule out decidable sequences. It's the undecidability that's important to his result, not arbitrariness, and that's perfectly compatible with automata. He just didn't know about computational undecidability and thought that undecidability requires lawlessness, but it doesn't. In any event, he didn't think randomness violates finitism. That only finite systems can be the source of axioms has been a constant refrain of his. I see that according to Wikipedia's definition of Finitism, he would be a "classical finitist."

> In particular, without the axioms for choice sequences, you cannot derive the strong counterexamples, because you need classically-false axioms to prove classically-false theorems (like the continuity of all real-valued functions).

The continuity of all real-valued functions can be proven with just computable sequences because of undecidability; you just can't do it if computational undecidability hasn't been discovered yet. In fact, it doesn't even become quite "classically false", but can be "embedded" in classical mathematics by restricting the "all": all computable real functions are continuous is a classical theorem. AFAIK, all constructive theorems can be embedded in classical mathematics by restricting them to computable entities. Do you have an example of any result that relies on uncomputable sequences?


> It's the undecidability that's important to his result, not arbitrariness

Undecidability is arbitrariness - to say that, for example, the axiom of choice is independent of ZF is the same thing as saying that there is an arbitrary choice[0] between ZF+C and ZF+!C. Similarly, the sequence[1] G(PA),G(PA+G(PA)),G(PA+G(PA)+G(PA+G(PA))),... requires (at least) one additional arbitrary bit per item.

0: not asserting anything about what makes that choice, though.

1: where G(X) is the Godel sentence, roughly "X cannot prove G(X)"


What I meant was that Brouwer doesn't need lawless sequences; all the results are valid with lawful sequences only. He just didn't know that because he didn't know that lawful sequences can be undecidable. You can call it arbitrary, but it doesn't have to be random, it just needs to be unknowable, which is the case even if it's chosen by a computational rule. It is true that most of the sequences of the kind you presented are not computable, but I don't think any result of Brouwer's is affected.


Hilbertian formalism is only one particular brand of formalism.

The definition in the article is the one put forth by Frege, perhaps the man most responsible for launching the debate about formalism into the wider philosophical community.

It seems okay to me for an informal article to define formalism in the same way (certainly the game aspect of it is the thing that ties the various schools of formalism together).


Frege started Logicism, so I don't think he would say that mathematical sentences lack content. Where are you getting this?


Frege was certainly arguing against formalism; he was the one responsible for taking its defenders to task.

But the formalism he presented and argued against (and the version held by its defenders of the day) is clearly the form described in the article and the discussions he sparked later all circled around the same.

Frege quotes Heine as

> Suppose I am not satisfied to have nothing but positive rational numbers. I do not answer the question, What is a number?, by defining number conceptually, say by introducing irrationals as limits whose existence is presupposed. I define from the standpoint of the pure formalist and call certain tangible signs numbers.

and Thomae

> The formal conception of arithmetic accepts more modest limitations than does the logical conception. It does not ask what numbers are and what they do, but rather what is demanded of them in arithmetic. For the formalist, arithmetic is a game with signs, which are called empty. That means they have no other content (in the calculating game) than they are assigned by their behavior with respect to certain rules of combination (rules of the game).

within the Grundgesetze der Arithmetik.

And indeed Frege defines the position he is arguing against as

> Arithmetic is concerned only with the rules governing the manipulation of the arithmetical signs, not, however, with the denotation of the signs

These are from Max Black's translation of the Grundgesetze der Arithmetik in his three part series "Frege Against the Formalists." These quotes are from part 1 in The Philosophical Review, Vol. 59, No. 1 (Jan., 1950), pp. 79-93


Now I see what you mean. Excellent! I wasn't at all aware of early formalism.


> Note that if a system is inconsistent then everything is provable, thus if a system is consistent there must be a proposition which cannot be proved.

¬ ∀ (P Q : Prop), (P → Q) → (¬P → ¬Q)


Along these lines, a book I've seen recommended here:

Computability: Turing, Gödel, Church, and Beyond. B. Jack Copeland, Carl J. Posy, Oron Shagrir (2013) MIT Press

Description:

In the 1930s a series of seminal works published by Alan Turing, Kurt Gödel, Alonzo Church, and others established the theoretical basis for computability. This work, advancing precise characterizations of effective, algorithmic computability, was the culmination of intensive investigations into the foundations of mathematics. In the decades since, the theory of computability has moved to the center of discussions in philosophy, computer science, and cognitive science. In this volume, distinguished computer scientists, mathematicians, logicians, and philosophers consider the conceptual foundations of computability in light of our modern understanding. Some chapters focus on the pioneering work by Turing, Gödel, and Church, including the Church-Turing thesis and Gödel's response to Church's and Turing's proposals. Other chapters cover more recent technical developments, including computability over the reals, Gödel's influence on mathematical logic and on recursion theory and the impact of work by Turing and Emil Post on our theoretical understanding of online and interactive computing; and others relate computability and complexity to issues in the philosophy of mind, the philosophy of science, and the philosophy of mathematics.


Using Orders on propositions, the construction of Gödel’s proposition I'mUnprovable is blocked because the mapping Ψ↦⊬Ψ has no fixed point because ⊬Ψ has order one greater than the order of Ψ since Ψ is a propositional variable. Consequently, some other way had to be devised to prove inferential incompleteness without using Gödel’s proposition I'mUnprovable. A complication is that theorems of strongly-typed theories of the natural numbers and computation are not computationally enumerable because there are uncountable instances of the induction axiom in each theory. However, proof checking is computationally decidable :-) The complication can be overcome using special cases of induction axioms to prove inferential incompleteness of the theories. See the following:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3418003


>One way of describing the Incompleteness Theorems (1931) of the Austrian logician Kurt Gödel is to say that he proved, in the form of a mathematical theorem, that the possibility of a fully automated mathematics can never be realized

What Gödel apparently missed (although it's hard to tell without going over his writings carefully) is that humans, by the same criterion, can't "solve" mathematical problems. This is essentially a re-statement of the halting problem (undecidability).

You can't expect to go to the best mathematician in the world (of course that's not really a thing, but say for example Terry Tao), give him an arbitrary problem, and expect the problem to be solved within a fixed time. Or even within some arbitrary but finite time (even if he had infinite scratch paper). If he had an obligation to solve the problem (within an axiomatic basis), there could be problems that go unsolved. Any mathematician knows this -- there's no guarantee the Riemann hypothesis will be resolved -- it could be within a year, within a century, or never.

It's very attractive to think we have some kind of higher powers (that transcend ordinary matter, or computers, or w.e.), but we do not and mysticism is not necessary to explain our abilities.


Yeah I was confused by that and maybe the article could have clarified this in relation to the Church-Turing thesis.


Yeah, that's a lot of Hofstadter's thesis in Godel, Escher, Bach. In his long, rambling way, he eventually gets into the subtle differences between the different formulations of the Church-Turing thesis and argues that all consciousness is the same kind of epiphenomenon as a computer and thus subject to the same rules.


It’s an open question whether it is decidable to show that the P=NP problem is decidable :). We may live in a universe where undecidable problems exist and it may never be possible to show that they are undecidable. Which imho is kind of beautiful. Even if we live in a simulation, it is so incomprehensible that it might as well be God to us.


I always get a feeling articles like the OP (and there seem to be many) are pushing some kind of mysticism instead of clearly explaining...

Can anybody give an example of mathematics that mathematicians consider “true”, but which is not provable in ZFC?

If no such mathematics exist, then mathematics is essentially just a game of symbol manipulation, right?


Mathematicians consider that there are "true" propositions of the form "procedure P on input x does not halt", but the propositions are not provable in ZFC.


Sure, such statements exist. But would a mathematician consider one of them true without a proof of some sort?

(Nope.)


Mathematicians consider them to be true because they are provably true, i.e., they hold in the unique up to isomorphism model of the theory Ordinals, which subsumes ZFC.


Ok, so Ordinals is the new foundation of mathematics so to speak? Doesn’t that mean that all of mathematics can be formalized within this Ordinals theory?


All of classical mathematics before 1930 can be formalized in the theory Ordinals. However, formalizing digital computation requires the theory Actors, which is axiomatized here:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3459566


Wow. Thanks.


Sure. Most mathematicians think it's true that ZFC is consistent; but that can't be proved in ZFC.

mic drop/Halmos block


Most mathematicians probably think the Riemann hypothesis is true, but that wasn’t what I meant. I should probably have said “consider proven”.


Okay, well that's easy, too. ZFC has been proven consistent in other formal systems. For instance, you can prove ZFC consistent using Morse-Kelley theory: https://en.wikipedia.org/wiki/Morse%E2%80%93Kelley_set_theor....

Which is a "stronger" theory than ZFC; but you can also prove ZFC consistent in a system consisting of ZFC plus the axiom that "ZFC is consistent"; or indeed, in weaker theories than ZFC, augmented with the axiom that "ZFC is consistent".


Doesn’t your example of an axiom that “ZFC is consistent” show how little use proofs within stronger systems are? I at least feel tempted to say that proofs of ZFC's consistency within stronger systems tell you nothing about ZFC's consistency.


They most certainly do, though! They do tell you something about the consistency of ZFC; they tell you that if the system you're working in is consistent, then so is ZFC. Is that not worth knowing?

It does sound a bit like you want something out of formal systems that they just can't give you, which is "absolute" truth.

edited to add: It's also worth noting that very, very few mathematicians care about actually formalising proofs in a formal system - it's a niche area. The vast majority of mathematicians go on about their business without giving much thought to ZFC and its axioms at all.

Many can't even name them all. (I know this, because they are quite surprised when you tell them what some of the axioms are. Especially the Axiom of Infinity.) Lol, I probably can't either; I suspect I would miss a few if I did it off the top of my head.


Sure I guess.

I don’t think I want more from formal systems than they can offer. I definitely don’t think of truth as “absolute”.

I think it’s actually the opposite: I’m skeptical of the “absolute” and almost mystical truth that some mathematicians seem to ascribe to mathematics. Do you believe in it? :)


Do I? Absolutely.


So all of mathematics can never be formalized? And this is not just a question of effort and difficulty, it’s a fundamental philosophical limit of some sort?

If so, why should I believe that? What’s the evidence?


Of course, adding the axiom ZFC is consistent to the theory ZFC does _not_ produce a convincing proof of the consistency of ZFC!

The theory Ordinals has a convincing proof of consistency because the theory has a provably unique up to isomorphism model. Consequently, ZFC must also be consistent because it is a special case of the theory Ordinals.


However, the consistency of ZFC can be proved in strongly-typed higher-order theories that subsume ZFC.


What does that mean though?

Does it only mean that ZFC is consistent when subsumed within a strongly-typed higher order theory, or does it mean that ZFC by itself is consistent?


The strongly-typed higher-order theory Ordinals is consistent because it has a unique up to isomorphism model by a unique isomorphism. Consequently, ZFC is consistent because it is a special case of the theory Ordinals.


That ZFC by itself is consistent, i.e. you can't find a contradiction.


This is nonsense. It's true; but nonsense nevertheless. Because it gives the misleading impression that you can only prove the consistency of a formal system using a "stronger" system. But that's quite wrong: it only needs to be a different formal system.


Of course, it isn't "nonsense"; instead simply the state of affairs ;-)


why are you telling me this


The mistake is often made that ZFC is the be-all and end-all ultimate foundation of mathematics. Unfortunately, 1st-order ZFC (often referred to as just "ZFC") is a very weak theory.

For example, if O is the type of the ordinals, then the type Boolean^O (the type of all functions from the ordinals into the type Boolean) cannot be represented in ZFC because it does not exist in the cumulative hierarchy of sets.


No, I mean, why me, specifically? Did I accidentally say something to give the impression I find your odd rantings interesting? Because if I did, please let me know so I can correct the misunderstanding.


Personal insults are not allowed on Hacker News.


>One way of describing the Incompleteness Theorems (1931) of the Austrian logician Kurt Gödel is to say that he proved, in the form of a mathematical theorem, that the possibility of a fully automated mathematics can never be realized.

Which suggests the following solution: limit our endeavors to that part of mathematics that CAN be automatically proven (and drill down infinitely into details thereof). And stub off the rest with good-enough axioms (aka holy doctrine).

Crazytalk? Well it's basically how we as a culture handle pretty much everything, epistemologically (and ontologically) speaking.

It could be very efficient. It could be a dystopian nightmare.


What would be a good starting point to learn more about the topics described in this article? I felt like I understood the gist, but a lot of things went over my head.


Douglas Hofstadter's "Gödel, Escher, Bach" does a very slow gradual walk through Gödel's incompleteness theorem. If you can get into his writing style, it's a good way to really "get" the points.


Thanks for the recommendation! I actually just picked it up off a friend's bookshelf yesterday and it's next on my list of books to read.

I appreciate your response!


Chaitin's "meta math" is also a good read. Less arduous than G.E.B.


Chaitin made the very good point that inferential undecidability (incompleteness) is too important a result to depend on the triviality of the existence of the proposition I'mUnprovable. Fortunately, inferential undecidability of strongly-typed theories can be proved without using the nonexistent I'mUnprovable.



Are there any practical impacts (direct or indirect) of these results?


The computer theory version of Godel's result is the halting problem, and that is applicable all over the place, in the same way that physical conservation laws imply a large variety of inventions are impossible.

For example, it is impossible in general to understand what an English sentence says, because that sentence may describe the operation of an arbitrary program, and understanding an arbitrary program is undecidable.

Another example, it is impossible to detect all variants of malicious programs, because that requires understanding what all programs do, which requires identifying whether an arbitrary program halts or not.
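
The malware example can be made concrete with the standard reduction (a sketch of my own; is_malicious and do_bad_thing are hypothetical placeholders, not real APIs):

    def make_wrapper(program, data):
        # The wrapper is malicious exactly when program(data) halts.
        def wrapper():
            program(data)      # may run forever
            do_bad_thing()     # hypothetical payload, reached only if the call above halts
        return wrapper

    def halts(program, data):
        # A sound and complete malware detector would therefore
        # decide the halting problem, which is impossible.
        return is_malicious(make_wrapper(program, data))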

Yet another example, optimal machine learning is impossible, since that is based on Solomonoff induction, based on Kolmogorov complexity, which is uncomputable due to the halting problem.

And another off the top of my head, it is impossible to create a general program creating program, such that you give it a well defined specification and it spits out a program that meets the requirements, if the requirements are satisfiable, since that requires solving the halting problem.

Finally, humans, while not optimally satisfying all the above tasks, do quite a decent job at these tasks, which is surprising if humans themselves were some sort of algorithm. So, this strongly suggests the human mind is doing some operation beyond computation, which in turn would imply that human intelligence level AI is impossible in theory, and thus impossible in practice.

Furthermore, if the human mind is indeed doing something uncomputable, this has technological implications: human in the loop computation approaches are inherently more powerful than computation alone, a vastly under-researched concept, although arguably the secret to the success of most AI/ML/data mining companies these days, such as Google, Facebook, Amazon, and Palantir.

And the software industry in general is proof of this concept: humans are program generating processes performing much better than the halting problem would lead one to expect if humans were indeed mere computation.

Anyways, a whole bunch of impossibility implications, which have the practical application of protecting your wallet from comp. sci. snake oil salesmen, and the intriguing possibility of 'beyond computation' technology.


> Finally, humans, while not optimally satisfying all the above tasks, do quite a decent job at these tasks, which is surprising if humans themselves were some sort of algorithm.

Whilst I agree with everything in your comment before this point, I don't agree with this.

There is a very big difference between almost achieving something and actually achieving it. For example, there exist algorithms which are able to determine whether a given program halts, for a large class of programs, or to give some probability that a program halts prior to some point, even if they are unable to solve the general halting problem. So you haven't really argued that humans are doing something computers can't do.

In addition, I take issue with your statement that humans "do quite a decent job at these tasks". On the face of it, I agree that yes, (when sufficiently motivated) (some) humans do indeed do a pretty good job of e.g. proving mathematical theorems and writing programs that halt when they should. Certainly better than the best computers we can build. But that "pretty good" is a _far_ cry from the optimality referenced in the theorems you've stated. And if you are using this "pretty good" to indicate that humans are better than any possible algorithm, that argument doesn't follow-through.

I think that human-level algorithms are in principle possible. They will have to incorporate a lot of probability and approximate results and the like, and likely will have something similar to "intuition" where the path from input to output is not clear, whether to human onlookers or to the algorithm itself (whatever that might mean). We already see hints of this today with our deep learning algorithms. Not to say that we are necessarily close to achieving this, or that we ever will, but on the other hand it might very well be just around the corner!


Sure, this observation is not a definitive proof, it is more along the evidential lines.

There is also a class of deciders that can decide an infinitely large class of problems which cannot be decided computationally, yet is still not complete. I'd say humans most likely fall in this class, since they far exceed anything we can do with the best algorithms we've dreamed up, yet also cannot solve everything.

Of course, you can always say there is a yet undiscovered algorithm to fill the gap between state of the art and human performance, but at some point that hypothesis is less well supported by the evidence than the uncomputable mind hypothesis, and I'd say we are well past that point.


Humans exceed our current computers in the same way that a large asteroid impact exceeds our current nuclear weaponry. It's a matter of scale, not of kind.

If we track the capability of computation machines from binary adders through old-school chess AI, expert systems and robotics, to the neural networks playing Go, chess, Starcraft, DOTA, we see an ever expanding sphere of ability and influence - often exceeding what humans predicted was impossible for computers.

> Of course, you can always say there is a yet undiscovered algorithm to fill the gap between state of the art and human performance

This implies a kind of asymptotic approach towards human intelligence, that it's something we keep trying to approach but will never reach.

But beyond "filling the gap" I believe computers have the potential to shoot past us in ability. And if they do, I think it's going to be a pretty shocking experience, because I think the rate of increase in intelligence at that point is going to be bigger than we expect - there's no reason to believe the exact level of intelligence is a special border, objectively speaking.

To claim that the evidence so far supports the uncomputable mind hypothesis seems to me like if we had managed to build tiny motors which had been getting bigger and better each year, and then claiming that we would never be able to beat an elephant in strength because no motor up to that point had yet been able to do so. Yes, our metaphorical motors are still tiny, and yes, it takes other extra pieces of engineering beyond motors to actually push down a tree, and indeed it's possible we will perhaps never build a big enough machine. But to take each larger motor as somehow being evidence AGAINST us ever getting to that point is a strange viewpoint.


I don't see why you believe this. From my perspective, all the recent gains in AI abilities are purely due to faster hardware. But, due to the combinatorics of search spaces, exponential speed increases only get you linear AI improvements. So, we'll rapidly hit peak AI in the near future, falling way short of even emulating human abilities.

As it is, compare the amount of processing, space and energy humans have to expend to perform equivalent tasks to computers, and there is no comparison. Computers are inferior to human performance by extraordinary orders of magnitude. The only reason there is the appearance of parity is because we are not looking at the back end of what it takes computers to accomplish these tasks.

It is like we flipped a coin to create the works of Shakespeare, and waited the many eons necessary for just the right sequence to occur. We then say a coin is just as capable as Shakespeare, but that's only because we've ignored how much effort it took to flip the coin vs how much effort it took Shakespeare.

So, I'm not arguing our motors will not become bigger. I'm arguing that as our motors become bigger, the returns are diminishing exponentially, much more rapidly than can get us to true human parity.


You argue that because it is impossible for an algorithm to do some tasks perfectly, and humans can do "quite a decent job at these tasks," thus human minds are doing something beyond computation. Firstly this just doesn't logically follow. Perfection is completely different from "okay." Secondly it's far from clear that humans do any better than machines on many of these tasks, which would imply the opposite according to your argument.

Your greater-than-mere-computation mind can make huge bucks solving any problem since it can see if a program halts. Let's factor out some large primes and crack RSA. Just make a program that would halt if the answer is less than a certain value, then do binary search. A bit tedious, but surely huge piles of money are motivating. I'd do it myself but my mind seems to be malfunctioning.

Basically, this is just a silly claim, which is a shame because the first half of your response was pretty good.


Sure, perfection is different from okay, but the human "okay" is still far beyond anything we can do algorithmically, at least as far as we've discovered. That is at least evidence in favor of the uncomputable mind.

And you might be correct that a human-in-the-loop approach could crack RSA better than a purely algorithmic one; I've given it a shot in the past, and there was a slight correlation between better answers and human involvement. Not entirely unexpected, since our method of generating RSA primes has to be computable in a tractable time frame, so there is probably some exploitable regularity in the generation scheme that a super-computational process could help identify.


> The computer theory version of Godel's result is the halting problem

I am not sure that I agree with this. Of course, if you can solve the halting problem, then you can decide some propositions. For example, take two Turing machines, one that enumerates proofs and the other that enumerates counterexamples, each stopping as soon as it finds one. Prove that one of them halts and you have decided the proposition.
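
As a sketch (everything here is hypothetical: `halts` is the halting oracle, and `is_proof_of` / `is_counterexample_of` stand for checkers over some fixed formal system; none of these are real APIs):

    # Sketch only: decide a proposition P, given a (nonexistent) halting oracle.
    def decide(P, halts, is_proof_of, is_counterexample_of):
        def find_proof():
            n = 0
            while not is_proof_of(n, P):           # enumerate candidate proofs
                n += 1                             # halts iff some proof of P exists
        def find_counterexample():
            n = 0
            while not is_counterexample_of(n, P):  # enumerate candidate counterexamples
                n += 1                             # halts iff some counterexample exists
        if halts(find_proof):
            return True                            # P is provable
        if halts(find_counterexample):
            return False                           # P is refutable
        return None                                # P is independent of the axioms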

On the other hand, there are "highly undecidable" problems which you cannot solve even if you had an oracle for the halting problem.


The halting problem theorem gives an intuitive way to prove inferential incompleteness for strongly typed theories in which Gödel's proposition I'mUnprovable does not exist.

Using orders on propositions, the construction of Gödel's proposition I'mUnprovable is blocked: the mapping Ψ↦⊬Ψ has no fixed point, because ⊬Ψ has order one greater than the order of Ψ (Ψ being a propositional variable).


Yes. Because of Gödel, von Neumann and others stopped trying to build machines that could prove every mathematical truth.

Of course, following the critiques and further revisions of Zermelo's set theory by Fraenkel, Skolem, Hilbert and von Neumann, a young mathematician by the name of Kurt Gödel announced in 1930 (and published in 1931) a result which would effectively end von Neumann's efforts in formalist set theory, and indeed Hilbert's formalist program altogether: his incompleteness theorem. Von Neumann happened to be in the audience when Gödel first presented it. [1]

[1] https://medium.com/cantors-paradise/the-unparalleled-genius-...


Not really. Gödel's theorem only proved that the specific kind of "mechanization of mathematics" that's implied by Hilbert's program as originally devised is impossible. The wiki has a nice section on varieties of Hilbert's program after Gödel https://en.wikipedia.org/wiki/Hilbert%27s_program#Hilbert%27... , citing (Zach, 2006) https://arxiv.org/abs/math/0508572

Of course from a slightly different POV there are plenty of practical implications, since e.g. unsolvability of the Halting problem can be viewed as a close variant of Gödel's results.


For one practical example, Gödel's incompleteness is directly related to decidability problems in CS. There's this glorious liar-paradox machine discussed by Computerphile [0].

[0] https://www.youtube.com/watch?v=macM_MtS_w4


You want to ask Hilbert ;-) Yes, there are. You/we cannot programmatically derive all mathematical truths from ZFC ("the basic math axioms"). E.g. we recently had the Collatz Conjecture here on HN [1], and I strongly suspect that it can be neither proven nor falsified. Also, if one could solve the Halting Problem, that would be quite useful.
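
For concreteness, the Collatz iteration is exactly the kind of thing a halting oracle would settle; nobody currently knows whether this loop terminates for every positive n (a plain illustration, not any particular library's code):

    # Collatz step: the conjecture is that this loop reaches 1 (and so
    # terminates) for every positive integer n. Whether that is provable
    # from the usual axioms is open.
    def collatz_steps(n):
        steps = 0
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
            steps += 1
        return steps

    print(collatz_steps(27))   # 111 steps before reaching 1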

Thanks to Gödel we know that we might not just be stupid apes: some things are indeed undecidable (even if they must be true or false).

[1] https://news.ycombinator.com/item?id=21780068


Specifically, "mathematical truths" here meams statements that, while logically independent from ZFC, are not independent from e.g. ZFC plus Con(ZFC). Or Con(Con(ZFC)), or Con(for any n, Con^n(ZFC)), or even more elaborate varieties. AIUI, part of what Recursion theory does is characterize these "true" consistency statements. The field is closely related to, e.g. halting oracles in TCS.


True propositions of sets are those that hold in the model (unique up to isomorphism) of the theory Ordinals defined here:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3457802


> some things are indeed undecidable (even if they must be true or false).

What does 'even if they must be true or false' mean? What do truth and falsehood mean if you cannot, even in theory, prove or disprove a proposition?

You are suggesting the existence of some sort of Platonic realm, where propositions have definite truth/falsehood values, while at the same time being inaccessible to us.


The propositions themselves are not inaccessible, but there is no way to automate the process of testing their truth in a way that always works. That is the usual meaning of undecidability.


> The propositions themselves are not inaccessible

Yes, I know. I meant that their truth value is inaccessible. What I'm saying is that claiming 'some things are indeed undecidable (even if they must be true or false)' suggests the existence of a Platonic realm with an existing truth value.

If a proposition is undecidable, it makes no sense to talk about whether 'it actually is true/false', because truth/falsehood is attached to a proposition by virtue of proving/disproving the proposition.


> truth/falsehood is attached to a proposition by virtue of proving/disproving the proposition.

The thing is, this isn't necessarily the case. "Proving/disproving" a proposition can only be done starting from a set of axioms that are independently thought of as "true". But if you think of, e.g. ZFC as true, you'll also think of Con(ZFC) as true, even though the statement of Con(ZFC) can't be proven from ZFC. Similarly for Con(Con(ZFC)) and many sorts of increasingly-complex "consistency" statements.
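
One precise rendering of that tower, in the notation used elsewhere in the thread (each line is an instance of Gödel's second incompleteness theorem, assuming the theory on the left is consistent):

  ZFC ⊬ Con(ZFC)
  ZFC + Con(ZFC) ⊬ Con(ZFC + Con(ZFC))

and so on up the tower, one level at a time.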


Ok, now I see your point. It seems you're advocating for intuitionistic logic: you only consider a proposition "true" if you can build a proof for it. Classical logic (and, therefore, classical mathematics) assumes that it makes perfect sense to talk about the truth value of a proposition without computing/proving it. Even undecidable ones are either true or false.


The law of the excluded middle says that for every proposition, either it or its negation must be true.

But Gödel showed that there exist propositions for which neither can be proven.


The Law of Excluded Middle says:

  ⊢ (Ψ or ~Ψ)

Inferential incompleteness says that there is a proposition Ψ such that:

  (⊬Ψ) and (⊬~Ψ)


> the existence of a Platonic realm with an existing truth value.

This is a standard assumption among mathematicians.


For the [Dedekind 1888] axiomatization of the Natural Numbers, a proposition is true if and only if it holds in the model of the axioms, which is unique up to isomorphism.


There is one: on a particular viewpoint, the Incompleteness Theorems imply that you cannot in general tell whether a program encoded on a UTM will halt or not.



