Hacker News

One of the most interesting things about Gödel's Incompleteness theorem must surely be how _little_ effect it has had on the day to day practice of mathematical research. It's one of the most profound mathematical results of the last 100 years, and yet outside foundational fields like axiomatic set theory (Cohen's work on the Continuum Hypothesis comes to mind) it's quite possible to completely ignore the existence of undecidable statements. Gödel's work had a deep impact on mathematicians' _philosophical_ views of what they do, but that didn't carry down into the actual day to day business of proving theorems.

Isn't that strange? Wouldn't you think that undecidable propositions would be scattered through mathematics and turn up at curious and interesting moments in different fields? But apparently that's not generally the case. The Goldbach conjecture (Knuth) and of course the Riemann Hypothesis have been mentioned as possible candidates (but you might have said the same thing about Fermat's last theorem 30 years ago). If we did find a first-rank conjecture in number theory or some other area that's undecidable in ZFC (or whatever system) then we would at last be stepping through the door that Gödel opened for us.




Just like the conservation of energy has little effect on day to day practice of physics, besides blocking off an entire unbounded fruitless field of research into free energy.

In other words, impossibility results make it possible to have a practical discipline where people are focused on feasible approaches instead of spending all their time and money researching pie in the sky technologies that can never work. It's like when alchemy turned into chemistry and astrology turned into astronomy, because the fields became restricted to what is realistic.

Arguably the same still has to occur in other disciplines cough AI cough.


Actually, conservation of energy has enormous effect on the day-to-day practice of physics.....


AI has a ton of impossibility results. The "no free lunch theorems" are probably the best known. People now actively think in terms of designing assumptions, priors, or inductive biases to make their systems work better on the data they actually care about, because there's typically no solution that is always superior without tradeoffs.

https://en.wikipedia.org/wiki/No_free_lunch_in_search_and_op...
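The flavor of the result can be shown with a toy sketch (my own construction, not from the linked article): two equally naive search strategies, each of which wins on one problem and loses on the other, so that averaged over both problems neither is superior.

```python
# Toy "no free lunch" illustration: count how many evaluations a
# search strategy needs before it first hits the minimum cost.
def evals_to_min(order, costs):
    best = min(costs)
    for n, idx in enumerate(order, start=1):
        if costs[idx] == best:
            return n

left_to_right = [0, 1, 2, 3]
right_to_left = [3, 2, 1, 0]

f1 = [0, 5, 5, 5]  # minimum sits at the left end
f2 = [5, 5, 5, 0]  # minimum sits at the right end

# left_to_right finds f1's minimum in 1 evaluation but needs 4 on f2;
# right_to_left is the mirror image. Averaged over both problems the
# two strategies perform identically (2.5 evaluations each).
```

The gain only appears once you assume something about where minima tend to be, which is exactly the "designed prior" point above.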


With respect to Intelligent Systems, there are two important theoretical results.

According to [Shamir et al. 2019], “given any two classes C1 and C2, along with any point x∈C1, and our goal is to find some nearby y which is inside C2. Our ability to do so suggests that all the classes defined by neural networks are intertwined in a fractal-like way so that any point in any class is simultaneously close to all the boundaries with all the other classes.” Their results suggest that gradient classifiers (aka "Deep Learning") could remain forever fragile in ways that cannot be remedied by larger training sets.
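A stripped-down sketch of the phenomenon (my own toy, far simpler than Shamir et al.'s construction): even for a plain linear classifier, stepping each input coordinate a small amount along the sign of the score's gradient (FGSM-style) flips the predicted class.

```python
# Toy linear "classifier": score(x) = w.x, class = sign of score.
w = [2.0, -1.0]

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x))

def classify(x):
    return 1 if score(x) >= 0 else -1

def perturb(x, eps):
    # Step each coordinate by eps against the current class, in the
    # direction that changes the score fastest (the sign of w).
    d = -classify(x)
    return [xi + d * eps * (1.0 if wi > 0 else -1.0)
            for xi, wi in zip(x, w)]

x = [1.0, 0.5]           # score 1.5  -> class +1
x_adv = perturb(x, 1.0)  # score -1.5 -> class -1
```

In high dimensions the same trick works with a *much* smaller eps per coordinate, since the gradient contributions accumulate across dimensions, which is one intuition for why every point sits close to a decision boundary.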

Also important are the results on Inference Robust Logic (e.g., speed bumps) in the following:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3428114


This seems to be the case with all the widely used ML paradigms: NNs, decision forests, and SVMs. They are favored for their high-accuracy models, which seem to come at the cost of massive overfitting and hence fragility.

I used a human in the loop approach for comparison, and those models, while not as accurate, seem much more robust. I think humans are better at identifying the big picture patterns, whereas ML approaches will microoptimize their way to a fragile approximation of the big picture pattern.

Practically, besides likelihood of breakdown, this means these overfit ML models are very susceptible to hacking. I predict once ML becomes more commercially widespread, we'll see a big increase in ML hacking.


I would counter that Gödel's Incompleteness theorem has had the greatest effect on the practice of mathematics by giving rise to digital computing, which has revolutionised mathematics - all as a side effect of Turing's and Church's proofs of Gödel's theorem.

Turing's proof of Gödel's incompleteness devised the Turing machine as a metaphorical device to demonstrate incompleteness - this then led to the whole of computing.
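The device Turing described is small enough to sketch in a few lines (a toy simulator of my own; rules map a (state, symbol) pair to what to write, which way to move, and the next state):

```python
# Minimal Turing machine simulator. The tape is a dict from position
# to symbol; unvisited cells read as the blank symbol "_".
def run(tape, rules, state="start", blank="_"):
    cells = dict(enumerate(tape))
    pos = 0
    while state != "halt":
        sym = cells.get(pos, blank)
        write, move, state = rules[(state, sym)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# A one-state machine that flips every bit, then halts at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
```

`run("1011", flip)` walks the tape left to right, inverting each bit. The punchline of Turing's paper is that one such machine (a universal one) can simulate all the others when their rule tables are written on its tape.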

And for Church's proof of Gödel's incompleteness, he devised the lambda calculus, which founded functional programming, McCarthy's LISP, and the Y combinator for expressing recursion.
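That combinator still runs today. Here is a sketch in Python (using the Z combinator, the variant of Y that works in strict languages; the factorial example is mine):

```python
# The Z combinator: manufactures recursion with no named self-reference.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial defined without ever referring to itself by name.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
```

`fact(5)` evaluates to 120, even though no function in sight mentions its own name - the fixed point is built entirely from anonymous lambdas, just as in Church's calculus.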


> it's quite possible to completely ignore the existence of undecidable statements.

It's not being ignored, it's just that once you know about the theorem, there are whole classes of problems that are no longer being worked on because they're impossible to solve.

It's like once you know about relativity, you stop trying to learn about the properties of the aether.


> there are whole classes of problems that are no longer being worked on because they're impossible to solve.

Genuinely curious - what problems? (Outside special things in axiomatic set theory like the Continuum Hypothesis.) E.g. are there topologists not working on certain problems in topology because they're known to be undecidable?


Some computer science problems are basically reformulations of incompleteness theorems (which makes proving incompleteness even easier, at least for someone familiar with programming). E.g. see "Is there any concrete relation between Gödel's incompleteness theorem, the halting problem and universal Turing machines?" https://cs.stackexchange.com/questions/419/is-there-any-conc...
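The heart of that connection is a one-paragraph diagonalization, which can be sketched directly in code (a toy of my own, treating programs as plain Python functions): given *any* claimed halting oracle, build a program that does the opposite of whatever the oracle predicts about it.

```python
# Diagonalization sketch: defeat any claimed halting oracle.
def make_diagonal(halts):
    def g():
        if halts(g):
            while True:   # oracle said "g halts", so loop forever
                pass
        # oracle said "g loops", so return immediately
    return g

always = lambda prog: True    # toy oracle: claims everything halts
never = lambda prog: False    # toy oracle: claims nothing halts

g1 = make_diagonal(always)    # predicted to halt, actually loops forever
g2 = make_diagonal(never)     # predicted to loop, actually halts at once
```

Both toy oracles are provably wrong on their own diagonal program, and the same construction defeats any candidate oracle, however clever - which is why no total `halts` function can exist.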


[Turing 1936] pointed out that the theorem of computational undecidability of the halting problem is rather different from Gödel's theorem of the inferential undecidability of I'mUnprovable. In strongly-typed theories, the computational undecidability of the halting problem can be used to prove inferential undecidability, but not vice versa, because the proposition I'mUnprovable does not exist.


I don't think it's too surprising. Mathematicians aren't in the business of searching around for all true theorems. They work on problems of interest. It seems much more likely that you fail to progress for all the usual reasons (it's hard) rather than because it's undecidable.

It's like the halting problem. It doesn't prevent programmers from automating stuff and reasoning about programs.


Yes and no. The analogy with the halting problem is a good one, and I completely agree that mathematicians are guided by what's interesting, not just by enumerating propositions. But they also advance many conjectures that are considered interesting open propositions to prove or disprove. Very very few of these interesting conjectures have turned out to be undecidable (the Continuum Hypothesis being, again, a notable special exception). And I guess that's my point: it's a curious fact that undecidability and "interestingness" don't seem to intersect very much, as far as we can tell. I don't think that's something that self-evidently had to be the case, but it's where we are after close on 100 years of living with what Gödel showed us.


In strongly-typed higher-order theories, the Continuum Hypothesis has _not_ been proved to be inferentially undecidable because Cohen's results do not apply to strongly-typed higher-order theories.


https://en.m.wikipedia.org/wiki/Goodstein%27s_theorem

Not provable in Peano Arithmetic.

https://en.m.wikipedia.org/wiki/Continuum_hypothesis

Not provable in ZFC.

There are a few more.
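Goodstein sequences are easy to compute, even though their termination escapes Peano Arithmetic. A small sketch of my own: write the current value in hereditary base-b notation, replace every b by b+1, then subtract one.

```python
def bump(n, b):
    """Rewrite n from hereditary base-b notation with every b replaced by b+1."""
    if n == 0:
        return 0
    total, exp = 0, 0
    while n:
        digit = n % b
        if digit:
            # Exponents are themselves in hereditary base-b, so recurse.
            total += digit * (b + 1) ** bump(exp, b)
        n //= b
        exp += 1
    return total

def goodstein(m, steps):
    """First `steps` terms of the Goodstein sequence starting at m."""
    seq, base = [m], 2
    for _ in range(steps):
        if m == 0:
            break
        m = bump(m, base) - 1
        base += 1
        seq.append(m)
    return seq
```

Starting at 3 the sequence dies quickly (3, 3, 3, 2, 1, 0), but starting at 4 it balloons (4, 26, 41, 60, ...) and only provably reaches 0 via ordinal arguments beyond PA.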


Goodstein's theorem is easily provable in the [Dedekind 1888] theory of the Natural Numbers.

Cohen's incompleteness results do not hold for a theory of sets that axiomatizes sets up to a unique isomorphism. See the following:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3457802



What browser are you using?

The link below works for me in Firefox:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3457802


GNU Wget 1.20.1 built on linux-gnu.

That URL produces an HTML page containing a link to "Delivery.cfm/SSRN_ID3488706_code2561841.pdf?abstractid=3457802&mirid=1" with text "Download This Paper", which I assume is supposed to be the direct link. However https://papers.ssrn.com/sol3/Delivery.cfm/SSRN_ID3488706_cod... 302 redirects back to https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3457802 .


The following link also works for me in Chrome on Windows:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3457802


That is the same link. It produces an HTML page containing an abstract of the paper (not the paper itself) and a link that purports to be to the full paper, but actually 302 redirects back to your (abstract) link.


But the thing is, you are just trading off computability, aren't you?

> The theory Ordinals is algorithmically inexhaustible


Excellent question!

Proof checking in the theory Ordinals is algorithmically checkable, although theorems are not computationally enumerable because instances of the strongly-typed induction axiom are uncountable.
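The distinction can be made concrete with a toy of my own: checking a single inference step is a simple mechanical test, quite independent of whether the set of all theorems can be enumerated.

```python
# Checking one modus ponens step is mechanical: with formulas as
# strings, verify the implication really links premise to conclusion.
def check_mp(premise, implication, conclusion):
    return implication == "(" + premise + " -> " + conclusion + ")"
```

`check_mp("p", "(p -> q)", "q")` is a valid step; swap in an unrelated implication and the check fails. A real proof checker is just many such local tests, one per line of the proof.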


Thank you! Can't believe I had a conversation with Prof. Carl Hewitt here! Love the actor calculus btw!


You are very welcome!

Cheers, Carl


I don't think Heisenberg's 'uncertainty principle' has had much impact on day-to-day physics either. But the idea of built-in limitations may have helped to make the humiliation of mere probabilities more palatable. As did the idea that the observer is part of the equation.

"What we observe is not nature itself, but nature exposed to our method of questioning ."


Physical indeterminacy is extremely important in modern-day many-core computation because of the essential reliance on arbiters with physical indeterminacy. See the following for an axiomatization:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3459566


Stephen Wolfram has an interesting article about Gödel and the implications of his work for the 21st century.

https://writings.stephenwolfram.com/2006/05/kurt-godels-100t...


> One of the most interesting things about Gödel's Incompleteness theorem must surely be how _little_ effect it has had on the day to day practice of mathematical research.

This seems like post-hoc reasoning. Imagine the state of mathematics if his theorems had not been true and ZFC (or some extension) had been proven complete and consistent. Mathematics could have been the first entire field of knowledge to automate itself out of a job!


It is quite easy to prove consistent the theory ZFC and even a much more powerful theory of the ordinals. See the following:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3457802


Just like the problem of induction didn't stop scientists from doing science?


The undecidability of the halting problem is directly related to the existence of undecidable statements, and several important problems are uncomputable. (The word problem for groups, for example.)


Actually, because of the nonexistence of Gödel’s proposition I'mUnprovable in strongly-typed theories, the computational undecidability of the halting problem can be used to prove inferential undecidability (incompleteness).


(total novice question here) Isn't it that because you can approach these mathematical questions from a variety of axiomatic systems?


I am not much of an expert either, but as I understand it Zermelo-Fraenkel set theory is the most common foundation of mathematics and it is stronger than Peano arithmetic, so I think Gödel's results hold in general.


Gödel's proof of inferential undecidability (incompleteness) does not work in strongly typed theories because his proposition I'mUnprovable does not exist.


It's possible that P=NP? is undecidable.


Yes! And 4 out of the 79 theorists from this survey[1] think that it is.

[1] https://www.cs.umd.edu/~gasarch/papers/poll.pdf#page=3


I have noticed that to get day-to-day mathematical work done of any kind, you basically have to at least pretend to be a mathematical Platonist. The mathematical objects are things "out there" to be discovered or manipulated. Everybody seems to do this, whatever their professed philosophical views on math.



