Hacker News

How do you know something is true if you don't have a proof? It all depends on your views on the philosophy of mathematics. Are there "true" statements that don't have proofs? Some say yes, there are platonic ideas that are true even if they aren't provable. Others say, "What does it even mean to say something is true if there is no proof? What you really have is a conjecture."



Didn’t Gödel show that in most useful logical systems there are true statements that cannot be proved?


Indeed, the first incompleteness theorem tells us that any logical framework which can express Peano arithmetic must necessarily contain true (resp. false) facts for which no (resp. counter) proof can be given.

Sometimes you can prove that no proof exists for a specific sentence (that's what Gödel's incompleteness proof does), and I think you could extend this technique to construct sentences for which no proof exists of whether they have a proof, etc...


> the first incompleteness theorem tells us that any logical framework which can express Peano arithmetic must necessarily contain true (resp. false) facts for which no (resp. counter) proof can be given.

Not quite. Any logical framework which can express Peano arithmetic must necessarily contain true facts for which no proof can be given within PA. The proof of Godel's theorem itself is a (constructive!) proof of the truth of such a statement. It's just that Godel's proof cannot be rendered in PA, but even that is contingent on the assumption that PA is consistent, which also cannot be proven within PA if PA is in fact consistent. In order to prove any of these things you need to transcend PA somehow.


> It's just that Godel's proof cannot be rendered in PA

This is incorrect, the proof can be carried out in very weak subsystems of PA.


> must necessarily contain true (resp. false) facts for which no (resp. counter) proof can be given

This formulation misses the important aspect that whether the statement is 'true' is not an absolute property (outside of logical truths). We can consider the truth of a statement in a specific structure or in a specific theory.

E.g. a statement can be undecidable in Peano arithmetic (a theory) while true in natural numbers (a structure, model of Peano arithmetic), but that just means there is a different structure, different model of Peano arithmetic in which this statement is false.


Discussing Gödel in terms of "truth" immediately brings about deep philosophical discussions about what that actually means in the context of mathematics. If the "true" nature of the (standard model of) arithmetic is unknowable in full, does it even exist?

I like to be pragmatically classical in my mathematical outlook (I don't worry about LEM), but when we come to foundations, I find that we need to be a little bit more careful.

Gödel's original proof (or rather, the slightly modified Gödel-Rosser proof) avoids notions of absolute truth - it says that for any consistent, effectively axiomatized theory of arithmetic, there must exist undecidable sentences. These are ultimately purely syntactical considerations.

(I'm not claiming that there is no relation to "truth" at all in this statement, just that things become less precise and more philosophical once that's how we're going to frame things)


These deep philosophical discussions are about absolute "truth" in general, but in logic, the truth of a formula in a specific mathematical structure is well-defined through structural induction over the formula, although with non-finite, non-computable steps. Yes, from a strict-finitist point of view even the concept of truth in the standard model of arithmetic could be problematic.


I'm personally not a constructivist or even a finitist (mostly for pragmatic reasons), but discussions about Gödel get sidetracked by musings about either of these philosophies so often that I feel it's important to point out that there is a reading of Gödel's theorems that can be understood and discussed regardless of one's commitment to a particular philosophy of mathematics (well, at least assuming some "reasonable" base).

If you are a finitist, you could interpret Gödel's theorem to mean that infinite systems such as the natural numbers don't actually exist, because they can never be understood in full (I'm not a finitist, so maybe I'm wrong in this conclusion). If you're a classical platonist, they would mean that the "natural numbers" do exist in some platonic realm, but will never fully be captured by human mathematics. If you're a formalist like me, maybe you'd say that it's very useful to pretend that the natural numbers exist, but that strictly speaking we don't really fully understand what they are or should be (but it turns out not to matter all that much).

Either way, a complete, effectively axiomatizable theory of the natural numbers doesn't exist.


The latter would be an axiom. A disproof would be a proof that there is no proof, so if you’d proven that no proof exists one way or the other then you’ve proven it can’t be disproven _or_ proven.

Which means you’ve hit a branch in mathematics. You can assume it to be either true or false, and you’ll get new results based on that; both branches are equally valid.


Constructively speaking, a disproof is a "proof that a statement leads to a contradiction". A "proof that there is no proof (assuming consistency)" can exist just fine, and that's exactly what the incompleteness theorem is, alongside a "proof that there is no disproof, i.e., that a contradiction cannot be derived from the statement (assuming consistency)".


> any logical framework which can express Peano arithmetic

(with a finite list of axioms)


I think the precise pre-condition is that the theory should be recursive, which means either a finite list of axioms _or_ a computable check to determine whether a given formula is an axiom.
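Not from the thread itself - a toy Python sketch of what "a computable check to determine whether a given formula is an axiom" means. The axiom schema and its notation here are made up for illustration (the family "c > 0", "c > S(0)", "c > S(S(0))", ... used further down in this thread): the theory has infinitely many axioms, yet a program decides axiom-hood.

```python
import re

# A theory can have infinitely many axioms and still be "recursive":
# what matters is that axiom-hood is decidable by a program.
# Toy schema (hypothetical notation): "c > 0", "c > S(0)", "c > S(S(0))", ...
AXIOM_PATTERN = re.compile(r"^c > (S\()*0\)*$")

def is_axiom(formula: str) -> bool:
    """Decide membership in the infinite axiom list."""
    if not AXIOM_PATTERN.match(formula):
        return False
    # each "S(" must be balanced by a closing ")"
    return formula.count("S(") == formula.count(")")

assert is_axiom("c > 0")
assert is_axiom("c > S(S(0))")
assert not is_axiom("c > S(0")   # unbalanced parentheses
assert not is_axiom("0 = 0")     # not an instance of the schema
```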



No he showed either there are true statements that cannot be proved, OR the system is inconsistent...


There is another possibility: it could be an arbitrary choice, as in the case of the parallel postulate, the axiom of choice, and non-standard models of the Peano axioms.


According to logicians, it always is an arbitrary choice. But we have a second-order notion of what "finite integer" should mean. And within that notion the idea might be false.

Here's how that plays out. Suppose that the RH cannot be proved or disproved from ZF. (It turns out that choice cannot matter for theorems in number theory, so ZF is enough.) That means that there is a model of ZF in which RH is true. Every model of ZF contains a finite calculation for any non-trivial zero of the Riemann zeta function. (The word "finite" is doing some heavy lifting here - it is a second order concept.) That calculation must work out the same in all models. Therefore every finite nontrivial zero has real part 1/2. Therefore RH is actually true of the standard model of the integers. Therefore every model of ZF where RH is false, is non-standard.

The truth of RH is therefore independent of ZF. But it's true in our second order notion of what the integers are.


> The word "finite" is doing some heavy lifting here - it is a second order concept.

Sorry, you're going to have to explain that to me. The word "finite" has only one meaning AFAIK and on that definition it is definitely not the case that "every model of ZF contains a finite calculation for any non-trivial zero of the Riemann function." I don't even know what it means for ZF to "contain a calculation." The concept of calculation (at least the one that I'm familiar with) comes from computability theory, not number theory.


How much explanation do you want?

Any consistent first order set of axioms that has an infinite model has multiple models. In the case of the Peano Axioms, there is one and only one model that we're interested in - the finite integers. All the others we call nonstandard.

But there is absolutely no statement in first order logic that can explain what we mean by finite, and why THIS model is what we want, and THAT model should be considered nonstandard.

However there is something called second order logic. As https://www.lesswrong.com/posts/3FoMuCLqZggTxoC3S/logical-pi... correctly says, in second order logic we really can say that something is finite, and mean it. But as https://www.lesswrong.com/posts/SWn4rqdycu83ikfBa/second-ord... explains, anyone with Constructivist tendencies is going to look at second order logic's definition of finite and call it wordy hogwash.

Regardless, we can look at things this way. Given any two models of the same set of axioms, we can try to set up a correspondence between the models. Each may contain some things not in the other. For example a model of PA + (PA is inconsistent) will contain a proof of inconsistency that is not in a standard model of PA.

The standard model of PA is simply this: it is a model of PA which is contained in every other model of PA. And so, whether or not two models agree on the Riemann Hypothesis, they will all agree on what the billionth zero is. And the trillionth. And so on for every finite number. Why? Because those are all in the standard model. And everything in the standard model is included in every other.


Thanks, that was very helpful. One question though:

> there is absolutely no statement in first order logic that can explain what we mean by finite

That seems implausible. All natural numbers are finite. I can easily express that in F.O.L.


> That seems implausible. All natural numbers are finite. I can easily express that in F.O.L.

Just for the sake of clear terminology, I'll take "finite natural number" to mean that it's either 0 or formed by applying the successor function to zero finitely many times.

Here's an easy proof for why you can't prove that every number is finite in PA:

Take the theory PA' = PA + the infinitely many sentences "c > 0", "c > S 0", "c > S (S 0)", etc. (with a new constant c)

We can probably agree that the standard natural numbers aren't a model of PA' because there is no natural number greater than every other natural number.

However, the natural numbers are clearly a model of any finite subset of that theory. And here's where we use the compactness theorem (which is easily proven from completeness): https://en.m.wikipedia.org/wiki/Compactness_theorem

By compactness, PA' has a model. But that model can't be the standard model of arithmetic. Yet it is also a model of PA. Thus we have a model of PA in which infinite numbers exist - so we can't prove that all natural numbers are finite from PA.

To rule out such models - and therefore prove that every natural number is finite - you need the second-order induction axiom.
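As an illustration of the "finite subset" step above (my sketch, not part of the proof): in the standard naturals, any finite batch of the new axioms "c > S^k(0)" is satisfied simply by interpreting the constant c as a large enough ordinary number.

```python
def numeral(k: int) -> int:
    """Interpret the term S(S(...S(0)...)) with k successors in the standard naturals."""
    return k  # in the standard model, k applications of S to 0 just denote k

def witness_for_fragment(ks) -> int:
    """Given the finite axiom set {c > S^k(0) : k in ks}, return an
    interpretation of c in the standard naturals satisfying all of them."""
    c = max(ks, default=0) + 1   # anything beyond the largest numeral works
    assert all(c > numeral(k) for k in ks)
    return c

# The naturals model this finite fragment of PA' - but no single natural
# number can satisfy *all* infinitely many axioms at once.
print(witness_for_fragment({0, 5, 100}))  # 101
```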


> To rule out such models - and therefore prove that every natural number is finite - you need the second-order induction axiom.

Sorry, I don't see why second-order induction is required. AFAICT all you need is a first-order induction axiom instantiated on the predicate FINITE.


I don't know why I wrote an entire proof outlining why first-order PA cannot rule out infinite numbers, if you just ignore it and insist on your misconception. The set of finite numbers is undefinable in the language of first-order PA, which is why first-order induction isn't enough.

Read a textbook, these are well-known facts. Here's a free one that is quite good, chapter 3 is relevant: https://milneopentextbooks.org/a-friendly-introduction-to-ma...


Well, you can define something meaning "finite" in first order logic. It just won't mean what we want it to mean.

Take your example. "All natural numbers are finite." How do we define the natural numbers in first order logic? Let's use PA. Now take a nonstandard model of PA in which you can prove that PA is inconsistent. You will have a "proof" in that model that PA is inconsistent. That "proof" has a length. In that model, that "proof" will be of "finite" length.

There is no contradiction here...but only because that "proof" is (from our point of view) infinitely long. So the proof can go wrong even though it goes wrong at no step.

From my point of view, first order logic captures the idea of "finite" as well (or poorly) as classical mathematics captures the idea of "exists". And second order logic requires accepting the truth of unprovable things. But if you're willing to accept the truth of unprovable things, and the existence of non-constructible ones, we get the standard mathematics that mathematicians use without questioning too hard.


> Now take a nonstandard model of PA in which you can prove that PA is inconsistent.

Sorry, I lost you there. You don't prove things in models, you prove things in systems. Models are semantic things, proofs are syntactic things, so a "proof in a model" sounds like a category error to me.


Proofs are natural numbers via Gödel's encoding, or any other encoding scheme that you'd like (e.g. ASCII). The sentence "PA is inconsistent" can be stated in the language of PA and it means "there is a number that encodes a valid proof from the axioms of PA to a contradiction (e.g. 0 ≠ 0)". This sentence is either true in all models of PA (in which case PA would be inconsistent, which we don't think it is), or it is true in some models and false in others - it can't be false in all models because otherwise, by completeness, there'd be a proof of PA's consistency in PA which is what the second incompleteness theorem rules out.

The reason why this is not a category error is because of the self-referentiality of strong enough systems of logic.
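To make "proofs are natural numbers" concrete - a sketch using a plain byte encoding rather than Gödel's original prime-power scheme (any injective, computable coding does the job):

```python
def encode(text: str) -> int:
    """Map a proof, written as a string, to a unique natural number."""
    return int.from_bytes(text.encode("utf-8"), "big")

def decode(n: int) -> str:
    """Recover the string from its code number."""
    return n.to_bytes((n.bit_length() + 7) // 8, "big").decode("utf-8")

proof = "0 = 0  (axiom)"
n = encode(proof)
assert decode(n) == proof
# Arithmetic statements about n are now statements about the proof itself -
# e.g. "there exists a number coding a derivation of 0 != 0" is a sentence
# in the language of arithmetic.
```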


I guess I calibrated your understanding of formal logic to the wrong place. Let me try again.

PA is a set of axioms satisfied by the natural numbers. As Gödel showed, all of first order logic can be encoded into the natural numbers. And questions like, "This is a proof of X" can be viewed as arithmetic statements. (We can actually go further, and encode all of computation into the natural numbers, and reason about computation using nothing more than PA.)

Gödel famously went further. Suppose that X is an axiom system. Then Gödel's famous incompleteness theorem says:

    ((X => PA) + (X => Con(X))) => not Con(X)
Let's take that apart. By (X => PA) I mean that from X we can find the existence of something that satisfies the Peano axioms. By (X => Con(X)) I mean that there is a proof from X that X is consistent. And from those facts, Gödel can construct a proof from X that X is not consistent.

It is very important here to note. If we actually have a proof from X of Con(X), then Gödel actually gives us a proof from X of not Con(X).

But, thanks to the Gödel numbering, (X => Con(X)) can also be interpreted as a statement about the natural numbers. As such, (X => Con(X)) is a potential axiom in first order logic. Of course all proofs that "really are proofs" correspond to actual finite numbers. So if there "isn't really" a proof of Con(X) from X, then any model of X + (X => Con(X)) will have the Gödel number of that proof not be a finite number. But that's OK because first order logic can't rule out nonstandard models. And nonstandard models of PA include infinite integers. So any model of X + (X => Con(X)) will contain a "proof" of Con(X). We can decode as much of that "proof" as we like. But we can't decode the whole thing, and it "isn't really" a proof after all.

Of course if we have that then Gödel's construction gives us a Gödel number for the proof of not Con(X).

This is an absolutely general line of reasoning. Take any consistent set of first order axioms X which imply PA. If it is consistent, we don't create a contradiction by adding the axiom that there exists a Gödel number representing (X => Con(X)). Because this new set of axioms is consistent, there must exist (for some definition of existence) a model for it. That model will include a model of PA. That model of PA will include a Gödel number for a proof that isn't really a proof, meaning that it can't really be a finite integer. Therefore it must be an infinite integer. And therefore the set of first order axioms X cannot distinguish between what we want finite to mean, and something that is clearly not finite.

Second order logic gets around this problem. But it does so by accepting that things can be true without there being any proof that it is true. This is one of the potential sticking points for someone with Constructivist tendencies.


Thanks for taking the time to explain this. I thought I understood this stuff, but apparently I'm still missing something. I obviously need to ponder it some more, because I still don't understand why second-order induction succeeds where first-order induction fails. We are only dealing with one predicate here, FINITE. So why should it make a difference if induction on FINITE is a first-order instantiation of an axiom schema, or an instantiation of a second-order axiom? Proving that all natural numbers are finite still boils down to induction on FINITE, no?


What is "FINITE"?


Well, that is a good question. It’s a concept I’ve always just taken for granted. How do you define it? I don’t know.


So the problem with logic, and in particular model theory, is that you always have to assume some sort of logic for the world your models live in (a meta-logic) - otherwise you can't talk about anything.

If you take e.g. ZFC for granted in the world of your models, then you can talk about "the" natural numbers, where every number is finite, as a model of PA. And you can also define non-standard models of PA which include infinite numbers. If you use some other foundation, you can probably also define these models in some way. Now whether these constructions mean anything to you, or capture your intuition about "finiteness" is another thing entirely...

The problem is that from within the theory you cannot distinguish between these different models. There's no way of defining a predicate "finite" in first-order PA that agrees with the notion of "finite" that you've established in your meta-logic.

And in general, it's worse than that. As long as your first-order theory allows for any infinite model, there exist models of that theory of any infinite cardinality (this is the Löwenheim-Skolem theorem). So, in general, first-order logic cannot pin down structures uniquely. (Again, this is assuming ZFC in the world of our models.)

This btw also leads to the "Skolem paradox" which states that there are countable models of (first-order) ZFC, even though ZFC says that there exist uncountable sets. But this is only a "paradox" until you realise that "countable" and "uncountable" don't mean the same thing within the model as they do outside of the model.


> There's no way of defining a predicate "finite" in first-order PA that agrees with the notion of "finite" that you've established in your meta-logic.

Why doesn't this work?

Define the predecessor function P such that P(S(n)) = n for all n (and maybe S(P(n)) = n for all n != 0).

FINITE(n) := (n=0) OR FINITE(P(n))

i.e. a number is finite if it is zero or if its predecessor is finite.

Why doesn't that work?

BTW, as an aside, since you seem to know your stuff I thought I'd ask: I noticed as part of a separate discussion over on /r/math that AFAICT there is nothing in PA that distinguishes successor and predecessor until you get to the definition of multiplication. Is that right? Is it noteworthy?


> I noticed as part of a separate discussion over on /r/math that AFAICT there is nothing in PA that distinguishes successor and predecessor until you get to the definition of multiplication. Is that right? Is it noteworthy?

I'm not entirely sure what you mean by this. "PA without multiplication" is basically Presburger Arithmetic (https://en.m.wikipedia.org/wiki/Presburger_arithmetic). One of its axioms (as in PA) is that 0 has no predecessor - but clearly every number, including 0, has a successor. So that's a way of distinguishing predecessor and successor functions.

I believe that when you look at nonstandard models (which basically look like N + any number of copies of Z), the order isn't fixed in those copies of Z (unlike N), so you could swap predecessor and successor within those copies, but I haven't really looked into that all that much.


Yeah, that's not quite what I meant. What I was trying to get at was more analogous to the ambiguity of i and -i. Suppose I wanted to create a system of math whose intended model was the negative naturals rather than the positive naturals. Let's call this new system NPA. NPA has the exact same axioms as PA except with predecessor instead of successor as the core axiomatic construct. So zero has a predecessor, but no successor. Everything works as before, except with the letter P instead of the letter S. Addition still works, you're just adding negative numbers and getting negative numbers as the result, which is what you would expect. It doesn't break down until you get to multiplication, where, if you follow the same definition as standard PA you get the "wrong" result because you want the product of two negative numbers to be a positive number, and there are no positive numbers in NPA. So AFAICT multiplication is required to distinguish PA from NPA. Which raises an interesting question: is there a non-trivial operation you can define to distinguish i from -i the way that multiplication allows you to distinguish 1 from -1?


In every model of a theory, you can just consistently relabel elements in order to get another model. But that doesn't really tell you anything because that model is just isomorphic to the old one (that term has a precise meaning, but I think the "relabeling" intuition captures it).

So your NPA is just a consistent relabeling of elements n to elements -n - it's fully isomorphic to PA. There's nothing particularly interesting about it. Notice that "the product of two negative numbers is positive" doesn't help because in your alternative model, "positive" (or rather, "nonnegative") just means something different (or more precisely, it doesn't even make sense to talk about "nonnegative numbers" because that's all of them).

It only becomes interesting when you have different non-isomorphic models of a theory - that then means that there are (distinguishable) properties of the model that the theory can't talk about.

In the case of the complex numbers, if you just exchange i and -i, I believe that you get a field which is fully isomorphic to C.


> In the case of the complex numbers, if you just exchange i and -i, I believe that you get a field which is fully isomorphic to C.

To add to this and expand a little, there are only two isomorphisms between C and itself which keep all the real numbers fixed: the identity and the conjugate map (which sends a+ib to a-ib, and in particular i to -i).

If you don't fix the real numbers there are other such isomorphisms but I believe you can only construct them via the axiom of choice (so you can't really "write them down").

In the case of the real numbers there's really only one such isomorphism, the identity.
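A quick numerical spot check of the conjugation claim (an illustration, not a proof): conjugation preserves addition and multiplication and fixes the reals, which is what makes it a field isomorphism from C to itself.

```python
def conj(z: complex) -> complex:
    """The conjugate map: sends a+ib to a-ib, in particular i to -i."""
    return complex(z.real, -z.imag)

samples = [1 + 2j, -3 + 0.5j, 0j, 4 + 0j, 1j]
for z in samples:
    for w in samples:
        assert conj(z + w) == conj(z) + conj(w)   # preserves addition
        assert conj(z * w) == conj(z) * conj(w)   # preserves multiplication
assert conj(4 + 0j) == 4 + 0j                     # fixes the reals
assert conj(1j) == -1j                            # swaps i and -i
```

(The sample values are chosen to be exactly representable so the float comparisons are exact.)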


Right. So in those terms, NPA is isomorphic to PA -- until you introduce multiplication, at which point the isomorphism goes away because S(0)*S(0) is S(0) but P(0)*P(0) is not P(0). That seems like an interesting fact to me, and feels like it might be a clue pointing to some deeper truth.


You can't define a model "NPA" in which all numbers are nonpositive but then define multiplication such that it can yield positive numbers - that just wouldn't be a function.

It's pretty easy to prove that S(0)*S(0) = S(0) with the axioms of PA. That has to be true no matter what interpretation you give "S" in your model, so in your proposed model "NPA", in which the domain is the nonpositive integers and S is the predecessor function, of course it has to be true that the "product" of -1 and -1 is -1. It just means that in such a model, multiplication isn't what you think it is.

One should really be careful not to mix up logical systems and their models. The way you're using notation, it's really easy to get confused.
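The point can be run mechanically (my sketch, with hypothetical function names): interpret PA's recursive definitions of + and * relative to a model's choice of zero, successor, and predecessor. The same recursions that give 1*1 = 1 in the standard reading force the "product" of -1 and -1 to be -1 in the nonpositive reading.

```python
def make_ops(zero, succ, pred):
    """PA's recursive definitions of + and *, interpreted in a given model."""
    def add(m, n):
        # m + 0 = m ;  m + S(n) = S(m + n)
        return m if n == zero else succ(add(m, pred(n)))
    def mul(m, n):
        # m * 0 = 0 ;  m * S(n) = m*n + m
        return zero if n == zero else add(mul(m, pred(n)), m)
    return add, mul

# Standard reading: S is +1, domain the ordinary naturals.
add_s, mul_s = make_ops(0, lambda n: n + 1, lambda n: n - 1)
assert mul_s(1, 1) == 1        # S(0) * S(0) = S(0)

# "NPA" reading: S is -1, domain the nonpositive integers.
add_n, mul_n = make_ops(0, lambda n: n - 1, lambda n: n + 1)
assert add_n(-1, -1) == -2     # addition still behaves as expected
assert mul_n(-1, -1) == -1     # the "product" of -1 and -1 is -1, not +1
```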


If you start with NPA it is identical to PA. You can apply all the same logic and prove all the same theorems. The problem with multiplication is not that the proofs fall apart, it is that multiplication does not mean what you want it to mean.

When you go to the complex numbers, you have a similar situation. The only thing that distinguishes i from -i is what you want the principal branch of sqrt to be. Otherwise nothing would change if you flip them.


> The problem with multiplication is not that the proofs fall apart, it is that multiplication does not mean what you want it to mean.

Well, yeah, that's kind of the point: the positiveness of the naturals is not just a convention, it is actually a logical consequence of something in the axioms. Specifically, it is a logical consequence of the definition of multiplication. I find that interesting.


That ties to one of the standard axioms of the real numbers in classical mathematics.

There exists a set P which is closed under addition and multiplication, such that for every real number x, exactly one of these three things is true: x is in P, -x is in P, x = 0.

P is, of course, the positive numbers. This axiom makes the reals into an ordered field. Add in completeness, and the reals are unique (up to choice of set theory).
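A quick sanity check of that axiom (illustration only, with sample rationals standing in for reals): exactly one of x ∈ P, -x ∈ P, x = 0 holds, and P is closed under + and *.

```python
from fractions import Fraction

def in_P(x: Fraction) -> bool:
    """Membership in the positive cone P of the ordered field."""
    return x > 0

samples = [Fraction(-3), Fraction(-1, 2), Fraction(0), Fraction(2, 7), Fraction(5)]
for x in samples:
    # trichotomy: exactly one of the three alternatives holds
    assert sum([in_P(x), in_P(-x), x == 0]) == 1

positives = [x for x in samples if in_P(x)]
for x in positives:
    for y in positives:
        assert in_P(x + y) and in_P(x * y)  # P is closed under + and *
```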


You can have an infinite chain of predecessors of non-standard numbers, each satisfying FINITE() by your definition. This can easily be demonstrated in Robinson arithmetic, which has simple, understandable non-standard models.
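A toy sketch of such a structure (my own illustration, not standard notation): add one extra element that is its own predecessor. The recursive FINITE check halts on every standard number, but starting from the extra element it never reaches 0.

```python
OMEGA = "omega"  # a hypothetical nonstandard element with P(omega) = omega

def pred(n):
    """Predecessor in a toy structure: the naturals plus one self-looping element."""
    if n == OMEGA:
        return OMEGA          # the chain of predecessors above omega never reaches 0
    return n - 1 if n > 0 else 0

def finite(n, fuel=10_000):
    """Unwind FINITE(n) := n == 0 or FINITE(P(n)), with a step budget."""
    while fuel > 0:
        if n == 0:
            return True
        n = pred(n)
        fuel -= 1
    return False  # the unwinding never terminated within the budget

assert finite(4096)        # every standard natural is reached from 0
assert not finite(OMEGA)   # the recursion diverges on the nonstandard element
```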


Robinson arithmetic is an excellent example here.

Using ZFC, Robinson can take a standard model of the reals and construct a nonstandard model, and a function from the standard model to the nonstandard model. Every possible statement of first order logic that can be made about the standard model is true in the nonstandard model. Likewise every possible statement of first order logic that can be made in the nonstandard model only involving things that came from the standard model are also true of the standard model. This is his "transfer principle". However the nonstandard model comes with things like infinitesimals, and infinite numbers. So in the nonstandard world we can really do things like define a derivative as dy/dx, or an integral as an infinite sum. Then recover the classical definitions through the transfer principle.

See https://terrytao.wordpress.com/2007/06/25/ultrafilters-nonst... for an explanation of how this actually works in some detail. But if you get your head around it, you'll see exactly why first order logic cannot possibly capture the idea of being a standard model. Because, in first order logic, the nonstandard model really is indistinguishable. And so finite cannot be expressed in first order logic.


Note that i meant https://en.wikipedia.org/wiki/Robinson_arithmetic , while you wrote about results from different Robinson: https://en.wikipedia.org/wiki/Nonstandard_analysis .


Oops, thank you for the correction! I learned something today.

Though I'm sure you can see how I thought that the other Robinson fitted into the conversation.


> You can have an infinite chain of predecessors of non-standard numbers, each satisfying FINITE() by your definition.

Ah. That makes sense. Thanks.

So surely someone has come up with a system that captures the intuitive spirit of mathematical induction, i.e. that the chain has to actually be rooted at 0 rather than have an infinite chain of predecessors or be a loop? How hard could that be?


> How hard could that be?

It seems like it should be straightforward. But it's mathematically impossible.

Similarly, it seemed to David Hilbert like it should be possible to have a system of formal logic that all could accept, which would prove itself to be consistent. But that also proved to be impossible. And the logic which led to that being proved impossible is deeply connected to the first line of reasoning that I gave you for why this is also impossible.

And in the end what they all come down to is some kind of self-reference. In this case, to have a finite statement define what it means to be finite, we must already know what finite means. Second order logic sort of assumes that we can simply know what finite means. First order logic doesn't allow you to wave your hands like that. You can't cheat in any way. And so you can't find a way to do it.


> Similarly, it seemed to David Hilbert like it should be possible to have a system of formal logic that all could accept, which would prove itself to be consistent.

Minor nitpick, but I don't think this fully captures the essence of Hilbert's program. If we had a system that could prove itself consistent, that would mean nothing, because the system could be inconsistent, therefore unsound and so its consistency "proof" wouldn't mean anything.

Rather, the goal was to start from an arguably "uncontroversial" base of mathematics (something like primitive recursive arithmetic, which intuitionists might also accept) and use that to bootstrap "infinitary mathematics" of the kind the formalists such as Hilbert were arguing for. The hope was that one could prove the consistency of such more abstract mathematics (PA or ZFC) from the more concrete foundations.

Unfortunately, Gödel's theorems completely demolish that idea. It's not just that a (sufficiently strong) theory can't prove itself consistent, it also can't be proven consistent by a strictly weaker theory such as PRA (because if such a proof existed in the weaker system, it would of course also exist in original system).

Interestingly, there are systems which are neither stronger nor weaker than PA which can prove PA consistent: https://en.m.wikipedia.org/wiki/Gentzen%27s_consistency_proo...


That goal is what I meant by the phrase "that all could accept". You're right that I could have expanded on it though.

That said, Gentzen's result is new to me. Though I have seen other results that are somewhat similar.


> Second order logic sort of assumes that we can simply know what finite means. First order logic doesn't allow you to wave your hands like that.

OK, this I do not understand. How does 2ndOL "sort of assume that we can simply know what finite means"? That seems like an extraordinarily imprecise characterization of 2ndOL. 2ndOL is a formal system just like 1stOL. Neither one lets you "wave your hands" about anything AFAICT.


Read https://plato.stanford.edu/entries/logic-higher-order/ if you wish.

I view the "for all properties" bit as handwaving. See particularly section 6 of that article, which at one point gives:

The semantics of second-order logic depending on these hard questions of set theory puts second-order logic into the same “basket” with set theory. Criticism of set theory becomes criticism of second-order logic and vice versa.

My Constructivist tendencies are due to my discomfort with accepting the validity of set-theoretic constructions with decisions based on unverifiable truths. Accepting second order logic requires accepting quantifying over similar collections of unverifiable claims. I'm going to be no more sympathetic.


Thanks.


In any model of PA, all natural numbers will be finite by this definition. Even if we, from outside of the model, know that many of them aren't.

You've been presented with two different ways to prove this, and a standard textbook. Perhaps read the textbook?

On your aside, in PA there is one element that is not the successor of any other. And it likewise has no predecessor. (It is a matter of convention whether you call that special number 0 or 1.) That special number is always the base case of all induction proofs.


You don't define it.

You list things that you want to be true of it. The Peano Axioms are examples. But no matter how long your list gets, it can always get longer. For example we can ask whether a particular polynomial has integer roots. See https://en.wikipedia.org/wiki/List_of_statements_independent.... (It doesn't, but you can't prove that in ZFC.)

Even so, there is always an, "...and so on." You never finish, because finite does not actually have a finite definition.

Second order logic gets around it because you can say something that simultaneously implies all the things that you want to say about "finite" in one go. Such as, "All possible functions from itself to itself that are onto are also one-to-one." A first order model need not capture the full intent of this statement, because it does not contain all possible functions. As we can see when we look at a model from within other models.

But when I look at what the second order idea of "all possible functions" actually means, I think we're just begging the question of what it means to be finite.

The way that I think about it is that some things are finite. Some things are infinite. And some things on the boundary are not yet determined. But then again, I do have Constructivist sympathies. And this is not a point of view that most mathematicians are very sympathetic towards.


"finite" means what you think it means: a program that halts.

Look at Peano Axioms:

https://en.m.wikipedia.org/wiki/Peano_axioms

The Induction axiom allows us to write finite proofs of otherwise infinite facts, like every positive integer has a predecessor.

Without it, you'd be stuck with finitism (since only a finite number of numbers can be constructed in finite time, there is only a finite number of numbers.)


I think you're confusing PA with computability theory and Turing machines. "Program" is not a thing in PA.


PA can model Turing Machines and reason about them. I feel like you missed the important link between logic and computability theory.


Provability is relative to an axiom system.

A more powerful system can prove statements that are true but not provable in a weaker system.

For example "This sentence is not provable." (Gödel's statement, written informally) is true, but not provable, in first-order arithmetic.

https://en.m.wikipedia.org/wiki/Diagonal_lemma

In constructivist mathematics, proving that a statement is not false is not a proof that it is true. Non-constructivist mathematics, on the other hand, has the axiom, for all P: P or not-P, also stated as not-not-P implies P.
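In Lean, the asymmetry is explicit: double-negation elimination has to reach for `Classical.byContradiction`, which is not part of the purely constructive core, while the converse direction is constructively trivial. A minimal sketch (Lean 4):

```lean
-- Double-negation elimination: constructively unprovable,
-- but immediate once classical reasoning is allowed.
theorem dne (P : Prop) (h : ¬¬P) : P :=
  Classical.byContradiction h

-- The converse, P → ¬¬P, needs no classical axiom:
theorem dni (P : Prop) (h : P) : ¬¬P :=
  fun hn => hn h
```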



