Hacker News

> In a formal set theory, like ZFC, we postulate the existence of the set of natural numbers.

But if your theory is a first-order theory, you cannot pin down "the set of natural numbers" to the one you and everyone else are actually thinking of; no matter what set of axioms you use to define "the set of natural numbers" (the Peano axioms, etc.), there will be semantic models other than the set of natural numbers you are thinking of that satisfy those axioms. The only way to precisely pin down "the set of natural numbers" to one semantic model--the one you actually want--is to use second-order logic.
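For concreteness, here is the standard contrast (a textbook presentation, not specific to this thread) between first-order induction, which is a schema, and the single second-order induction axiom that gives categoricity under full semantics:

```latex
% First-order PA: induction is an axiom schema, one instance per formula \phi --
% so it only constrains the countably many definable properties, and
% compactness yields non-standard models satisfying every instance.
\forall \bar{y}\,\Bigl[\phi(0,\bar{y}) \land \forall x\,\bigl(\phi(x,\bar{y}) \to \phi(x{+}1,\bar{y})\bigr) \to \forall x\,\phi(x,\bar{y})\Bigr]

% Second-order PA: induction is a single axiom quantifying over ALL subsets P
% of the domain; under full semantics this forces the model to be (isomorphic
% to) the standard naturals (Dedekind's categoricity theorem).
\forall P\,\Bigl[P(0) \land \forall x\,\bigl(P(x) \to P(x{+}1)\bigr) \to \forall x\,P(x)\Bigr]
```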

So it seems to me that there is a genuine tradeoff here. If you want enough expressive power to be able to precisely pin down what semantic model your theory is talking about, you need to accept that your theory will be making statements that are non-constructive--you can have an axiom that says every set has a power set, but you can't construct that power set in the general case. If you want your theory to be constructive, so that for every set that your theory says exists, you can see how to construct it explicitly, you need to accept that your theory will have multiple semantic models and only one of them is the one you want, and your theory can't pin it down for you.




It's not about being constructive. ZFC easily allows defining sets non-constructively. It's about accepting a theory whose meaning refers to objects that are not generally viewed as sufficiently basic. There seems to be a limitation on precisely describing uncountable infinities using finite strings of characters from a finite alphabet.


> It's not about being constructive

I agree that "constructive" was not a good term. This...

> It's about accepting a theory whose meaning refers to objects that are not generally viewed as sufficiently basic.

...is fine as a better phrasing for what I was thinking of. My point is that there is a tradeoff between the property "the theory is built entirely from objects that are sufficiently basic" and the ability to precisely pin down a single semantic model; having either one means giving up the other.


The usual objection, which I happen to agree with, is that this is a false tradeoff. The two are inextricably linked.

In the case of second-order logic, although it seems like we're getting a precise, single semantic model, in fact all we've done is push the ambiguity to the metatheory which we use to interpret the semantics of second-order logic.

For example, there's the infamous "CH statement" of second-order logic, which has a model if and only if the Continuum Hypothesis holds in the metatheory. Sure, that's precision if you believe the Continuum Hypothesis has an absolute truth value (that is, you can confidently say "yes, CH is true/false in reality"), but at the end of the day that's really just the same thing to me as choosing a particular model for a first-order theory and saying "yes, that model is the one true model" (which I don't really agree with philosophically in the first place, since I'm usually not a strong Platonist).

Even the choice to use full semantics instead of Henkin semantics (an example of what is essentially multi-sorted FOL as @pron refers to) is an ambiguous choice one must make in deciding second-order semantics.


> In the case of second-order logic, although it seems like we're getting a precise, single semantic model, in fact all we've done is push the ambiguity to the metatheory which we use to interpret the semantics of second-order logic.

So how would this work in, say, the case of the natural numbers? What ambiguity gets pushed to the metatheory if I claim that the second order theory of the natural numbers pins down a single semantic model (the "standard" natural numbers and that's all)? If it matters, assume we are using full semantics, not Henkin semantics (see further comments on that below).

> Even the choice to use full semantics instead of Henkin semantics (an example of what is essentially multi-sorted FOL as @pron refers to) is an ambiguous choice one must make in deciding second-order semantics.

I'm not sure I would call this choice "ambiguous", since the difference between the two choices is perfectly clear, and only one of them (full semantics) preserves the key property of second-order logic: pinning down a single semantic model.


Caveat: All of this is essentially philosophy at this point and so there's no rigorous argument I can present that second-order logic is wrong. But I think you've asked reasonable questions so let's dive in.

So the CH statement (let's call it CH_2 where 2 is for second-order) I was referring to actually uses the empty vocabulary (i.e. only the usual symbols from second-order logic). In particular that means it also applies to second-order Peano Arithmetic.
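To give the rough shape of such a sentence (schematic only; the auxiliary notions are left abstract here, though each is known to be second-order expressible over the empty vocabulary):

```latex
% Cntm     -- "the domain has the cardinality of the continuum"
% Unc(Y)   -- "Y is uncountable"
% Y ~ dom  -- "Y is equinumerous with the whole domain"
\mathrm{CH}_2 \;:\equiv\; \mathrm{Cntm} \;\land\; \forall Y\,\bigl(\mathrm{Unc}(Y) \to Y \sim \mathrm{dom}\bigr)
% Under full semantics, CH_2 has a model iff CH holds in the metatheory:
% if CH holds, a domain of size \aleph_1 = 2^{\aleph_0} works; if CH fails,
% any continuum-sized domain has an uncountable subset of size \aleph_1
% that is not equinumerous with the whole domain.
```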

So does your model of the natural numbers satisfy CH_2 or not (or put differently, is your proposed model of the natural numbers actually a model of the natural numbers)? Depends on your metatheory.

More problematically, what this demonstrates (because CH is independent of ZFC) is that second-order logic loses the absoluteness of semantic truth. That is, the statement "M is a model of my theory T under a given interpretation I" can flip-flop between true and false and back again as you continue to expand the background model of your metatheory (think of, e.g., Cohen forcing).

So while you may have specified exactly one model, it's getting pretty hard to see just exactly which one it is.

First-order logic does not have this phenomenon, which is why we can often get away with not going into too much detail about our metatheory in FOL (and why recursive metatheories built of FOL "all the way down" aren't too problematic). If M is a model of T under I then it will continue to be a model of T under expansion of my background model. So as long as I sketch out the basics of my metatheory (and by implication my background model), I know I'm good.

Philosophically this means that in the realm of SOL the only way you can be sure that you've really got "the" standard model and haven't mistakenly grabbed another one is if you can confidently state the truth value of every one of a never-ending stream of theorems which cannot be proved using your metatheory.

Indeed, statements like "Theorem A is independent of theory T" are very murky and difficult to understand in a second-order setting. For example, CH_2 is definitely not provable using second-order PA, but does that mean we can accept systems with either CH_2 or Not(CH_2)? That seems to be false if we take "a single canonical model" at face value, which would imply that only one of the two can be "true." But then how do you decide whether to accept CH_2 or Not(CH_2)? No syntactic argument will suffice, since we've lost completeness.

That's not to say I think that second-order logic has no value. I think second-order logic, interpreted using first-order semantics, is a valuable language. However, I agree very much with Vaananen's contention that in practice, SOL reduces to multi-sorted FOL.

> if second-order logic is used in formalizing or axiomatizing mathematics, the choice of semantics is irrelevant: it cannot meaningfully be asked whether one should use Henkin semantics or full semantics. This question arises only if we formalize second-order logic after we have formalized basic mathematical concepts needed for semantics. A choice between the Henkin second-order logic and the full second-order logic as a primary formalization of mathematics cannot be made; they both come out the same.

http://www.math.helsinki.fi/logic/people/jouko.vaananen/VaaS...
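A sketch of the reduction Vaananen has in mind, in the usual Henkin-style presentation (schematic, not taken from the paper):

```latex
% Render SOL as two-sorted FOL: a sort of individuals x, y, ..., a sort of
% "sets" X, Y, ..., and a membership relation E(x, X). Second-order
% quantifiers become first-order quantifiers over the set sort, governed by
% a comprehension schema (one instance per formula \phi with X not free):
\exists X\,\forall x\,\bigl(E(x,X) \leftrightarrow \phi(x)\bigr)
% A Henkin model may interpret the set sort by ANY collection closed under
% comprehension, not necessarily the full power set of the individuals --
% which is exactly what restores completeness and compactness.
```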


> in the realm of SOL the only way you can be sure that you've really got "the" standard model and haven't mistakenly grabbed another one is if you can confidently state the truth value of every one of a never-ending stream of theorems which cannot be proved using your metatheory

Let me try to restate this and see if it helps.

I say I've specified a second order formal theory that has the standard natural numbers as its unique semantic model. But "the standard natural numbers", from the standpoint of set theory, is a set, and I can apply set theory operations to it to obtain other sets. For example, I can apply the power set operation. Or I can apply the construction that is used to obtain omega-1 (the first uncountable ordinal). So it's not really true that my semantic model "only" includes the standard natural numbers. Unless I simply disallow any set theory operations altogether (which throws away most of the usefulness of the theory), I have to accept that my semantic model includes sets I might not have thought of when I set up my formal theory.

So, now that I have recognized that, for example, I can apply two different operations to my set N (the set of standard natural numbers), the power set operation P and the omega-1 construction O, I then have an obvious question: are P(N) and O(N) the same set, or different sets? And if CH is indeed logically independent of my formal theory, then the answer is that there are two semantic models of my formal theory, not one: one model in which P(N) and O(N) are the same set, and another model in which they aren't. So I haven't fully pinned down a unique semantic model after all.

Or, if CH is not independent of my formal theory, then I don't know what, exactly, my semantic model is--what sets are in it--until I figure out whether CH holds in my model or not, and that might take a while. And there are actually an infinite number of possible cases where this kind of issue might arise.

Is this a fair summary of the issue you have been trying to describe?


A late reply, but hopefully late is better than never.

Sort of. I think talking about it as if it's a phenomenon unique to sets (although indeed things are ultimately traceable back to the close relationship between second order logic and set theory) obfuscates things somewhat.

Let's take a step back. You give me a theory (second-order Peano Arithmetic; call it PA2). Call N the unique model (from the point of view of the metatheory) that satisfies PA2, call T our metatheory, and call M the implicit background model of T.

We decide to test whether we both have the same N by asking whether N satisfies a sentence s in the language of PA2.

It turns out that whether N satisfies s depends on a theorem t that is independent of T. Do we agree on the same N? Perhaps we have different M that respectively do and don't satisfy t.

Okay that's fine, maybe our different M can be unified under a single M* that subsumes both to give us a definitive answer on N. Unfortunately it turns out that as we look at increasingly larger M* the answer of whether N satisfies s flips back and forth. There is no universal M* we can appeal to.

That's basically what's going on here when I say that all second-order logic has done is push ambiguity to the metatheory (and metamodel).

Now you can maybe argue that second-order logic is at least less ambiguous. After all, the overall structure of N is assured, right? We don't have non-standard natural numbers, right? Well, this is a bit tricky to say. Perhaps our metamodel M is actually ill-founded (which is possible from the perspective of a metametatheory even if it satisfies something like ZFC), causing N to have non-standard natural numbers as well. Now there is the possible objection that if you mess with the metamodel, all bets are off. But the problem is that second-order logic with full semantics, because it lacks completeness, constantly relies instead on the metatheory and by extension the metamodel to actually do proofs. That is, you rarely have interesting, new proofs in second-order logic, but rather meta-proofs using the metatheory. So it is totally fair game to start examining the metamodel if everything you're doing is meta-proofs!

Moreover, from my perspective as someone who disagrees with strong, mono-universe Platonism (a term I made up that refers to people who believe that every axiom has an objective truth value and we cannot "choose" axioms in any real sense but rather are only on a journey to discover the "true" axioms of mathematics), giving up completeness is a really hard pill to swallow.

This makes it really hard to understand notions like "independence" and "consistency." If we lose completeness, consistency is no longer a sufficient criterion to admit a theory, which makes it hard to define something like independence. If we say both t and Not(t) are consistent with T, does that mean both t and Not(t) are perfectly admissible axioms to add to T? The answer in the absence of completeness is no. We must investigate further. And in the case of categoricity, such as with PA2, the answer is a definitive no: only one can be admissible, not both.

This can lead to really strong philosophical positions that I disagree with. For example, ZFC2 (second-order ZFC) is categorical. So either CH or Not(CH) is "true." But both are potentially consistent with ZFC2 (I'm handwaving this because second-order logic losing completeness makes even talking about consistency kind of weird unless you have completeness for your metatheory and metamodels).

But CH seems to me so clearly artificial. It's just a weird artifact of random cardinals we've made up. You're telling me that I'm either allowed to use it or not allowed to use it, but we don't know which? That seems way too strong to me.

This is especially problematic if we think back to M* where it's not clear at all when we should stop going up the hierarchy of M*s, since our answer keeps going back and forth.

That's why although I'm okay with second-order logic as a formal tool, I prefer to think in a metatheory that embraces completeness (you'll see I did that when I started talking about t being independent of T), which usually ends up being some version of FOL. Otherwise I think it's very difficult to work with mutually incompatible mathematical axiom sets for different problems since you're basically assuming completeness at a philosophical level when you do that.

At the end of the day, from my point of view, while models are important tools for examining mathematical theories as objects in their own right, the realm of formal mathematics is proofs. I hold the ability to completely formalize an argument in a computer, even if to actually do so would be extremely tedious and even impractical, as my standard for mathematical proof. A proof that is actually impossible to completely formalize constitutes "hand-waving" for me.

By that philosophical standard, we must have completeness because we are putting proofs first. This is what Vaananen means when he says that we realistically can't choose between full semantics and Henkin semantics if we wish to use second-order logic for mathematical foundations. If you want to preserve that sort of rigor, you really do need completeness and by extension you really need to default to Henkin semantics, which ends up just being multi-sorted FOL.



