Hacker News

Which formal logic do you suggest to replace it?



Second-order logic. (Not really to "replace", but to augment--to deal with those situations where first-order logic creates issues.) But I understand that not everyone would agree with that (apparently Scott Aaronson himself falls into this category, based on some of his comments in the comment section of the article).


Second-order logic is generally avoided in discussions of mathematical foundations because it is not very "foundational"; it is too vague or "mysterious" to serve as a comfortable foundation. Unlike FOL, it lacks a complete proof system.

In a formal set theory, like ZFC, we postulate the existence of the set of natural numbers. This is something most logicians are comfortable with as being "mathematically foundational" ("God made the natural numbers; all else is the work of man"). Then, to describe the set of sets of natural numbers we use the powerset axiom that defines what exactly we mean by "all sets of natural numbers." Then, Löwenheim–Skolem appears and says that there are models of this set theory where the number of sets that satisfy this definition is countable (not inside the logic, but looking from the outside), to which we say: well, so be it. This happens because there is a fundamental limitation to the power of describing things using finite strings of symbols. Second-order logic can just say, no, when I say "all subsets" I mean absolutely all. But can we precisely formalise, i.e. write down in symbols, what we mean by "all" using some basic, "simple" foundation? The answer is no. Second-order logic just vaguely gestures toward "all" and says, "by all we mean all". It circumvents the limitations of formalisation -- i.e. describing things in finite sentences -- by alluding to non-describable things.
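For reference, the Skolem phenomenon described here is the downward Löwenheim–Skolem theorem; stated schematically (my paraphrase of the standard countable-language case):

```latex
% Downward Löwenheim–Skolem (countable first-order language):
\text{$T$ has an infinite model}
  \;\Longrightarrow\;
  \text{$T$ has a model $M$ with } |M| = \aleph_0
```

Applied to ZFC itself, this yields a countable model that nevertheless satisfies the sentence "the powerset of the naturals is uncountable", which is exactly the "not inside the logic, but looking from the outside" situation above (Skolem's paradox).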

Most logicians feel it isn't very "foundational" to rely on allusions to things that aren't themselves reducible to axioms in a basic foundation.

Even when higher-order logics are used in formal mathematics, like various set theories or Isabelle's HOL, they are (at best) picked to be as powerful as (multi-sorted) FOL rather than "actual" SOL.


> In a formal set theory, like ZFC, we postulate the existence of the set of natural numbers.

But if your theory is a first-order theory, you cannot pin down "the set of natural numbers" to the one you and everyone else is actually thinking of; no matter what set of axioms you use to define "the set of natural numbers" (the Peano axioms, etc.), there will be semantic models other than the set of natural numbers you are thinking of that satisfy those axioms. The only way to precisely pin down "the set of natural numbers" to one semantic model--the one you actually want--is to use second-order logic.
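A standard way to make the contrast concrete (notation mine, not from the thread): first-order PA can only state induction as an axiom schema, one instance per formula, while second-order PA states it as a single axiom quantifying over all subsets:

```latex
% First-order induction: a schema with one instance per formula \varphi
\bigl[\varphi(0) \land \forall n\,(\varphi(n) \rightarrow \varphi(n+1))\bigr]
  \rightarrow \forall n\,\varphi(n)

% Second-order induction: one axiom quantifying over every subset P
\forall P\,\Bigl[\bigl(P(0) \land \forall n\,(P(n) \rightarrow P(n+1))\bigr)
  \rightarrow \forall n\,P(n)\Bigr]
```

Since a countable language has only countably many formulas, the schema constrains only countably many subsets, which is what leaves room for non-standard models; the second-order axiom excludes them (Dedekind's categoricity theorem).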

So it seems to me that there is a genuine tradeoff here. If you want enough expressive power to be able to precisely pin down what semantic model your theory is talking about, you need to accept that your theory will be making statements that are non-constructive--you can have an axiom that says every set has a power set, but you can't construct that power set in the general case. If you want your theory to be constructive, so that for every set that your theory says exists, you can see how to construct it explicitly, you need to accept that your theory will have multiple semantic models and only one of them is the one you want, and your theory can't pin it down for you.


It's not about being constructive. ZFC easily allows defining sets non-constructively. It's about accepting a theory whose meaning refers to objects that are not generally viewed as sufficiently basic. There seems to be a limitation on precisely describing uncountable infinities using finite strings of characters from a finite alphabet.


> It's not about being constructive

I agree that "constructive" was not a good term. This...

> It's about accepting a theory whose meaning refers to objects that are not generally viewed as sufficiently basic.

...is fine as a better phrasing for what I was thinking of. My point is that there is a tradeoff between the property "the theory is built entirely from objects that are sufficiently basic" and the ability to precisely pin down a single semantic model; having either one means giving up the other.


The usual objection, which I happen to agree with, is that this is a false tradeoff. The two are inextricably linked.

In the case of second-order logic, although it seems like we're getting a precise, single semantic model, in fact all we've done is push the ambiguity to the metatheory which we use to interpret the semantics of second-order logic.

For example, there's the infamous "CH statement" of second-order logic, which has a model if and only if the Continuum Hypothesis holds in the metatheory. Sure, that's precision if you believe that the Continuum Hypothesis has an absolute truth value (that is, you can confidently say "yes, CH is true/false in reality"), but at the end of the day that's really just the same thing to me as choosing a particular model for a first-order theory and saying "yes, that model is the one true model" (which I don't really philosophically agree with in the first place, since I'm usually not a strong Platonist).

Even the choice to use full semantics instead of Henkin semantics (an example of what is essentially multi-sorted FOL as @pron refers to) is an ambiguous choice one must make in deciding second-order semantics.


> In the case of second-order logic, although it seems like we're getting a precise, single semantic model, in fact all we've done is push the ambiguity to the metatheory which we use to interpret the semantics of second-order logic.

So how would this work in, say, the case of the natural numbers? What ambiguity gets pushed to the metatheory if I claim that the second order theory of the natural numbers pins down a single semantic model (the "standard" natural numbers and that's all)? If it matters, assume we are using full semantics, not Henkin semantics (see further comments on that below).

> Even the choice to use full semantics instead of Henkin semantics (an example of what is essentially multi-sorted FOL as @pron refers to) is an ambiguous choice one must make in deciding second-order semantics.

I'm not sure I would call this choice "ambiguous" since the difference between the two choices is perfectly clear, and only one of them (full semantics) preserves the key property of second order logic, of pinning down a single semantic model.


Caveat: All of this is essentially philosophy at this point and so there's no rigorous argument I can present that second-order logic is wrong. But I think you've asked reasonable questions so let's dive in.

So the CH statement (let's call it CH_2 where 2 is for second-order) I was referring to actually uses the empty vocabulary (i.e. only the usual symbols from second-order logic). In particular that means it also applies to second-order Peano Arithmetic.

So does your model of the natural numbers satisfy CH_2 or not (or put differently, is your proposed model of the natural numbers actually a model of the natural numbers)? Depends on your metatheory.

More problematically, what this demonstrates (because CH is independent of ZFC) is that second-order logic loses the absoluteness of semantic truth. That is, the statement "M is a model of my theory T under a given interpretation I" can flip-flop between true and false and back again as you continue to expand your background model of your metatheory (think of e.g. Cohen forcing).

So while you may have specified exactly one model, it's getting pretty hard to see just exactly which one it is.

First-order logic does not have this phenomenon, which is why we can often get away with not going into too much detail about our metatheory in FOL (and why recursive metatheories built of FOL "all the way down" aren't too problematic). If M is a model of T under I then it will continue to be a model of T under expansion of my background model. So as long as I sketch out the basics of my metatheory (and by implication my background model), I know I'm good.

Philosophically this means that in the realm of SOL the only way you can be sure that you've really got "the" standard model and haven't mistakenly grabbed another one is if you can confidently state the truth value of every one of a never-ending stream of theorems which cannot be proved using your metatheory.

Indeed statements like "Theorem A is independent of theory T" are very murky and difficult to understand in a second-order setting. For example, CH_2 is definitely not provable using second-order PA, but does that mean we can accept systems with either CH_2 or Not(CH_2)? Well that seems to be false if we take "a single canonical model" at face value, which would imply that only one of the two can be "true," but then how do you decide whether to accept CH_2 or Not(CH_2)? No syntactic argument will suffice since we've lost completeness.

That's not to say I think that second-order logic has no value. I think second-order logic, interpreted using first-order semantics, is a valuable language. However, I agree very much with Vaananen's contention that in practice, SOL reduces to multi-sorted FOL.

> if second-order logic is used in formalizing or axiomatizing mathematics, the choice of semantics is irrelevant: it cannot meaningfully be asked whether one should use Henkin semantics or full semantics. This question arises only if we formalize second-order logic after we have formalized basic mathematical concepts needed for semantics. A choice between the Henkin second-order logic and the full second-order logic as a primary formalization of mathematics cannot be made; they both come out the same.

http://www.math.helsinki.fi/logic/people/jouko.vaananen/VaaS...


> in the realm of SOL the only way you can be sure that you've really got "the" standard model and haven't mistakenly grabbed another one is if you can confidently state the truth value of every one of a never-ending stream of theorems which cannot be proved using your metatheory

Let me try to restate this and see if it helps.

I say I've specified a second order formal theory that has the standard natural numbers as its unique semantic model. But "the standard natural numbers", from the standpoint of set theory, is a set, and I can apply set theory operations to it to obtain other sets. For example, I can apply the power set operation. Or I can apply the construction that is used to obtain omega-1 (the first uncountable ordinal). So it's not really true that my semantic model "only" includes the standard natural numbers. Unless I simply disallow any set theory operations altogether (which throws away most of the usefulness of the theory), I have to accept that my semantic model includes sets I might not have thought of when I set up my formal theory.

So, now that I have recognized that, for example, I can apply two different operations to my set N (the set of standard natural numbers), the power set operation P and the omega-1 construction O, I then have an obvious question: are P(N) and O(N) the same set, or different sets? And if CH is indeed logically independent of my formal theory, then the answer is that there are two semantic models of my formal theory, not one: one model in which P(N) and O(N) are the same set, and another model in which they aren't. So I haven't fully pinned down a unique semantic model after all.

Or, if CH is not independent of my formal theory, then I don't know what, exactly, my semantic model is--what sets are in it--until I figure out whether CH holds in my model or not, and that might take a while. And there are actually an infinite number of possible cases where this kind of issue might arise.

Is this a fair summary of the issue you have been trying to describe?


A late reply, but hopefully late is better than never.

Sort of. I think talking about it as if it's a phenomenon unique to sets (although indeed things are ultimately traceable back to the close relationship between second order logic and set theory) obfuscates things somewhat.

Let's take a step back. You give me a theory (second order Peano Arithmetic, let's call it PA2). Let's call the unique model (from the point of view of the metatheory) that satisfies PA2 N. Let's call our metatheory T and the implicit background model of T M.

We decide to test whether we both have the same N by asking whether N satisfies a sentence s in the language of PA2.

It turns out that whether N satisfies s depends on a theorem t that is independent of T. Do we agree on the same N? Perhaps we have different M that respectively do and don't satisfy t.

Okay that's fine, maybe our different M can be unified under a single M* that subsumes both to give us a definitive answer on N. Unfortunately it turns out that as we look at increasingly larger M* the answer of whether N satisfies s flips back and forth. There is no universal M* we can appeal to.

That's basically what's going on here when I say that all second-order logic has done is push ambiguity to the metatheory (and metamodel).

Now you can maybe argue that second-order logic is at least less ambiguous. After all the overall structure of N is assured right? We don't have non-standard natural numbers right? Well this is a bit tricky to say. Perhaps our metamodel M is actually ill-founded (which is possible from the perspective of a metametatheory even if it satisfies something like ZFC), causing N to have non-standard natural numbers as well. Now there is the possible objection that if you mess with the metamodel all bets are off. But the problem is that second-order logic with full semantics, because it lacks completeness, constantly relies instead on the metatheory and by extension metamodel to actually do proofs. That is, you rarely have interesting, new proofs in second-order logic, but rather meta-proofs using the metatheory. So it is totally fair game to start examining the metamodel if everything you're doing is meta-proofs!

Moreover, from my perspective as someone who disagrees with strong, mono-universe Platonism (a term I made up that refers to people who believe that every axiom has an objective truth value and we cannot "choose" axioms in any real sense but rather are only on a journey to discover the "true" axioms of mathematics), giving up completeness is a really hard pill to swallow.

This makes it really hard to understand notions like "independence" and "consistency." If we lose completeness, consistency is no longer a sufficient criterion to admit a theory, which makes it hard to define something like independence. If we say both t and Not(t) are consistent with T, does that mean both t and Not(t) are perfectly admissible axioms to add to T? The answer in the absence of completeness is no. We must investigate further. And in the case of categoricity, such as with PA2, the answer is a definitive no: only one can be admissible, not both.

This can lead to really strong philosophical positions that I disagree with. For example, ZFC2 (second-order ZFC) is categorical. So either CH or Not(CH) is "true." But both are potentially consistent with ZFC2 (I'm handwaving this because second-order logic losing completeness makes even talking about consistency kind of weird unless you have completeness for your metatheory and metamodels).

But CH seems to me so clearly artificial. It's just a weird artifact of random cardinals we've made up. You're telling me that I'm either allowed to use it or not allowed to use it, but we don't know which? That seems way too strong to me.

This is especially problematic if we think back to M* where it's not clear at all when we should stop going up the hierarchy of M*s, since our answer keeps going back and forth.

That's why although I'm okay with second-order logic as a formal tool, I prefer to think in a metatheory that embraces completeness (you'll see I did that when I started talking about t being independent of T), which usually ends up being some version of FOL. Otherwise I think it's very difficult to work with mutually incompatible mathematical axiom sets for different problems since you're basically assuming completeness at a philosophical level when you do that.

At the end of the day, from my point of view, while models are important tools for examining mathematical theories as objects in their own right, the realm of formal mathematics is proofs. I hold the ability to completely formalize an argument in a computer, even if to actually do so would be extremely tedious and even impractical, as my standard for mathematical proof. A proof that is actually impossible to completely formalize constitutes "hand-waving" for me.

By that philosophical standard, we must have completeness because we are putting proofs first. This is what Vaananen means when he says that we realistically can't choose between full semantics and Henkin semantics if we wish to use second-order logic for mathematical foundations. If you want to preserve that sort of rigor, you really do need completeness and by extension you really need to default to Henkin semantics, which ends up just being multi-sorted FOL.


The traditional reason that logicians do not trust second-order logic is that attempts to formalize it usually end up leaning back on first-order logic, e.g. by depending on a background theory like ZFC, which is a first-order theory, or on Henkin semantics, which is also essentially a first-order semantics and shares all the same paradoxes of first-order logic.

Hence the solutions that second-order logic presents to the issues of first-order logic are really just smoke and mirrors that fall apart and turn into the same problems of first-order logic on closer inspection.


Can you point me to (or explain briefly) what second-order logic is?


In first order logic one can only quantify over objects of the domain: in "for all x such that ..." the variable "x" represents an object.

In second order logic, one can quantify over relations (and functions, sets...): "for all P such that ...".
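A small illustration (my example, not the parent's): with second-order quantification one can even define equality itself via Leibniz's law, something first-order logic must take as a primitive:

```latex
x = y \;:\Longleftrightarrow\; \forall P\,\bigl(P(x) \leftrightarrow P(y)\bigr)
```

That is, two objects are equal iff every predicate true of one is true of the other.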

The Wikipedia page [1] has more details and is quite readable. Be careful about falling into a rabbit hole ;)

[1] https://en.wikipedia.org/wiki/Second-order_logic


Thanks, I will read that more. I am still confused; maybe you can tell me: for "normal math" (what is usually taught in a formal 3-year degree), is first-order logic enough (for getting all the results)? I am trying to see where second-order logic fits into what I use already.


Simply put, first-order logic is enough to do almost all of mathematics and is certainly enough to cover all the results of an undergraduate degree.

More generally speaking, the term "first order logic" is ambiguous by itself, since we are not specifying our domain of discourse. For example, take the group axioms (e.g. there exists an identity element, every element has an inverse, etc.). In this context our domain of discourse is the elements of a single group. Hence while we can say things such as "there exists only one identity element," which is existential quantification over the elements of the group, we cannot say things such as "there exists a group with only one element," since that is quantification over groups themselves and is disallowed by first order logic.

However, if we change our domain of discourse and instead e.g. use ZFC (a first order theory) as our starting point and simply augment ZFC with the language of group theory, then both statements are expressible, since both elements of a group and a group itself are sets in ZFC.

So insofar as something like ZFC is enough to do almost all of mathematics, first order logic is enough to do almost all of mathematics.
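Sketching the two levels in symbols (my formalisation; the exact Group predicate would need spelling out in full): the group axioms quantify over elements only, while the one-element-group claim quantifies over groups, which ZFC handles by treating a group as a set like any other:

```latex
% FOL over elements of a fixed group (identity, inverses):
\forall x\,(x \cdot e = x) \qquad \forall x\,\exists y\,(x \cdot y = e)

% One level up, inside ZFC, a group is itself an object:
\exists G\,\exists m\,\exists e\;
  \bigl(\mathrm{Group}(G, m, e) \land \forall x \in G\;(x = e)\bigr)
```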


Thanks a lot! I think now I start to understand this.


You can do a lot of math (almost all of it?) without ever touching logic. It's only important when you ask yourself things like "hmm, okay, the reals are closed under taking supremum of bounded sets, isn't that too much to ask for? how do I know reals are possible?". And of course the great logicians showed us that you can find "significant problems" already in natural numbers.


That's not what I meant. Certainly you are using some logic for doing math, do you agree with this? And if you agree, is the logic we use 1st order, or both 1st and 2nd order?


The overwhelming majority of math is done using informal logic, similar to using pseudocode to describe an algorithm. Almost no one proves mathematical theorems using formal logic, whether it's first order logic or otherwise.


Normal practice is not that formal (except for logicians ;), but it would definitely be second order logic. It's common to quantify over "all functions such that..." or over sets of sets, for example. Second order logic is more expressive.


Thanks. I don't understand why "for all elements of a set" is first order and "for all functions" is second order, since functions (let's say R->R) are elements of the powerset of the cartesian product RxR. So functions in math are usually part of first order logic, no?


As @dwohnitmok said, it depends on the domain you consider. If you have a proposition quantifying over both objects and functions operating on said objects, it's second order. But if you have a proposition considering only said functions as objects, it is first order (with a different domain: not the base objects, but functions over those objects).

And again: you can ignore such subtleties for most of mathematics.


> Which formal logic do you suggest to replace it?

Your question assumes that there's any "logic" to it at all and that we should "replace" one "logic" with another "logic". That's a very strong assumption.


"... for the purposes of formalizing mathematical reasoning?"


From the outside this all looks like a "Glasperlenspiel" (glass bead game) to me. "Mathematical reasoning" is assumed to have a "logic" behind its "operations", but why that should be so is never told; it's just assumed.


Most modern mathematicians don't work with foundations, but it was a hot topic at the beginning of the 20th century. Most mathematicians today wouldn't even be able to tell you when they are using the axiom of replacement, an important but technical piece of ZFC. Before the work on foundations, mathematicians frequently relied on intuition to explain their ideas. The point of mathematical precision is to be able to transmit mathematical ideas with fidelity to other mathematicians; common intuitions are important for that even today.

However, it is frequently the case that mathematical intuition breaks down and paradoxical ideas need to be reckoned with. One historical example is Weierstrass's "monster": a function that is everywhere continuous but nowhere differentiable. The intuition at the time was that a continuous function could only fail to be smooth at a set of isolated points, a very tame set. It is no coincidence that he came up with this example in a period when his school was trying to recast calculus on more rigorous footing.

As for the motivation for founding mathematics on formal systems, it's because set theory is so useful for writing down mathematical ideas, but naive set theory is plagued with contradictions, such as Russell's paradox. Common language, too, is afflicted with contradictions, such as Richard's paradox. The question was how to separate the wheat from the chaff so one does good mathematics and not nonsense. Influential mathematicians like Hilbert set out programs to establish set theory and arithmetic on a sound footing so that their use would not be held suspect. This required a more thorough mathematical development of logic, as started by Boole, Frege, and others (my recollection of the precise history of these ideas is vague at this point).

So concurrently we saw the mathematization of logic, arithmetic, and language and the establishment of a reasonable set theory that did not give rise to antinomies. None of the answers were entirely satisfactory but they were good enough that common mathematical activity could continue without too much doubt. When there are questions of precision, one tries to fall back on the informal mathematical language of sets which is commonly presumed to be formalizable in systems such as ZFC.

A lot of development since then in formal mathematical systems has to do with the study of computation and the mechanization of mathematical reasoning.



