Part of the fun of logic is realizing that there are multiple forms of it which change one's language and ability to express ideas. They also don't dominate one another—some statements' truth depends upon what logic they're formalized within.
Model theory is good here in that it studies the connection between logical language and more "physical" models. Some of these models can give a relatable account of what a logic "means". For instance, Boolean logic corresponds to set operations, and we can use that correspondence to give a meaning to statements in Boolean-like logics.
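A tiny sketch of that correspondence, if it helps (the universe, sets, and predicates here are just made-up illustrations, written in Haskell): conjunction lines up with intersection, disjunction with union, and negation with complement relative to a fixed universe.

    import qualified Data.Set as Set

    universe, evens, bigs :: Set.Set Int
    universe = Set.fromList [1..10]
    evens    = Set.filter even universe     -- the predicate "x is even"
    bigs     = Set.filter (> 5) universe    -- the predicate "x is big"

    evenAndBig, evenOrBig, notEven :: Set.Set Int
    evenAndBig = evens `Set.intersection` bigs    -- "and" is intersection
    evenOrBig  = evens `Set.union` bigs           -- "or" is union
    notEven    = universe `Set.difference` evens  -- "not" is complement in the universe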
Intuitionistic logic is an interesting one if you're not familiar with it. It can be seen as the logic of working with "demonstrations", building and analyzing them. For instance, if I have an object "X" such that witnessing it immediately convinces someone of the truth of "1+1=2", and if I have a whole collection of such items, then intuitionistic logic talks about how to compose and decompose collections of these things. Another interesting operation is "not" such that "not X" is a tool for using any X you happen to have to show that the world is inconsistent. This acts as a refutation of the existence of values like X!
This differs from set-like logics because, for instance, "not (not X)" is not the same as X: being able to show that if you had (a way to prove the world inconsistent if you had an X) you could show the world as inconsistent... it's a far cry away from actually having an X!
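Here's a minimal sketch of that asymmetry, using Haskell types as a stand-in for "demonstrations" (the names are mine, not standard notation for intuitionistic logic):

    import Data.Void (Void)

    -- "Not a" reads as: any a you hand me lets me show the world is inconsistent.
    type Not a = a -> Void

    -- From an actual X you can always build "not (not X)" ...
    doubleNegIntro :: a -> Not (Not a)
    doubleNegIntro x = \refute -> refute x

    -- ... but there is no general term of type Not (Not a) -> a,
    -- which is exactly the gap described above.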
Intuitionistic logic is interesting for two reasons. First, it models communication and persuasion by being an algebra of "persuasive items" like X. Second, it's "constructive" and thus works similarly to how programming languages express things through their creation.
Correct me if I'm wrong, but I've always seen the maths as the easy part. The translation from the English language (with its ambiguous grammar and sentence structure) into something with a very precise meaning, such as maths, is the hardest part, and it's where people with good translation skills excel.
I think much of maths is that translation. You learn a lot of mathematics and you recognize powerful, subtle, deep patterns. Then you spend your time looking for those patterns in the world.
It's the same thing with even the most basic "translation problems" that kids in schools learn. They're armed with arithmetic and high-school algebra so they get arithmetic and high-school algebra "real world" scenarios to analyze.
At higher levels it's the same. The problems are just harder and the patterns aren't necessarily given to you. Sometimes, oftentimes, nobody even knows what they are.
Gödel's arithmetization of logic is one hell of a beautiful mathematical idea, up there with linear algebra or probability theory. One of its big selling points for me was how easily it solves the unexpected hanging paradox. (The simplest arithmetization of the judge's self-referential statement is a statement about numbers that can be shown to be self-contradictory.) It's also cool how it turns Russell's paradox into Gödel's theorem, Curry's paradox into Löb's theorem, etc. The connections to algorithms and computability theory are also neat (Gödel's idea of "effectively axiomatized system" is any computer program that can print sentences, which has just the right amount of generality and connects to the halting problem in the obvious way). To me arithmetization is simply the right approach to logic, which easily subsumes everything that makes sense and rejects everything that doesn't.
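To make "arithmetization" concrete, here's a toy Gödel-style numbering in Haskell (my own simplified illustration, not Gödel's exact scheme): a formula, viewed as a sequence of symbol codes, becomes a single natural number, so statements about formulas can be rephrased as statements about numbers.

    -- encode a list of symbol codes as one number: 2^c1 * 3^c2 * 5^c3 * ...
    primes :: [Integer]
    primes = sieve [2..]
      where sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

    godelNumber :: [Integer] -> Integer
    godelNumber codes = product (zipWith (^) primes codes)

    -- godelNumber [1,3,2] == 2^1 * 3^3 * 5^2 == 1350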
I've read about a third of it. It's too long and too focused on other topics for me personally.
I also read the book 'Gödel's Proof' a long time ago, before I had much familiarity with pure math, and remember it describing a very interesting sort of 'architecture' that Gödel had put together—but I left still wondering about any applications other than producing the incompleteness theorems. Maybe I'd get more out of it now though...
I read GEB a few years ago, and it struck me as a book full of ideas that would have transformed me had I been encountering them for the first time. I think the book may have been so influential that its big ideas had largely filtered out into places where I'd encountered them. If I'd read it at 19, say, I think it would have rocked my world.
Note that I'm mostly talking about the book's first half (about incompleteness and canons and whatnot). The second half (ants, etc) struck me as much more dated.
I lost interest when certain other ideas made me think his basic project of accounting for consciousness mathematically was deeply flawed (and it wasn't Penrose's argument about formal systems!). I did find it entertaining though, so maybe I'll revisit before too long.
I'm curious what shift it caused for you, if you don't mind saying.
At least for me, it wasn't so much about computers or computer science, as it was the first time I really deeply considered what consciousness is. It's actually the book, along with The Selfish Gene, that made me dismiss the necessity of any kind of 'soul', fresh out of Catholic School.
The more time I've spent reading about and considering the nature of subjectivity, the more I believe arguments from evolution or computer simulation are irrelevant. They're talking about something different.
I liked "Godels Theorem Simplified". Although it looks like it is now a lot more expensive than it used to be. Maybe get it through and interlibrary loan.
My take: it's a human means of systematically exploring implications of already accepted beliefs.
There are implicit and explicit forms of it.
The implicit form is something our brains do automatically and is a consequence of their structure. Perhaps the way the brain models things intrinsically does not allow for what we would call contradicting statements (though it contains many separate models which, if unified, would contain contradictions); then the space of desirable implications is narrowed by a motivation to find a certain kind of implication (i.e. there is typically a goal when engaging in reasoning—we aren't often indifferently interested in all implications), combined with the fact that contradictions aren't allowed. It seems like implications usually amount to discovering that some entity belongs to a class it wasn't previously known to belong to, at which point it inherits the attributes of other things in that class.
The explicit forms seem like attempts to model the implicit form in mathematical language, in order to bring logical processes more under the control of conscious thought, or 'executive function' (plus reasoning on paper extends working memory). It seems possible that with the 'correct' formulation, a logic could be viewed as a theoretical science describing the properties of the brain's structure that give rise to reasoning.
The interesting point of the given article was that, pre-mathematical-formalization, logic was backwards of that purpose, and it often remains so even today. The search does not start with assumptions and seek conclusions; rather, it starts with conclusions and then seeks which sets of axioms fulfill them. The vast majority of mathematicians don't care about the axiomatic foundations of mathematics, nor mind if it gets entirely reformulated; they already know math works, and it's irrelevant which particular set of axioms is used to prove that.
The assumptions => theorem structure is an artificial abstraction over a very different logic process. Symbolic logic is a framework for mechanical validation, rather than anything like how people actually do logic.
> The search does not start with assumptions and seek conclusions; rather, it starts with conclusions, and then seeks which sets of axioms fulfill them.
Right, there was an issue with labeling justification 'logic'. An interesting point, but not as interesting as the title of this story—which I prefer to discuss :)
I was getting at the general process of inference making in the human brain (not even conscious inference making necessarily), which I see as the real root of all the different things we call logic.
Speaking of classical logic, I came across a book on Fred Sommers' work entitled "The Old New Logic" (David Oderberg is editor). I haven't read it yet, so I can't comment, but based on some excerpts and reviews, it sounds like a good read. I thought I'd mention it here for those interested. Link: https://mitpress.mit.edu/books/old-new-logic
Also, those interested in the fallout of Russell's paradox are encouraged to look into Lesniewski's mereology (this is a topic that occurs in my master's thesis, where I attempted to formalize the structure of Aristotelian living things mereologically using many-sorted logic). In short, Lesniewski held that the axiomatic set theory of the day betrayed the intuitions of naive set theory and resulted in paradoxes. He attempted to dispense with these paradoxes by proposing a formal system known as mereology that replaced the primitive element relation of set theory with the parthood relation (other relations can be defined in terms of parthood, e.g., overlap, disjointness, and proper/improper parthood, depending on which one you take as primitive, and vice versa). Link: https://plato.stanford.edu/entries/lesniewski/
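To give a flavour of the "other relations can be defined in terms of parthood" point, here's a rough sketch (the finite domain and the names are my simplification, not Lesniewski's actual formalism):

    -- assume a domain of things and a primitive (improper) parthood relation
    type Thing = String

    partOf :: Thing -> Thing -> Bool
    partOf = undefined  -- supplied by whatever domain you're modelling

    -- two things overlap when some third thing is part of both
    overlaps :: [Thing] -> Thing -> Thing -> Bool
    overlaps domain x y = any (\z -> partOf z x && partOf z y) domain

    disjoint :: [Thing] -> Thing -> Thing -> Bool
    disjoint domain x y = not (overlaps domain x y)

    -- a proper part is a part that isn't the whole thing itself
    properPartOf :: Thing -> Thing -> Bool
    properPartOf x y = partOf x y && x /= y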
"The idea that ordinary language is expressively inadequate to account for mathematical (or even logical) reasoning became a recurring theme in the ensuing tradition of mathematical logic, so much so that the term ‘symbolic logic’ became synonymous with this tradition. Doing logic came to mean simply working with special symbols, not with ordinary words. In this respect, it is worth noting that the humanist authors had criticised the Latin of scholastic logicians precisely as ‘too artificial’, and even the Greek language that Aristotle relies on for syllogistic logic is regimented and removed from ordinary ways of speaking at the time. In a sense, perhaps a certain degree of ‘artificiality’ is at the core of logic throughout history, as it operates at levels of abstraction that are at odds with ordinary language usage."
True that, but were/are there alternatives? Difficult to say.
> In a sense, perhaps a certain degree of ‘artificiality’ is at the core of logic throughout history, as it operates at levels of abstraction that are at odds with ordinary language usage
Of course, you could say the exact same thing for programming languages (after all, what do programmers do if not "simply working with special symbols, not with ordinary words"? And isn't this the primary criticism of programming brought by people who find it distasteful -- that expressing their fuzzy intuitions in a precise enough manner for the computer feels like a pointless exercise in frustration?)
Really, the only difference between the proponents of mathematical logic over informal prose in the early 20th century and the proponents of programming languages over informal prose today is that computers leave no room for fuzzy thinking and fuzzier exposition. The philosopher or mathematician could hand-wave over the holes in their arguments, but the programmer can only hand-wave so much before his unexecutable pseudo-code is exposed as useless.
The gold standard would be a form of speech that is natural and ordinary, but that also avoids misleading, omitting, and eliding essential details.
In other words, a language that is formal enough to be parsed, checked, and generated by a proof assistant program. But also natural enough to be read as if it's prose written by a native speaker of a natural language.
I'm of the opinion that such a gold standard does not exist, and so we're forever doomed to incremental improvement.
On a slightly orthogonal note, I think that not only can you think of programming languages this way, but that you should—programming languages and formal logics are ultimately instances of the same general idea. I've found this to be a consistent and powerful view for reasoning about programs, programming languages, proofs and logics in a uniform sort of way.
The Curry-Howard correspondence is an important idea that's a specific instance of this, showing a direct relationship between specific kinds of logic and specific kinds of typed programming languages, but I think it's useful to think about programming languages that don't correspond to well-studied logical systems in a similar way.
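For anyone who hasn't seen the correspondence spelled out, here's the usual small dictionary in Haskell (textbook examples, nothing specific to this thread):

    import Data.Void (Void, absurd)

    -- propositions as types, proofs as programs:
    --   implication ~ function type, conjunction ~ pair,
    --   disjunction ~ Either, falsity ~ Void

    -- modus ponens: from (A -> B) and A, conclude B
    modusPonens :: (a -> b) -> a -> b
    modusPonens f x = f x

    -- "A and B" implies "B and A"
    andComm :: (a, b) -> (b, a)
    andComm (x, y) = (y, x)

    -- "A or B" implies "B or A"
    orComm :: Either a b -> Either b a
    orComm (Left x)  = Right x
    orComm (Right y) = Left y

    -- ex falso quodlibet: the empty type has no cases to handle
    exFalso :: Void -> a
    exFalso = absurd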
This helps explain why the study of programming languages—and especially programming language theory—isn't really about programming languages: it's more about abstraction and reasoning in general. It's deeper than you'd suspect from the name. I'd go so far as saying that programming language theory is less a theory of programming languages and more a general theory of computation from a language point of view, as compared to "normal" theoretical CS.
> programming languages and formal logics are ultimately instances of the same general idea.
That's deeply questionable. The CH-correspondence breaks down as soon as your computation includes non-termination, concurrency, timing, distribution etc. Classical logic doesn't really have wholly convincing CH-correspondences either.
It seems to me that constructive proofs are a special class of programs, and fall under the purview of programming languages, but not the other way around.
Non-termination is modeled as falsity: a function that takes an A and never terminates has type A -> F, or ~A.
There's some connection between classical logics and concurrency, though it's hard to get at. You can use callcc to implement threads, and Gentzen's classical sequent calculus has multiple conclusions.
Classical logic is perfectly modeled with callcc, which has the type of Peirce's Law, and which allows one to write terms with the types of the law of excluded middle or double negation elimination.
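For concreteness, this is the standard typing of callCC in Haskell's continuation monad, which (up to the Cont r wrapper) has the shape of Peirce's law. It's just the textbook observation, not a claim that it settles the classical CH story:

    import Control.Monad.Cont (Cont, callCC)

    -- Peirce's law: ((A -> B) -> A) -> A
    peirce :: ((a -> Cont r b) -> Cont r a) -> Cont r a
    peirce = callCC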
That means the CH correspondence is not interesting. If all non-terminating processes are falsity, types say nothing whatsoever of interest (from a specification and verification point of view) about non-terminating computation.
The T. Murphy PhD thesis doesn't deal with distribution because it lacks the key feature of distribution, which is partial failure.
As far as I'm aware the work on modelling double negation by call/cc doesn't actually provide a 1-to-1 correspondence between programs and proofs. I've forgotten the details but I think only some programs come out being proofs.
> The CH-correspondence breaks down as soon as your computation includes non-termination, concurrency, timing, distribution etc.
No it doesn't. Each of those features leads to a different logic. You can even reason about the composition of logics via category theory to see how various features interact with each other.
> Classical logic doesn't really have wholly convincing CH-correspondences either.
I'm not aware of a fully fledged approach to logical composition that tells me an interesting story about the combination of logics that arise in computation, I know that people work on this, but as far as I can see, this work is far from finished. Please point to papers if you know more than that.
> first class continuations
As far as I'm aware, Griffin's work, which was a stroke of genius, does not provide a 1-to-1 correspondence between classical proofs and programs. That's why even today, 1/4 of a century later, people work on CH for classical logic.
It's true that the CH approach to classical logic is by no means exhausted. I see this as a feature of classical logic's rich structure. In any case, I disagree that this is a break down of the CH correspondence. Analyzing the type theories related to classical logic demonstrates to me that the CH correspondence is a gift that keeps on giving.
It's kind of cutting edge and has only yet been applied to a few situations, but it seems to be the correct notion of composing type theoretic (and hence logical) structures in compatible ways. Another approach I like that is strictly logical is Nuel Belnap's display logic, which characterizes a ridiculously broad range of logics and proves a general cut elimination theorem for them. The relationship between the categorical approach and the display logic approach is not entirely clear yet.
In any case, the CH correspondence is a rich and very active area of research. Just because we haven't nailed it down exactly yet doesn't mean it's broken! Our current understanding is a shadow of a much deeper concept.
The essence of CH is "proofs = programs" and "program execution = cut elimination". As far as I understand this does not work precisely as of Jan 2017 for most forms of computation beyond strongly normalising pure functions.
If you are broadening what you mean by CH, then fair enough, but it's shifting the argument.
The point is that they're both formal systems of reasoning, not that they have exactly the same properties. The techniques and abstractions from both PL and formal logic often generalize well to both.
The whole drive of mathematics (and logic) is ultimately that syntax is the semantics.
This, of course, is annoying for almost all communication, particularly when we want to use fuzzy, hard to define terms. Languages are useful because you can pretty heavily butcher the syntax, but still be understood. (And arguably, that fuzziness is what drives new thoughts.)
> “It’s logical, but logic has nothing to do with reality.” Logic is the art or skill of non-contradictory identification. Logic has a single law, the Law of Identity, and its various corollaries. If logic has nothing to do with reality, it means that the Law of Identity is inapplicable to reality. If so, then: a. things are not what they are; b. things can be and not be at the same time, in the same respect, i.e., reality is made up of contradictions. If so, by what means did anyone discover it? By illogical means. (This last is for sure.) The purpose of that notion is crudely obvious. Its actual meaning is not: “Logic has nothing to do with reality,” but: “I, the speaker, have nothing to do with logic (or with reality).” When people use that catch phrase, they mean either: “It’s logical, but I don’t choose to be logical” or: “It’s logical, but people are not logical, they don’t think—and I intend to pander to their irrationality.”
> If logic has nothing to do with reality, it means that the Law of Identity is inapplicable to reality.
That may be the case. If our brain is only capable of forming self-consistent schemas (consider how it reacts when encountering new facts that contradict existing schemas: https://en.wikipedia.org/wiki/Cognitive_dissonance), then the world's apparent consistency may only be a consequence of our contemplating it through the human brain. (To be clear though, it's consistency within single schemas—not between, where contradictions abound.)
> If so, then: a. things are not what they are; b. things can be and not be at the same time
Can someone be a hero and not be a hero at the same time? I think there are some subtleties to this issue arising from the nature of language. Certainly at our scale (i.e. not quantum scales—although even there I don't really think 'real' inconsistency is happening, but it's less clear cut) a certain kind of consistency has very strong empirical support (it's probably our best supported hypothesis!), so it would be adaptive for us to build models of the world that would reflect that property—but this doesn't mean we've hit on something deep about the nature of reality.
To be honest with you, I just about fell out of my chair reading your comment. Seriously, this is the first time I've posted a relevant quote from Ayn Rand on the web and received a substantive and thoughtful response.
From what I know, pure logic or even math are useless without assumptions. This is what reason and science give us: a standard set of axioms from which logic and math can lead us to useful conclusions.
Thing is, there is no universally accepted "standard set of axioms" in mathematics (let alone in all of "science"). Not since around the early 1900s anyway. Even assumptions that may seem obvious to most people, such as the famous Axiom of Choice, are only accepted provisionally in modern mathematical texts. In the extreme case of what is known as "intuitionistic logic", even some of the most widely accepted laws of logical reasoning are rejected. If you add the fact (Gödel's Incompleteness Theorem) that any sufficiently rich, consistent set of axioms will always leave statements that are impossible to prove or disprove, the whole situation with logic becomes, well, complicated.
This is based on type theory (and has a model in set theory), but isn't as powerful as the logics used in Coq and similar theorem provers (the Calculus of (Inductive) Constructions).
I'm not sure what the parent commenter means by "reason and science" giving us axioms, but for mathematics the most common axiomatic system is ZFC set theory. Some of the axioms are easy to understand even without a mathematics background:
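For instance, here are two of the standard ZFC axioms (my own choice of examples):

    % Extensionality: sets with exactly the same elements are equal
    \forall x \forall y [ \forall z (z \in x \leftrightarrow z \in y) \rightarrow x = y ]

    % Pairing: for any a and b there is a set whose members are exactly a and b
    \forall a \forall b \exists c \forall x [ x \in c \leftrightarrow (x = a \lor x = b) ]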
I would consider these to be mathematical axioms. I was looking for what pitaj was describing as coming from outside of math. Maybe I'm just misunderstanding what pitaj was saying.
I don't think I would agree with the general statement that science gives us axioms. In normal usage, "axioms" are not the same as scientific "givens" or "data" even though it makes sense to interpret the word that way. In math and logic, the axioms we use are typically very foundational. We might say, "the empty set exists," or "for all x, it holds that x = x." There's no need to invoke science for these sorts of axioms and we can make many useful conclusions without it.
You can create any set of axioms you want, but unless they carry a reference to something that happens "outside of Math", in the "Real World", they are only useful for nerd-sniping.
I've been surprised not to find anything for ctrl-F "Bayes". It turns out that traditional deductive logic is simply a special case of applying Bayes' rule to find the state of some binary variables given the others, in the case where the probabilities are only 0 or 1. See for example the examples in Barber's "Bayesian Reasoning and Machine Learning" (p. 39).
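A toy version of that reduction (my own numbers, not the examples from Barber's book): with every probability pinned to 0 or 1, Bayes' rule collapses into ordinary deduction, here modus tollens.

    -- toy model: rain always makes the ground wet, and nothing else does
    pWetGivenRain, pWetGivenNoRain, pRain :: Double
    pWetGivenRain   = 1.0
    pWetGivenNoRain = 0.0
    pRain           = 0.5

    -- Bayes' rule for P(rain | not wet)
    pRainGivenNotWet :: Double
    pRainGivenNotWet =
      ((1 - pWetGivenRain) * pRain)
        / ((1 - pWetGivenRain) * pRain + (1 - pWetGivenNoRain) * (1 - pRain))
    -- evaluates to 0.0: "not wet, therefore no rain", i.e. modus tollens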
Deductive logic is sort of like the machine code for reasoning. It encompasses everything, but it is hard to see in daily usage, just like machine code. Bayes Rule is just one part of it.
If you look at the Mizar mathematical library you will find a formalization of Bayes Rule using plain FOL.
In fact, one of my Logic II (an intermediate logic class) exercises was formalizing a version of Bayes' rule in classical first-order logic.
I would add that Frege, Russell and Whitehead did not just want to logicize mathematics. They thought that reality itself followed this logic, and so were doing metaphysics. The later Wittgenstein is a decisive refutation of this idea, and this is a key reason logic has gone out of fashion.
this is amusingly missing the point. logic is, by its nature, an abstraction away from reality. it's a set of specific (not general) rules for evaluating statements of language (whether that language is natural, mathematical, or artificial/computer) such that meaning can be disambiguated.
"reality", whatever it is, has no such requirement that it can be expressed in unambiguous statements.
> it's a set of specific (not general) rules for evaluating statements of language
Not the OP, but you could say that no "language" exists outside of "reality", and stretching it further you could also say that all types of languages (formal, mathematical, artificial/computer), including the associated "rules", are social constructs which depend heavily on the time and place of their "existence", i.e. on "reality".
Until we meet some quantum-like entity which might "talk" to us from outside the boundaries of this physical Universe we're doomed to be solipsistic: we construct "abstractions" that are meant to distance us from "reality" only to find out in the end that "reality" was behind those "abstractions" all along.