Proofs are Programs – 19th Century Logic and 21st Century Computing (2000) [pdf] (ed.ac.uk)
152 points by michaelsbradley on Jan 26, 2017 | 77 comments



People like to see logic and computing in this way and the author is a great storyteller. But it is a somewhat misleading perspective that is kind of similar to "Whig historiography" (https://en.wikipedia.org/wiki/Whig_history):

> Whig history is an approach to historiography that presents the past as an inevitable progression towards ever greater liberty and enlightenment, culminating in modern forms of liberal democracy and constitutional monarchy.

Just replace the "greater liberty" bit with what computer scientists (and the author) argue is the right way of doing computer science today.

I'd love to read a historical treatment of the topic that is written by someone who is a historian rather than a computer scientist twisting history to support their world view.


That's not a charitable interpretation ("twisting history to support their world view"), and your comment seems to be conflating the language-logic view itself with the history presented here. The relationship between programs and logic is compelling for many reasons and can stand independent of how the two happened to be developed; my take is that the historical story here—which is definitely too pat—is presented more as an illustration of the idea than anything else.

This is no different than how math textbooks present the history of any particular mathematical abstraction or theorem: they don't go in depth on all the false starts and dead ends people explored before making real progress. That's certainly something interesting but it has no place—and no room—in textbooks dedicated to explaining abstractions. The simplified story of how something could have been developed is, I believe, more relevant to understanding the importance of an idea than the way it was actually developed. The real history is full of noise, a function of the person who first happened upon the idea. It's not hard to imagine thousands of alternate worlds where the exact development of an idea was slightly different while reflecting the same fundamentals—those differences end up as irrelevant details due to chance.


> This is no different than how math textbooks present the history of any particular mathematical abstraction or theorem: they don't go in depth on all the false starts and dead ends people explored before making real progress. That's certainly something interesting but it has no place—and no room—in textbooks dedicated to explaining abstractions. The simplified story of how something could have been developed is, I believe, more relevant to understanding the importance of an idea than the way it was actually developed.

Unfortunately, textbooks rarely do even this. That is, they don't supply nearly enough justification/motivation for the complicated definitions they posit and the structures they develop. For the canonical (and accessible) bit of philosophy of mathematics on this, I highly recommend Lakatos' "Proofs and Refutations" [1]. As a demonstration, it gives a very non-historical and exhaustive explanation for the definition of an "honest" polyhedron with respect to Euler characteristics [2], and is liberally footnoted with the actual history. Unfortunately, textbooks have not improved much since it was written in 1963.

[1]: https://math.berkeley.edu/~kpmann/Lakatos.pdf [2]: https://en.wikipedia.org/wiki/Euler_characteristic


Ah, that's a fair point. Most math textbooks are not that great pedagogically—perhaps I was just projecting how I think mathematical ideas should be introduced :).

I'm definitely trying to do this myself: whenever I teach an abstract concept (more from PL than math), I've started to explicitly note both the formal definition and why it's interesting or relevant. Many explanations of abstract concepts I run into forget one of these components or the other, and it never ends well...


> This is no different than how math textbooks present the history of any particular mathematical abstraction or theorem: they don't go in depth on all the false starts and dead ends people explored before making real progress.

That's true, but it is important to point out when the story is told in a specific way to promote a certain ideology. I'm not saying it's wrong to promote an ideology (on the contrary; ideology is often very important) or that it's rare, but readers should be aware that a certain ideology is being promoted. Wadler is trying to promote functional programming by presenting it as a natural result of logic, and in many ways the essence of his story is correct (from other, more neutral accounts that I've read). But what readers may not know is that logic has not been the only motivating force in theoretical computer science (nor the most powerful), that there are other useful analogies between computation and other phenomena, and that what's presented here is just one perspective on computation.

For example, Turing, the father of theoretical computer science, who only gets a brief mention in this account[1], turned his attention to neural networks, biological computation and possibly even quantum computation (although he died while working on some physics results). Much of the main thrust of theoretical computer science was similarly influenced by information theory and the search for the "physical laws" of computation, which led to complexity theory, something that is nearly entirely missing from the logical treatment.

[1]: Both Wadler and Bob Harper often try to downplay Turing's role for ideological reasons (which, I think, is pretty funny considering the history of computer science), and neglect to mention that Church's relatively narrow perspective caused him to miss something very basic about computation, while Turing, who was opposed to logicism[2], i.e. reducing math to logic, was therefore able to grasp something much more fundamental about computation than Church. Interestingly, Brouwer, who is another figure that provides inspiration to proponents of FP, did not fall into this trap, and had the philosophical breadth that led him closer than Church to Turing's more general view of computation, by trying, like Turing, to think -- philosophically rather than formally -- about how the mind of a mathematician works regardless of any specific formalism.

[2]: https://mdetlefsen.nd.edu/assets/201037/jf.turing.pdf Some interesting quotes:

> On 1 December 1933, the minutes in the Moral Science Club records: “A.M. Turing read a paper on ‘Mathematics and Logic’. He suggested that a purely logistic view of mathematics was inadequate; and that mathematical propositions possessed a variety of interpretations, of which the logistic was merely one.”

> He was engaged in a program of putting Russell’s new logic into place, critiquing, not only its position or office, but perspectives on mathematics being forwarded by the “purely logistic” views. Turing was insisting that mathematical discourse stand on its own autonomous feet. And he was rejecting the idea that Principia — or any other construal of mathematics as a whole in terms of symbolic logic — offers mathematics a “foundation” of any ultimate or privileged sort, or reduces it to logic. A “purely logistic view” is but one (philosophically tendentious) “interpretation” among others, and is therefore “inadequate”. This is thoroughly consonant with the opening of Wittgenstein’s 1932 Cambridge lectures “Philosophy for Mathematicians” we have quoted above: Principia is simply part of mathematics, and there is nothing wrong with mathematics before its foundations are laid.

> Turing developed an “anthropological” approach to the foundations of logic, influenced by Wittgenstein, in which “common sense” plays a foundational role.


About Turing's critique of Russell's logicism: is it really applicable to proofs-as-programs? It seems to me that constructive mathematics is pretty far from logicism (not that I understand the subject very well, sorry if I'm talking nonsense).


I think it is applicable to "programs as proofs" (i.e. the opposite direction, which Wadler also makes), but more as an analogy. Once you say programs are proofs, then you reduce programs to logic, while computation is a natural (as well as mathematical) phenomenon that exists outside of any formalism.


I'm not sure that's true. Certainly if you fix a particular programming language, that's equivalent to picking a particular logical formalism (and by Gödel's incompleteness theorem we know that can't be the whole story).

But saying that proofs are programs is a more general statement. Computations are always possible to write down as programs, and by the Church-Turing thesis we know that we can pick any reasonable computational model (Turing machines, say, or untyped lambda calculus) and know that we have captured all of them--we don't have to worry about "missing" some extra computations outside the formalism.

The flip side is that without a type system, you can't in general tell which programs are proofs and which are not. But that's as it should be---truth is always an undecidable notion. This means that if we want a foundation for mathematics we should consider _realizability_, not _typeability/provability_, as the most basic notion.

Bob Harper wrote a blog post about this: https://existentialtype.wordpress.com/2012/08/11/extensional...

(Edit: ok, I see now that you already discussed this with fmap in another thread.)


I have no problem with "proofs are programs", only with "programs are proofs", and I think that most of theoretical computer science refuses, almost on principle, to view computation merely as programs expressed in any particular formalism, just as physicists would say that a certain ODE is a description of a falling apple, not that a falling apple is an ODE, nor that an apple can only fall if we can describe it in some formalism.

See the discussion here about realizers vs proofs: https://news.ycombinator.com/item?id=13495547. Perhaps you can shed more light on the subject. Harper's post is interesting, but while it does offer a less narrow view than Wadler's, using a different definition of proof (which I don't think deserves the name proof, but never mind), it is still too narrow a definition of computation (and his introduction of types seems to muddy the water and complicate matters further, partly because the use of types necessitates yet another computation, because types are certificates). Computation is a wider notion than constructive mathematics (as every construction is a computation but not every computation is a construction, and two different computations can yield the same construct).

As to the choice of a foundation of mathematics, I think this is largely an aesthetic matter. It is certainly a separate discussion from computation in the sense that computation does not depend on the choice of mathematical foundation, nor that the choice of mathematical foundation must be built around the notion of computation. I like Turing's view that there are no privileged foundations, and a mathematical formalism should be chosen based on utility, on a case-by-case basis.


I would recommend Andrew Hodges. Read this to get a taste: Did Church and Turing Have a Thesis about Machines?[1] (spoiler alert: probably)

For the history of logic, Jean van Heijenoort's[2] annotated anthology, From Frege to Gödel[3] is fascinating.

And, BTW, I think your hunch (about Whig history) is correct.

[1]: http://www.turing.org.uk/publications/ct70.pdf

[2]: https://en.wikipedia.org/wiki/Jean_van_Heijenoort

[3]: https://www.amazon.com/Frege-Godel-Mathematical-1879-1931-Sc...


Ooof, this is too true. As someone greatly interested in type theory and the advancement of programming, I think this is definitely an issue to watch out for.

Side note: an even better example than Whig history might be Marx's historical materialism. (Also an analysis I am sympathetic to but equally fraught with this peril.)


I love Wadler's papers. He did one on Monads that was approachable and revealing for me. I also love logic and the predicate calculus.

Highly recommend reading this one. :)


Some of his conference talks can be found on YT too, recommended


Which paper?



Anyone really familiar with this: are non-constructive proofs, things like Cantor Diagonalization or Erdős's Probabilistic Method, programs? Are they just non-terminating programs?

I can see Cantor Diagonalization as a non-terminating program really easily, but I don't understand what the Probabilistic Method looks like as a program.


Cantor's diagonalisation is certainly constructive. Given a sequence of sequences of binary digits, it gives a way of constructing a sequence not occurring in the sequence of sequences. The resulting sequence is as computable as the input.

But to answer your question: Many mathematicians regard their (classical) proofs as algorithms. They just allow themselves to use an oracle call for any well-formed question:

Say I want to prove B. Then, at some point in my proof, I formulate a sentence A, and make a call to the oracle. My proof/algorithm then branches: in one branch I assume A is true, in the other I assume A is false. If I can continue such that each branch yields B, then my proof works (classically). [0]

The constructive critique is that such an algorithm cannot be executed by humans, or Turing machines for that matter.

Also there is a whole continuum of stronger and stronger oracles below this ultimate oracle allowed by classical logic — which is an interesting continuum to study. Often one can locate an exact class of oracles needed for a given classical proof to work out.

[0]: This is the «Church encoding» version of the law of excluded middle: For all B, we have (A → B) → (¬A → B) → B, which is equivalent to the usual formulation (A ∨ ¬A).
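
To make the footnote concrete, here is a small Haskell sketch (the names Not and lemChurch are mine; Void is from Data.Void). It shows one direction of the equivalence: given the usual A ∨ ¬A, the Church-encoded form is just case analysis.

    import Data.Void (Void)

    type Not a = a -> Void

    -- Footnote [0]'s formulation: for all B, (A -> B) -> (Not A -> B) -> B.
    -- Given the usual excluded middle (Either a (Not a)) it is just case analysis:
    lemChurch :: Either a (Not a) -> (a -> b) -> (Not a -> b) -> b
    lemChurch oracle ifA ifNotA = either ifA ifNotA oracle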


Gotcha. So basically the "axiom" of the law of excluded middle would become an oracle, and then with a call to that oracle a problem requiring the law of excluded middle becomes a proof. Basically - the axiomatic decisions made by mathematicians can be represented as programs by turning them into oracles.

In this case, with the axiom of choice implemented as an oracle, the program for Cantor's Diagonalization can be written.

This makes a kind of ultimate sense. Rice's Theorem shows how a program with an oracle about the behavior of a Turing Machine is contradictory, and this implies that "even with a finite set of axioms" there are no proofs about general classes of Turing Machines and thus there are statements independent of mathematics (a kind of Godelian statement).

Is that right?

If so, that's profoundly simple and also incredible to me. (Hopefully I haven't excited myself into thinking I "got it".)

Could you help me understand what Erdős's Probabilistic Method looks like as a program? Which oracles might be required and how you would call them?


I was thinking about LEM today. Without the oracle, you can prove that the double negation of the law of the excluded middle is true, which can be represented with the following type:

    negneg_lem :: ((Either a (a -> r)) -> r) -> r
I'm using a polymorphic r to represent the void type, and I'm using the idea that "not a" is the same as "a -> r".

The proof of negneg_lem is just

    negneg_lem f = (f . Right)(f . Left)
No oracle there. If you had an oracle

    oracle :: ((a -> r) -> r) -> a
then you would get the normal LEM as

    lem = oracle negneg_lem
An interesting thing about the type (a -> r) -> r, i.e., double negation, is that it is the type for continuation-passing style. This suggests that a classical proof is something which is allowed to do backtracking search, but I'm still trying to understand exactly how that is.

(Also, the oracle is somewhat absurd. Really, if you have a function which takes functions, you can produce an element of the function's domain? It might almost make more sense if r was a void type.)


Yes, classical propositional logic can be given a constructive interpretation using call-with-current-continuation. This was first worked out by Timothy Griffin [1], and there is a cute story by Philip Wadler about making a deal with the devil [2, section 4].

[1] http://www.cl.cam.ac.uk/~tgg22/publications/popl90.pdf [2] http://homepages.inf.ed.ac.uk/wadler/papers/dual/dual.pdf


As black_knight said, you don't need the axiom of choice for Cantor's diagonalization proof---that particular proof is completely constructive and can be written as an ordinary non-oracular program.


Huh looks like I need to do a lot more reading to understand all of this! :)

I'm still trying to understand what the Probabilistic Method looks like as a program. Any thoughts on that?


I think it is hard to talk about Cantor diagonalization as constructive or non-constructive because it is reasoning about uncountable sets.

A more down to earth example of a non-constructive proof would be proofs using non-constructive elements of classical logic like excluded middle (P V ~P), double negation elimination (~~P -> P) or Peirce's law (((P->Q)->P)->P). For example, this proof[1].

There are some similarities between non-constructive logic and non-purely-functional programming. It is possible to see language features such as mutable assignment, exceptions and call-with-current-continuation as analogous to the non-constructive parts of classical logic. These features all make the language depend on the order of evaluation, which makes the program non-constructive[2].

Anyway, I had a really good reference for this but I can't find it right now :( Hopefully someone else can provide a better link explaining how call/cc is equivalent to Peirce's law.

[1] http://math.stackexchange.com/a/1118851 [2] http://stackoverflow.com/questions/24711643
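
In the meantime, here is a minimal Haskell sketch of the connection (the name peirce is mine; callCC and Cont come from Control.Monad.Cont): under the continuation monad, callCC has exactly the shape of Peirce's law.

    import Control.Monad.Cont (Cont, callCC)

    -- Peirce's law: ((P -> Q) -> P) -> P.
    -- Reading "Cont r a" as "a, classically", callCC inhabits the corresponding type:
    peirce :: ((a -> Cont r b) -> Cont r a) -> Cont r a
    peirce = callCC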


I've always seen some close parallels between Peirce's approach to logic and Dijkstra's approach to program construction. Both start with an "empty" object and then transform it according to simple rules until the finished product satisfies some chosen set of predicates.

But most everyone here knows Dijkstra, so I'm mostly just replying to thank you for bringing up Peirce. He's wildly under-acknowledged and, more importantly, under-appreciated. Truly an American genius.


Nit: I'm not sure I would consider Cantor Diagonalization non-constructive. I wouldn't consider it constructive either, though. It's used to prove things like the uncountability of the reals: how do you constructively prove that no bijection exists between the naturals and the reals?


Sure, you can add an oracle for the axiom of choice and you're done (note that AoC -> excluded middle). A proof is then "constructive" in the same sense (e.g. it can also be written as a program there).


This may be slightly OT - but the one thing I have always wondered about the 19th century is why Babbage didn't go down the route of Boolean logic/math for computation? Furthermore, why he didn't incorporate and use relays (from telegraphy)? For that matter, why didn't Hollerith?

Ultimately, to me it seems all connected to this "fixed" idea (that is, thinking "inside the box") that computation was about numbers - that there was only calculation and tabulation - counting, iow.

While Boole came up with the math/logic - few pursued the idea of making machines to work with it (a few -did- exist, but were obscure, iirc); embodying logic using electrical circuits would have to wait. Then there was the "revelation" of computation being about symbols, not numbers - which opened everything up.

I'm just kinda rambling here; my apologies. And, hindsight is always easier than foresight, of course. It just bugs me, and makes me wonder what "obvious" ideas we are missing today, that will change the world, that we won't know until we look back. I guess I wonder why many of the most world-changing ideas (in the long run) are often very simple, and why it took so long for humanity to see/discover them. I wonder if anyone has studied this - or if it is something that can be studied...?


The first relay was invented by Henry in 1835. Babbage built his first difference engine in 1822. Mechanical arithmetic dates from the Leibniz stepped reckoner of 1694, which, like Leibniz, was way ahead of its time. It had a hardware multiplier. Babbage clearly knew about that. He was basically adding control logic around mechanical arithmetic.

As for Hollerith, he did use relays. The 1890 Census gear was just counters, but he followed up, establishing the Tabulating Machine Company, which became the Computing-Tabulating-Recording company, which became International Business Machines Corporation, which is today's IBM. Tabulators gained more relays and electrical components over the decades, although they still used mechanical adding wheels.

19th century relays were bulky, unreliable, and needed manual adjustment and contact cleaning. With telephony, telegraph, railroad signaling, and IBM efforts, they were made much more reliable and somewhat smaller in the early 20th century.


I would think that computers were being made to be useful first and foremost. And binary computing was not really useful for logic and number crunching until a number of advancements.

Babbage analytical engine - 1833

Boolean algebra - 1847

two's complement - 1945


Your list is missing a very important advancement from 1937:

https://en.m.wikipedia.org/wiki/A_Symbolic_Analysis_of_Relay...


To be fair, Ada Lovelace was probably the first to think about using it for something other than pure computation. I may be wrong, but I believe there's nothing special about the representation being binary. You just map "logic" to digits (one or more). The base (or a generic set) you obtain the digits from doesn't matter.


> Lovelace was probably the first to think about using it for something other than pure computation.
This is deeply misleading. For example, the idea to have automata make music, and more generally for automata to be creative, predates Lovelace. Music-making automata were a staple of the Renaissance. For example, the mathematician and astronomer Johannes Kepler, when visiting the "Kunstkammer" of Rudolf II in 1598, was amazed at an automaton representing a drummer who could "beat his drum with greater self-assurance than a live one" [1]. Incidentally, Kepler corresponded with Wilhelm Schickard on the latter's "arithmeticum organum", the first ever proper mechanical calculator (it could do addition, subtraction, multiplication and division). Automating creativity was an idea with much currency in the Renaissance. Indeed, some of the key advances in mechanical automata, which later evolved into computers, were driven by the desire to automate creativity [2].

The "conceptual leap" that some people lazily ascribe to Lovelace wasn't hers! Using numbers to represent syntax (which you lazily attribute to Lovelace too) is much older and can be found, for example, in Leibniz's 1666 "Dissertatio de arte combinatoria", where one finds a detailed exposition of a method to associate numbers with linguistic notions. I have no idea if Leibniz was the first to do this. Leibniz also built some early calculators/computers, and is thus a cornerstone of the tradition that led to the emergence of computers. This tradition was known in Babbage's time, and most likely to Babbage.

[1] W. Zhao, Rendering the Illusion of Life: Art, Science and Politics in Automata from the Central European Princely Collections.

[2] D. Summers Stay, Machinamenta: The thousand year quest to build a creative machine.


I don't think that is what is meant by "be the first to" here. I'm no expert in English, but in my native language the analogous expression could mean "if somebody would think of X, she surely would".


I don't think that applies to English. As I read it, the sentence is definitely a claim of priority.


There's a crucial distinction between "would be" and "was". You'd be correct if the sentence used "would be", but "was" is very definite.


> Babbage considered using number systems other than decimal including binary as well as number bases 3, 4, 5, 12, 16 and 100. He settled for decimal out of engineering efficiency - to reduce the number of moving parts - as well as for their everyday familiarity.

http://www.computerhistory.org/babbage/engines/


> a few -did- exist, but were obscure, iirc

In a similar vein, the next big thing has probably been discovered, it just isn't big yet.


That's certainly a very language-and-logic perspective on things. But one of the reasons Church missed the general concept of computation whereas Turing didn't was precisely because Church was stuck in the formalism of logic, and couldn't recognize that computation extends well beyond it. So proofs are programs, but so are, say, metabolic networks, neuron networks, and just about any discrete dynamical system. But are metabolic networks proofs? That's not so clear, and therefore it's not so clear that there is a universal equivalence here.


It's the program itself that is the proof, not the computer running the program.

In a sufficiently powerful type system you can express any proposition as a function type. A proof of that proposition is also a program. The proof checker is a type checker. If your proof is invalid, your program won't type-check.

Coq (https://coq.inria.fr/) is a real-world example of exactly this.
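
Even in a language much weaker than Coq you can see the shape of the idea. A minimal Haskell sketch (the function names are mine): the type is the proposition, the definition is its proof, and the type checker plays the role of proof checker.

    -- "A and B implies A": pairs are conjunction, so projection proves it.
    projL :: (a, b) -> a
    projL (x, _) = x

    -- Modus ponens: from A -> B and A, conclude B.
    modusPonens :: (a -> b) -> a -> b
    modusPonens f x = f x

(With the caveat, discussed elsewhere in the thread, that Haskell's types make for a trivial logic: undefined inhabits every type.)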


But what if the program that runs the proof will never terminate? For example I could write a 'proof' for some property of natural numbers by just iterating through all of them and every time validating the property.

Isn't a proof only valid if it can be validated within a reasonable amount of time, either by some human or by a computer?


Quoting from Wikipedia: "Because of the possibility of writing non-terminating programs, Turing-complete models of computation (such as languages with arbitrary recursive functions) must be interpreted with care, as naive application of the correspondence leads to an inconsistent logic. The best way of dealing with arbitrary computation from a logical point of view is still an actively debated research question, but one popular approach is based on using monads to segregate provably terminating from potentially non-terminating code (an approach that also generalizes to much richer models of computation,[6] and is itself related to modal logic by a natural extension of the Curry–Howard isomorphism[ext 1]). A more radical approach, advocated by total functional programming, is to eliminate unrestricted recursion (and forgo Turing completeness, although still retaining high computational complexity), using more controlled corecursion wherever non-terminating behavior is actually desired."
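
One way to picture the monadic approach the quote mentions (a hedged sketch, roughly Capretta's Delay/partiality monad; it is not the only formulation):

    -- A possibly-non-terminating computation either has a value now,
    -- or needs one more step. Provably terminating code never needs the
    -- wrapper; possibly-diverging code is confined to it.
    data Delay a = Now a | Later (Delay a)

    instance Functor Delay where
      fmap f (Now a)   = Now (f a)
      fmap f (Later d) = Later (fmap f d)

    instance Applicative Delay where
      pure = Now
      Now f   <*> d = fmap f d
      Later f <*> d = Later (f <*> d)

    instance Monad Delay where
      Now a   >>= k = k a
      Later d >>= k = Later (d >>= k)

    -- 'never' is a legitimate inhabitant: the computation that never finishes.
    never :: Delay a
    never = Later never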


That is not what programs as proofs means. It is not that the program finds or performs the proof; the program is the proof, in a certain formal sense. The thing it proves is its type. The validation step is type-checking. For most programming languages, type-checking is decidable, so testing whether the proof is valid is fine. However, most programming languages correspond to trivial proof systems (every type is inhabitable, so every proposition is provable).

This is part of the problem with proofs-as-programs: it's a beautiful theory, and inspires a lot of useful work, but if taken too literally it's wrong and useless.


Yes. Coq doesn't validate non-terminating proofs.


What about programs that, by design, run indefinitely and do useful things? Say, a server that just responds to requests as they come. Is that "not a real program" for purposes of CH?

Or what about ones with side effects?

It seems to assume away a lot of (what is called) programs in actual practice.


There are accounts of infinitely-running programs as proofs; they rely on the proof technique of coinduction, which is about proving properties of infinite data structures; a counterpart of the more familiar technique of induction, which lets you prove things about finite data structures.
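
A rough Haskell illustration of that counterpart (my own toy example, not a full account): an infinite stream defined corecursively never terminates as a whole, but every element arrives after finitely many steps, and coinduction is what lets you reason about such definitions.

    data Stream a = Cons a (Stream a)

    -- Corecursive: the whole stream is infinite, but each element is produced
    -- in finite time ("productivity"), the coinductive analogue of a recursive
    -- call being made on a smaller argument.
    from :: Integer -> Stream Integer
    from n = Cons n (from (n + 1))

    -- Observing finite prefixes is how you interact with it.
    takeS :: Int -> Stream a -> [a]
    takeS k _ | k <= 0  = []
    takeS k (Cons x xs) = x : takeS (k - 1) xs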

Side effects are a little harder to see proof-theoretically; for some effects there are reasonable stories (exceptions, for example, are a special case of continuations, which have connections to classical logic), but for many I think the story is still murky.

The Curry-Howard correspondence is more of a lens through which to approach PL theory, or an evolving research agenda, than it is a settled account of all programming practice.


As long as there is any structure or internal logic to what the program does, it can be reflected in its types.

For your example of a server, look at the Servant library in haskell. It allows you to define type safe routes and links, ensuring that your program exactly matches the API, and that no links are ever broken.

https://haskell-servant.github.io/
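
Roughly, the API lives at the type level and the handler must match it to compile. A sketch from memory, so treat the details as approximate; ItemAPI and the handler are made up:

    {-# LANGUAGE DataKinds, TypeOperators #-}
    import Servant
    import Data.Proxy (Proxy (..))
    import Network.Wai.Handler.Warp (run)

    -- GET /items returning a JSON list of strings.
    type ItemAPI = "items" :> Get '[JSON] [String]

    -- The handler must produce [String]; anything else is a type error.
    server :: Server ItemAPI
    server = return ["proofs", "programs"]

    main :: IO ()
    main = run 8080 (serve (Proxy :: Proxy ItemAPI) server)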


What is commonly known as Curry-Howard correspondence (the duality between programs/proofs, computing/math) actually goes way deeper.

The reason is that "computation" is very useful concept but it has taken us ages to define what it means precisely. But that didn't stop people of reinventing it in other areas.

Curry-Howard is only one of many isomorphisms between programs and other areas. See "Physics, Topology, Logic and Computation: A Rosetta Stone"[1] and in particular Table 4 on pg. 66. The Curry-Howard correspondence is the last two columns of the table; the other three columns are other "correspondences" in topology, physics and category theory.

[1] http://math.ucr.edu/home/baez/rosetta.pdf


Guess I'll not be a coward & hurry! To study Howard & Curry.


I'm not sure if you can really cast this as a debate between Church and Turing. It is certainly a difference between Brouwerian constructivism and this presentation of the Curry-Howard isomorphism. I think you'd be much happier with the (Brouwerian) realizability interpretation of logic. The idea here is that a proof is an (untyped) program, and types are predicates on programs. By Curry-Howard, propositions are certain special types, so this gives you a way of defining when a program p is a proof of a proposition A, written below as "p |= A". Here are some examples:

p |= A /\ B: if p evaluates to a pair of a proof of A and a proof of B.

p |= A -> B: if p implements a total function mapping proofs of A into proofs of B.

p |= exists n, A: if p evaluates to a pair of a number k and a proof of the statement A where n is replaced by k.

This is, for instance, implemented in NuPRL. It validates all laws of intuitionistic logic, but is also open ended. You can always add new axioms, so long as you can implement a realizer for them. E.g. if P is a boolean predicate on natural numbers, the statement "(~forall n, ~P n) -> exists n, P n" is true, since we can realize it using linear search, which terminates because of the assumption on P. Similarly, we have a realizer for the statement that "all functions on real numbers are continuous", so what you get here is truly distinct from classical logic.
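
For concreteness, the linear-search realizer is just unbounded search (a sketch; searchWitness is my name for it). The classical hypothesis is what guarantees termination; the program itself doesn't know that.

    -- Realizes (~forall n, ~P n) -> exists n, P n for a decidable P on naturals:
    -- search for the least witness. Termination is exactly what the hypothesis buys you.
    searchWitness :: (Integer -> Bool) -> Integer
    searchWitness p = head [n | n <- [0 ..], p n]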

Of course, realizability has the problem that it already presupposes that you know how to show properties of arbitrary programs! On the other hand, if you have a proof for a positive statement you can always check the truth of special instances by running a program and looking at the results.

Anyway, to get back to your question. Under this interpretation, you can implement a proof on a metabolic network, in the same way as you can implement it using sticks and stones. Whether or not that has anything to do with the "Curry-Howard" correspondence is another question, though.


Are constructive realizers the same thing as C-H proofs, though? A realizer is indeed any Turing-computation that decides a proposition, but I don't think it's the same thing as a proof in the C-H sense. The latter must be a lambda expression or a term of some other typed formalism, rather than an arbitrary description of a Turing computation. I.e., you could implement a realizer for a proposition with a metabolic network, but the network is not a proof that it's the realizer you're looking for. In other words, I don't think you can reconstruct the proof from the realizer, once the realizer is no longer in the formalism of the proof and is just a TM. In fact, I think that the halting theorem precludes you from telling what proposition a TM realizes (if any), and so a TM can no longer be the proof of the original proposition it realizes. The two coincide, I think, only if the realizer is still expressed as a typed term.

And I didn't try to cast it as a debate between Church and Turing (which never existed, AFAIK), but to point out that equating programs with proofs is one rather narrow way of looking at programs. Arbitrary programs expressed in arbitrary ways are rarely proofs of anything (or anything nontrivial, that is), even if they happen to correspond to realizers. I mentioned Turing to point out how this narrow view may result in missing out on fundamental qualities of computation.


I think historically the Curry-Howard correspondence grew out of the observation that combinatory logic and Hilbert proofs look very similar. There does not seem to be a large overarching story that doesn't make up some history after the fact.

What I can say is that these days the Curry-Howard correspondence is typically taken to state that "Propositions are Types". And for typed languages, the difference between intrinsic types (Church style) and types as properties of programs (Curry style) are just two extremes on a very fluid spectrum.

You are absolutely right that this realizability interpretation as stated doesn't really help you define a complete notion of truth, since it doesn't solve the (unsolvable) problem of telling which proposition a program realizes. This is one of the reasons why Brouwer was so opposed to basing everything in mathematics on a system of inference rules. Without "intuition" about whether a particular program realizes a proposition you cannot really get off the ground.


My point isn't so much about a "notion of truth" (classical or intuitionistic), nor is it about "proofs as programs", but about "programs as proofs", which is really only true when programs are viewed narrowly, and in a specific formalism that Wadler is trying to promote. In other words, I take issue with presenting computation as equivalent to logic. The two obviously have things in common, and ideas flow in both directions, but presenting them as being one and the same, or two sides of the same coin, is misleading.


> But are metabolic networks proofs?

Are they not equivalent to computers running some program? Equivalence is transitive.

But I don't think you will be able to get any useful proof from one of them. That's because they are hard to program; they are still running some proof.


But a "computer running a program" is not an interesting proof of anything. The proof depends on how the program is written. For example, a program written in an untyped language encodes only a very trivial proof by the C-H correspondence (of `tt ⇒ tt` or something like that) even though it performs the same computation of a program written in another formalism that expresses a much more interesting proof. In other words, the proof does not lie in the dynamical process of computation, but in properties of the language used to express the computation. This is a much less universal claim.


(nitpick) Are metabolic systems and neuron networks really discrete (as opposed to analog) systems?


I think that the common belief is that they are (or, at least, that a discrete model can capture their behavior), although I don't think it matters, as the common belief is that analog computers and digital computers have the same computational power.


What does it mean to say that analog and digital computers have the same computational power? My intuition would be something like: for any function calculable by an analog computer, for any desired degree of precision, there is a Turing-computable function which approximates it to that degree of precision. But you have to be a bit finicky, since you can't actually feed an analogue input to a digital computer, either. So it has to be some relation between the precision of the input and the precision of the output. And of course there's the other direction, that analogue computers can simulate digital ones.

Do you know whether anything like this has been formalized and proven?


Turing himself addressed this issue, and actually his famous 1936 paper talks about computing real numbers, not natural numbers. See http://cs.stackexchange.com/questions/35343/analog-computers... for a brief discussion. In short, we don't know of anything that an analog computer can do that a digital computer can't. If you think a bit about your point on precision, you'll see that "desired digital precision" actually covers a problem that is essential to the analog computation itself. In other words, try to think what "infinite precision" actually means.

But let me give an example that would make it more concrete. Suppose you want to build a computer that simulates the behavior of n bodies in gravitational interaction in space. It's obvious that with a digital computer, there would eventually be discrepancies between the actual system and the simulation (which would only grow). But now consider this: suppose that instead you want to do the simulation by building a "computer" which is nothing more than a copy of the original system (i.e., the "simulation" is composed of another set of n bodies in space). Is it possible to have this analog "simulation" more accurate than a digital simulation? The answer is no, because even supposing a true continuum of positions and velocities exists in the first place, in order to reconstruct the system precisely, you would need to pass an infinite amount of information to the "simulation", which is impossible.


I had math professors that would express this sentiment in undergrad. It is very true.

However, problem sets generally aren't graded like they're programs, haha.


Curry-Howard and Wadler's "Propositions as Types" talk at StrangeLoop* opened my eyes pretty early on to the sweet, sweet utility of logic in my everyday programming. It sounds odd to start with, but I never realized why these things mattered prior to viewing this talk.

Props to Wadler. Best ~40 minutes I've ever spent.

*https://www.youtube.com/watch?v=IOiZatlZtGU


Thank you, that was a delightful talk, I did not know Wadler was so enthusiastic (but I have to say, most logicians and theoretical computer scientists are).


Does there exist research that uses results from computation theory and applies them to physics? Admittedly I am not sure what exactly I mean by that or what I am searching for, but I believe there is some deep relationship between limits in computation and limits on what one can say about the physical universe (which could be the same).


"Physics, Topology, Logic and Computation: A Rosetta Stone"[1] might be a starting point.

[1] http://math.ucr.edu/home/baez/rosetta.pdf


I am having my doubts these days.

I would like to read my programs as proofs. Then I ask myself: what is the Amazon.com front page proving?

we talk about proof writing in Haskell or Agda. What about Python? Can we prove things there?

And finally I think about the statements I would actually like to prove. Much of mathematics requires second-order logic or beyond, such as the Fundamental Theorem of Calculus.

So for me the triad between Math and Logic and Computer Science is less than air-tight and will remain that way.


The Amazon.com front page generating code is a proof relating some properties of its output to its input and that it halts. However it's also basically incomprehensible, so I doubt anyone alive could actually tell you precisely what it's proving.

Your ability to prove something in any language is dependent on having a formal semantics for that language, or at least for a cognitively manageable subset that you're willing to restrict yourself to. The language semantics and memory model serve as axioms.

Given how American programmers are trained, it's not realistic to expect them to be able to think of programs in a logical semantic way. There is something of an art to it and it's hard to learn a new way of thinking when you're satisfied with a way you already have.

I agree that you can have an enjoyable hobby or make a living programming without really understanding what you're doing beyond intuitively and that's totally fine. Computing time is cheap and most applications don't cause any great harm when they behave in unintended ways. However the isomorphism between predicate calculus and computer programming isn't subjective. Programs really are proofs, just some of them are very sloppy.


Amazon's front page proves that by typing in the amazon.com url, you get a page of whatever content amazon generates. Which isn't a very useful proof.

Python is dynamically typed, so it doesn't prove much either. But in a statically typed language, if you have a method that takes an integer and returns a string, then you have proven you can generate a string from an integer. If it runs, then you have proven it. Granted, that's also not particularly useful, but it's proven if it compiles/runs.

To prove, you need types. Typeless proofs/programs are just really general, not-very-useful proofs, same with dynamic typing. That's why static typing gets you a little closer to the correspondence/isomorphism.

Dependent types get you closer still - like, if you have a method that will only compile if it takes an array of numbers and returns only the subset of those numbers that are even - then the correspondence is starting to get a bit useful. Then you know your output won't have numbers that weren't in the input.

I don't have much experience with this so I'm probably explaining it badly.
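
For what it's worth, the non-dependent version is easy to write down (my own trivial examples below); the dependent-types version needs something beyond plain Haskell, e.g. LiquidHaskell refinements or a language like Agda or Idris.

    -- The weak proposition "an Int implies a String", and one proof of it:
    intToString :: Int -> String
    intToString = show

    -- Plain Haskell can only approximate the "returns only the evens" spec:
    -- the type below says nothing about evenness, so that guarantee isn't checked.
    evensOnly :: [Int] -> [Int]
    evensOnly = filter even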


Write a program to prove another one. It's pretty cool in concept, but is it useful in practice?


This isn't necessarily about proving other programs. It is about showing that at a very fundamental level, formal logic (as studied by logicians) and typed programming languages (as studied by computer scientists) are actually the same thing, but written in different ways.

The basic idea is that every typed programming language is analogous to a mathematical logic system and that:

* a program's type is analogous to a logical statement (a theorem)

* a program's source code is equivalent to a proof of that theorem

* type-checking a program is equivalent to checking that a proof is correct

* executing a program is equivalent to simplifying a proof

If we restrict ourselves to the sort of type systems in mainstream languages the logic statements that the types can express are somewhat limited and not very interesting. The proofs as programs analogy really starts getting interesting when you have a very powerful type system...

One place where you see these powerful type systems is in the proof assistants for people who want to formalize mathematics. It turns out that writing proofs on these proof assistants is surprisingly similar to programming in a typed functional language! It is kind of interesting because all that you care about is that the program you write (the proof) type checks (is valid), and you don't actually ever run it.

Coming closer to the software development side of things, some of these proof assistants can be used to produce formally verified software. Basically, what they do is that instead of throwing away the programs (proofs) that you write, they compile them down to executable code.

This is not something that is very applicable to the general software population though since formal verification is a niche thing. It also doesn't help that these tools are currently super clunky to use and have a steep learning curve involving lots of difficult math.


Hum, admittedly, most of this is past my knowledge graph, since I know nothing of mathematical logic.

Are you saying the practical relevancy here is actually to logicians? In that it means they can now more easily come up with proofs for their theorems by simply writing a program instead of whatever more complicated paper proof they used to have to come up with?

And that for software, it's only helpful in special cases where you need to guarantee a small piece of logic is 100% correct?


TDD, formal verification

I'd say yes


I know TDD has gotten out of fashion, where people seem to acknowledge today that the right balance of tests is better, more practical, than a test-first and test-everything approach.

Formal verification hasn't really caught on yet. The extra effort needed to formally prove even a trivial program seems to be too high for the value it gives to most stakeholders.

I'm really hopeful for simpler methods of formalized proofs, or even automated testing. I think today, the practical cost is maybe still too high for people to care.


Formal verification will never catch on, because it's silly and prohibitively difficult for all but the simplest programs.

On the other hand, formal construction is an idea whose time is long overdue. Rather than asking if program text T satisfies predicate P, instead start with P and then construct a T mechanically such that it must satisfy P. The latter is a much easier task than the former. Sort of like how it's easy to know N is the product of two large primes if you just multiplied two large primes together to get it.


How would we specify P? I can't get away from the idea that specifying P is similar to constructing T.


Dijkstra and Scholten's Predicate Calculus and Program Semantics is a good textbook on the subject. Dijkstra's A Discipline of Programming is much more informal, but I find it more readable and it has a number of good examples.


This is probably difficult to explain without proper context, but could I ask you for a summary? (Thanks for the book references.)


Property-based testing can sometimes be a practical step in between unit testing and formal proofs. Scala is where this is most common/well-supported, I think: http://www.scalatest.org/user_guide/property_based_testing
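
As a hedged illustration in Haskell (QuickCheck is the library that inspired ScalaCheck and friends; the property here is a toy of my own): you state a property and the tool hunts for counterexamples, which sits somewhere between a handful of unit tests and a proof.

    import Test.QuickCheck

    -- Property: reversing twice is the identity on lists of Ints.
    prop_reverseInvolutive :: [Int] -> Bool
    prop_reverseInvolutive xs = reverse (reverse xs) == xs

    main :: IO ()
    main = quickCheck prop_reverseInvolutive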


shame about the misplaced apostrophes



