Relation Between Type Theory, Category Theory and Logic (ncatlab.org)
177 points by rndn on July 10, 2015 | 70 comments



Can someone give me an absolute layman explanation to homotopy type theory (and why it's so interesting)? I'm hearing about it everywhere, but most of the introductions to the subject assume the reader is already familiar with either type theory or category theory.


Typing systems (at least some of them) are also logics. This is called the Curry-Howard correspondence (CHC).

At first the CHC was made to work for propositional logic only, i.e. logic without for-all and existential quantification. Later typing systems were developed which correspond to logic with quantification. This requires dependent types. However, early typing systems with dependent types had an inelegant handling of equality. Logic needs equality to express things like forall x. x = 3 => x > 2. To overcome this problem, Per Martin-Loef introduced a dependent type-theory (MLTT) with "identity-types" which enables an elegant handling of equality.

MLTT and all other extant formalisations of mathematics still suffered from another problem, in that many mathematical constructs are 'morally' the same, but formally distinct. For example you can define a group to be a triple ( G, 1, * ) where G is a set, 1 the neutral element and * the binary operation, or you can define it as ( G, *, 1 ). The former and the latter are not the same thing, but we don't really want to say that they formalise different concepts. This is similar to how in many programming languages certain types are formally distinct but somehow capture the same content.

Homotopy type theory (HoTT) overcomes this problem (or parts of this problem) by adding a single axiom to MLTT, called the "univalence axiom" which can informally be rendered as:

   Things that are 'morally' the same, really are identical.
The key idea behind the univalence axiom is that MLTT (and hence HoTT) restricts the mathematical objects that can be constructed such that whenever you have objects O1 and O2 that are morally the same, then there is a function that transforms any construction involving O1 automatically into a construction involving O2 or vice versa, but in a truth preserving way.

It turns out that proofs in HoTT are formally similar to certain aspects of geometry/topology.

Much of current research on HoTT is about the consequences of the univalence axiom and the similarity between logic and geometry/topology.

The key hope in all this is that HoTT will streamline formalised mathematics.


Wonderfully accessible synopsis. One ever so _minor_ point. Did you really mean to say "The former and the latter are not the same thing" or "The former and the latter are not _formally_ the same thing" ???

Here's a fun fact for you, you might enjoy this. I picked up a copy of a 1943, second edition, hardback copy of Bertrand Russell's An Inquiry into Meaning and Truth. (In an antiquarian bookstore for €25, as one does... the colophon even states BOOK PRODUCTION WAR ECONOMY STANDARD)

Anyway, chapter XX is titled "The Law of The Excluded Middle" and in it Russell states, "[…] As everyone knows, Brouwer has challenged the law, and has done so on epistemological grounds. He, in common with many others, holds that “truth” can only be defined in terms of “verifiability”, which is obviously a concept belonging to the theory of knowledge. If he is right, it follows that the law of the excluded middle, and the law of contradiction also, belong to epistemology, and must be reconsidered in the light of whatever definition of truth and falsehood epistemology permits."

What strikes me about this (apart from the fact that I did not know previously that Russell tackled the question of constructivism/intuitionism so directly) is that he claims that everyone! knows this. Even if we are being charitable and take "everyone" to mean mathematicians, and perhaps logicians, maybe even philosophers, I still think it's quite a broad statement. It shows that the school of the logical positivists springs from Brouwer -- which firstly I was never taught, and which secondly does not appear in, for instance, Language, Truth, and Logic[1] by A.J. Ayer. Finally I can see how the law of double negation becomes an epistemological matter, but not the law of what he calls here contradiction (noncontradiction?).

He goes on to disprove the verifiability claim over many pages, and thus constructivism. There's a swipe at Wittgenstein further on still. It's a fascinating text. Russell grapples with mathematical and logical issues in a clear fashion using everyday language. Kind of like what you might get if you took the intro to HoTT and lengthened it and made it more philosophical.

[1] https://archive.org/stream/AlfredAyer/LanguageTruthAndLogic_...

ps: is there markup for blockquote on HN?

EDIT: clarity, markup, prose style


Wow. Replying to myself. What is brought into question is the status of Proof by Contradiction, not the status of the Law of Noncontradiction. Russell seems to have confused these two things. And I don't blame him. It was 1940 after all, and there was a war on.

Another interesting fact. The book is built up from accumulated material, finally delivered as a course of lectures at Harvard University in a series called the William James Lectures in 1940, when England was at war but the USA was not.


Just reading your comment, I was thinking about the closed-world assumption in logic (Prolog); perhaps there is some middle concept between proof by contradiction and the closed-world assumption. In the real world we learn new concepts (think about relativity or a non-planar world) and then the old question is no longer a two-valued alternative.

Perhaps something like: in this field a new conception could appear that would invalidate what we consider today to be only two alternatives. It's not about many-worlds reality or quantum computation (3 minutes thinking time out).


We have something between the two in HoTT: the universe of types is stratified by homotopy levels, corresponding to how many dimensions of structure a type has. A space with only points is thus a 0-type, a space with at most one point is a (-1)-type, and a space with exactly one point is a (-2)-type.

The catch is that univalence is inconsistent with LEM at h-levels greater than -1, but assuming LEM for (-1)-types is perfectly consistent; these can be thought of as the "at most true" propositions of classical logic.


This is out of my depth but the work of the physicists David Bohm and Basil Hiley (an article with him was recently linked to HN) seems to point in this direction? See, for instance, http://www.amazon.com/The-Undivided-Universe-Ontological-Int...

But like I say, out of my depth! :)


Another addendum to the discussion, from an angle more relevant to computer programmers: if we find a computational meaning for the univalence axiom, then it'll be a crazy nuclear weapon of generic programming and code generation.

The univalence axiom says that two types are equal if we can convert their values back and forth (without loss of information). For example, one Boolean type may be defined as an enum containing True and False. But we could rename the values to Foo and Bar and it would be still a fine implementation, because it's easy to convert between the two representations. We can imagine more far-fetched representations too, with more complicated conversion functions.

But in type theory, if two types are equal, then we can freely substitute one for another in any context! For example, if we have a function "Bool -> Bool", and we know that Bool is equal to Bool2, then we can coerce the function to type "Bool2 -> Bool2".
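
A hedged sketch of that conversion and hand-written coercion in Haskell (Bool2, toBool2, fromBool2 and coerceFn are just illustrative names; this is exactly the boilerplate that univalence is supposed to make unnecessary):

    data Bool2 = Foo | Bar   -- same information content as Bool, different names

    toBool2 :: Bool -> Bool2
    toBool2 True  = Foo
    toBool2 False = Bar

    fromBool2 :: Bool2 -> Bool
    fromBool2 Foo = True
    fromBool2 Bar = False

    -- Coercing a function along the back-and-forth conversion, by hand:
    coerceFn :: (Bool -> Bool) -> (Bool2 -> Bool2)
    coerceFn f = toBool2 . f . fromBool2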

Imagine a complex program that uses a complex data type. If we can provide a correct conversion function to another type (i.e. prove that the type is equivalent to another one), then the univalence axiom can take the conversion function and our program as input, and spit out a new program that uses the new data type and preserves all properties of the original program. By our current standards of generic programming, this is just insane!

Now, the issue is that currently we don't actually have a computational interpretation for the univalence axiom, so the above magic doesn't yet work. But there is meaningful progress towards that, and there are already some experimental implementations that can do some magic, for example this one:

https://github.com/mortberg/cubicaltt


What is the state of the art of the research on the computational interpretation of univalence? What's working, what's not working? I seem to remember that A Jeffrey said not too long ago that he had a working interpretation.


A lot of work was going into the cubical model, but IIRC they realized it was a dead end about a month and a half ago. Right now the most promising work looks to be formalizing the set theoretic model in NuPRL.


Could you give pointers wrt cubical model being a dead end? As yet I've heard nothing about this.



When you think about it, we do have a computational meaning and interpretation for the univalence axiom. Your very comment was such an exposition!

What we perhaps don't have is an automated implementation of your exposition. What you're talking about is typecasting, where all the terms of one type map to all the terms of another type, and once shown this, the machine could perform type substitution, kind of like how type inference works now. (And maybe the machine wouldn't always have to be shown??) Neat idea. Automatic type inference is pretty powerful. Automatic type substitution could be just as powerful perhaps.


I must be missing something.

A proof that Bool and Bool2 are equal is a bijection f :: Bool -> Bool2. So given any g :: Bool -> Bool we have (f . g . f^-1) :: Bool2 -> Bool2. So it seems trivial.


A more interesting example: substitute natural numbers with binary trees. After all, they're equivalent. Now univalence would have to implement the binary tree analogue for every function on naturals, in a way that preserves semantics with respect to our conversion function.


Yes, that's pretty much it. Doing this systematically for all types including dependent types that quantify over Bool, as well as for user defined higher inductive types is another matter.


In C, you can convert back and forth between void * and some integer type, aliased as intptr_t since C99, so they must be equal under the univalence axiom.

Ah, "nuclear weapon" ... gotcha!


> The univalence axiom says that two types are equal if we can convert their values back and forth (without loss of information).

So strings are equivalent to everything, and vice-versa.

> Now, the issue is that currently we don't actually have a computational interpretation for the univalence axiom

JSON? Sexps?


Doesn't the halting problem pose an obstacle in deciding whether two types act in an equivalent way (similar to how the equivalence between two programs is undecidable)?


No. Homotopy Type Theory is a total language, so even type-level programming simply has to terminate, or the proof-checker won't accept it.

Proving that nontrivially recursive programs terminate can be a bugger.


You do not in general decide whether two types are equivalent. You prove it...


Why would it do that? Univalence is unrelated to the halting problem.

What the univalence axiom says is that you can treat types you have proven isomorphic as equal.


In the comments of a recent mathematics-without-apologies blog post[1], there was a back-and-forth between Jacob Lurie and Urs Schreiber (the author of the OP article on nLab). I've been trying to parse the material on nLab for years, good to know I'm not the only one who seems to have trouble with it. A refreshingly blunt discussion, not to be missed!

1. https://mathematicswithoutapologies.wordpress.com/2015/05/13...


I'll try to keep it short and skip the details of the mathematical framework itself.

Back in the early 1900's there was a school of mathematical logic that denied a certain axiom called the law of the excluded middle, that !!p <=> p. They said that if you used this axiom (roughly) your proof was not "constructive," whereas if you avoided it your proof was "constructive." They went as far as to deny all math that was not constructive (which I think is silly).

Their ideas faded out of popularity among mainstream mathematicians, but their goal was to rewrite the foundations of all mathematics using only constructive proofs, and to their credit they got pretty damn far in certain subjects like real analysis (calculus).

Today, their ideas are seeing a resurgence in the homotopy type theorists (HTTs). Their goal is also to rewrite all of mathematics in a constructive way, but now they have the hindsight of these great tools like category theory and homology and computers and all these great things developed in the mid to late 20th century. So they are working on building this framework and seeing what it can prove.

The HTTs are also claiming that proofs in their framework can be logically checked by a computer (because it is constructive) and one of their main selling points is that their work will eventually lead to "computer assisted mathematics," not in the sense of Stephen Wolfram but rather in the sense that a computer will check your proofs as you prove them and find stupid mistakes.

I personally am not all that excited about HTT, due to what I see as misguided hype around it. My preferred way to view it is as logicians studying a really fascinating new logical system and seeing how far they can push it. Saying things like "I deny all mathematics that's not constructive" and "computers will start doing our proofs for us," makes me cringe. But it is what it is :)


j2kun's characterisation of HoTT is misleading.

    Their goal is also to rewrite all of mathematics in a constructive way,
This is not the goal of HoTT, and also not possible as some mathematics is intrinsically non-constructive.

HoTT allows non-constructive reasoning, see section 3.4 "Classical vs. intuitionistic logic" of the HoTT book http://homotopytypetheory.org/book/. You can say that HoTT derives non-constructive mathematics on top of constructive foundations.

   The HTTs are also claiming that proofs in their framework can be logically checked by a computer (because it is constructive) 
No. Whether a proof can be logically checked by a computer has nothing to do with whether it is constructive or not. A proof is just a syntactic object. It's just as easy to check if a proof step uses excluded middle or double negation as it is to check whether it uses a constructive principle like /\-introduction. There are many proof assistants that work with classical logic, e.g. Mizar, HOL, HOL Light, Isabelle/HOL ... All of SAT-solving works classically.

The novelty of HoTT, and its only extension over intensional Martin-Loef type theory, is the univalence axiom.


A proof in mathematics is not purely syntactic because it's not purely logical. For instance, in some steps of a proof, we do something to both sides of an equation and argue that it is still balanced, based on rules that are outside of logic. Such a step can have a nonconstructive element.


    A proof in mathematics is not purely syntactic because it's not purely logical.
This is confusing constructive reasoning with the decision problem for proof validity. They are different things.

The very point of logic, constructive or otherwise, or at least one of the key points for having a logic is to be able to decide whether a given proof object is indeed a valid proof. Valid proofs of course typically also make use of non-logical axioms of the ambient theory. If proof-hood is not decidable, it's not a logic.

Reasoning is constructive if it avoids certain proof principles like LEM (law of excluded middle) or double negation.


I seem to have the impression that constructivism in math is not the same thing as constructive logic, though they are related.

Constructivism in mathematics:

https://en.wikipedia.org/wiki/Constructivism_%28mathematics%...

Constructive (intuitionistic) logic:

https://en.wikipedia.org/wiki/Intuitionistic_logic

Surely, one can be a constructivist mathematician, yet not eschew double negatives?


No.

Classically, LEM and double negation are equivalent.

Intuitionistically the situation is more complicated, see the discussion in this thread by IngoBlechschmid: we can prove that LEM implies double negation, but not the other way around. That is, ⊢ (A ∨ ¬A) ⊃ (¬¬A ⊃ A) holds intuitionistically, but ⊢ (¬¬A ⊃ A) ⊃ (A ∨ ¬A) does not.
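
Via Curry-Howard, the provable direction corresponds to a small program; here's a hedged sketch in Haskell, with Void standing for falsity and a function into Void for negation (the names Not and lemToDNE are just mine):

    import Data.Void (Void, absurd)

    type Not a = a -> Void

    -- Excluded middle for p yields double-negation elimination for p:
    lemToDNE :: Either p (Not p) -> (Not (Not p) -> p)
    lemToDNE (Left p)   _  = p
    lemToDNE (Right np) nn = absurd (nn np)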

For these and related reasons, double negation is not considered constructively valid.


Your comment is a good first approximation to the motivation for HoTT! However, let me add a couple of remarks.

Firstly, HoTT can also be used with classical logic, i.e. where you freely use the law of excluded middle. This does not impact the ability to check proofs with the computer in any way.

Secondly, HoTT is the first foundation with which computer-formalized proofs of nontrivial theorems in a subject called "homotopy theory" are possible. In theory, this is possible with any foundational theory; but in practice, the necessary encodings to, for instance, Zermelo--Fraenkel set theory, are too involved for computer formalization.

In HoTT, one can define (and reason about) the basic concepts of homotopy theory straight from "day one". In contrast, with a more traditional foundation, one needs to set up a host of intermediate concepts first (like the real numbers, topological spaces, homotopies, ...). This is no problem for a trained mathematician, but becomes a nuisance when working with proof assistants.

Thirdly, one intriguing aspect of HoTT is that it might be possible to "run" proofs (this is the objective of current research). For example, if you prove that some group contains finitely many elements, the vision is that you could run this proof to find out how many elements it contains. This "computational interpretation" is not unique to HoTT; in fact, this is what the linked nLab article is about. But because of the second point, many more theorems are amenable to such an approach.

A last remark, not pertaining to HoTT but constructive mathematics. A simple reason why one might be interested in restricting oneself to constructive arguments is that thereby they are applicable to more general situations. There is a metatheorem to the effect that "a statement with a constructive proof holds in any topos", where "topos" is a precise formulation of "alternative mathematical universe". Such universes are routinely used in several subjects of mathematics, for instance algebraic geometry (after Grothendieck's revolution).

There are also other reasons for why it's fun to argue constructively. See for instance these slides: https://github.com/iblech/talk-constructive-mathematics/raw/... (PDF pages 8ff.)


Very nice descriptions.

While constructive reasoning has many advantages, it has one disadvantage that is rarely pointed out, but sometimes causes real trouble.

Let's say we have a formula A which is classically and constructively true. It may be the case that constructive proofs of A are longer and more complicated than classical proofs of A. If you are working with an interactive proof assistant, this may be the difference between the proof automation being able to deal with A automatically or not.

This explosion in proof size is the reason why the (perfectly constructive) nominal techniques of Pitts et al have been implemented to a high standard in Isabelle/HOL and are used frequently there, but not really in Coq.


Careful!

Here is LEM: ⊢. ~p ∨ p

(one of either p or its negation must be true -- and thus the other false -- with no third (truth) value allowed); hence it is also called the principle of bivalence.

What you wrote is the principle of double negation.

In general: (~p ∨ p) ⇒ (¬¬p ⇔ p) but not the other way around, if I am not mistaken? Please correct me if I am wrong.


You are right that for a particular formula p, the implication (~p ∨ p) ⇒ (¬¬p ⇔ p) is an intuitionistic tautology and that the reverse implication is not (in general).

However the situation is different when you quantify over all formulas. If you have (¬¬p ⇔ p) for _any_ formula p, you also have (~q ∨ q) for any formula q (written with a different letter to not get lost in variable capturing issues).

This is because, for any formula q, we have the intuitionistic tautology ~~(~q v q). (Proving this is a fun exercise. If you get stuck, see page 26 of https://github.com/iblech/talk-constructive-mathematics/raw/... or elsewhere on the Internet.) So, if you assume the principle of double negation for all formulas, then in particular you assume that, for any formula q, the implication ~~(~q v q) ⇒ (~q v q) holds. Since the antecedent is unconditionally valid, the consequent is as well. Thus you have the law of excluded middle for q.
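
Via Curry-Howard that tautology even has a tiny programmatic witness; a hedged sketch in Haskell, again reading Void as falsity and a function into Void as negation (Not and nnLEM are just illustrative names):

    import Data.Void (Void)

    type Not a = a -> Void

    -- A constructive proof of ~~(~q v q): from any refutation of (~q v q), derive absurdity.
    nnLEM :: Not (Not (Either q (Not q)))
    nnLEM k = k (Right (\q -> k (Left q)))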

I highly recommend Andrej Bauer's blog for such matters. The post http://math.andrej.com/2008/08/13/intuitionistic-mathematics... is a good starting point.


You melted my brain. What does “quantify over all formulas” mean?

From the way you state it, that's an interesting perspective. But it seems to be from a perspective of totality which I'm wary of. Not saying you're wrong, going to check out those links. Andrej Bauer was very kind to answer an out-of-the-blue email query of mine once :)

Seriously appreciative of your response, a thousand thanks!

EDIT: Ok, I've read your paper/slides. Very interesting. Don't you show that for specific constructions you can make LEM work for classical proofs with continuation monads? How does that show that, in general in constructive mathematics, the implication (~p ∨ p) ⇒ (¬¬p ⇔ p) holds but not its converse? Also, your bit about rewinding time makes complete sense imho because classical logic is atemporal and continuation monads inject the idea of flowing time into those proofs in a controlled manner.

EDIT 2: I _highly_ recommend Dennis E. Hesseling's Gnomes in the Fog: The Reception of Brouwer's Intuitionism in the 1920s, which I think gives a different slant on what Brouwer actually claimed than what you might imagine if you go with Andrej Bauer's interpretation. I can shoot you a copy if you can't lay your hands on it online :)


Sorry, didn't intend to use so much jargon. By "quantifying over all formulas" I just meant that one has to distinguish between the following two statements:

1. The law of excluded middle holds for a particular formula p.

2. The law of excluded middle holds for _all_ formulas p.

Also note that "formula" just means "formal logical statement", i.e. structured string consisting of v, ==>, and so on.

What do you mean by "perspective of totality"?

Regarding your first edit: There is indeed a metatheorem which says "any classical proof can be turned into a constructive proof". The catch is that this does _not_ apply to all formulas (of course, else the discussion would be moot), but only to formulas of a special kind. Not many formulas are of this kind, but for instance "There exist infinitely many prime numbers" and "A particular Turing machine halts" are (when suitably formulated). If you want to lookup the details, look for "Friedman's Trick".

That the implication (~p ∨ p) ⇒ (¬¬p ⇔ p) holds is because there is a direct proof of this. I didn't show (here or in the slides) that the reverse implication is not intuitionistically valid. A simple way to do this would be to proceed along your lines, that is construct a Heyting algebra in which (¬¬p ⇔ p) holds for a particular element p but in which (~p v p) does not hold for this p. A fancier way would be to look at toposes arising "in nature", not specifically constructed for giving a counterexample.

Thanks for the book recommendation! I'll check it out. I won't have Internet over the weekend but I'd be glad to continue the discussion later.


I too would be glad, it's been enlightening already. We have common interests it seems :)

By structured string you mean well-formed formula I guess.

"perspective of totality" means taking all `things' together at once (if that were possible)



In other words, what you wrote is that every value when negated twice returns to itself. Here's a truth table of 3 values for the negation operator:

     v | 0 | 1 | 2
    --------------
    ¬v | 1 | 0 | 2
So ¬¬0 ⇔ 0 and ¬¬1 ⇔ 1 and ¬¬2 ⇔ 2 which satisfies the principle of double negation but not LEM. Correct?


Your counterexample could work, but it is not obvious that it does so. This is because you also have to give definitions of ∧, v, and ==>, and then check that your definitions satisfy the axioms of intuitionistic logic.

As stated, your example only shows that one can define a function f on a set which contains more than two elements such that f . f = id. But this by itself is a trivial statement.


Once those definitions are given then my trivial example holds though, correct?

Which are you saying; that it is, or, it is not possible to give those definitions?


I didn't check it. If you want to do it (the key step being to verify that the axioms of intuitionistic logic hold for your definitions of AND and OR), you can check Wikipedia for the list of axioms to verify: https://en.wikipedia.org/wiki/Heyting_algebra


If you limit yourself to only constructive proofs, do you even get the real numbers? I'm not very clear on the definition of "constructive," but given that almost all real numbers are non-computable (i.e. most likely, the set of all real numbers you have ever encountered outside of computability theory has the same cardinality as the integers), it doesn't feel very constructive to me. I've always found it odd that we learn early on in discrete mathematics about how the cardinality of the reals is larger than that of the integers and rationals, and yet it seems like you'll never deal with anything from this larger set of reals unless you're studying computability.


Rest assured that (contrary to j2kun's misleading claims) if some of the world's top mathematicians and computer scientists propose a new foundation of mathematics, they don't forget real numbers.

You need to distinguish between non-computable and non-constructive. The proof that the cardinality of the reals is non-countable is perfectly constructive, see [1] for a discussion of these and related issues.

[1] https://en.wikipedia.org/wiki/Cantor%27s_first_uncountabilit...


Eh? Your very own link says that there is debate within the mathematical community about whether Cantor's uncountability proof is constructive or not.

I just realised that in my mind uncountable (not in a one-to-one correspondence with the natural numbers) means non-constructive. For me, countability just says that we have a (constructive) method for generating, in a non-repeating and unbounded fashion, a sequence that would contain all the terms (if it were ever to terminate, which it doesn't) and no term outside the set.

Clearly if some sequence is uncountable it means that one of these criteria fails to hold. Which one? Not the non-repeating stipulation. Not the no term outside the sequence stipulation. It must be the stipulation that every term will be encountered during the generation. So there isn't a (constructive) method for generating uncountable sequences. The reals are uncountable, ergo there is no constructive method for generating the reals.

Where am I going wrong?

NB: I am not saying that you can't have a method for constructing individual reals like e and pi and so on.


Even though there isn't a constructive method of generating uncountable sequences, that doesn't mean that this is also assumed from within the logic. In conventional mathematics you can also only write down countably many sequences, but the space of all sequences is still uncountable. We don't assume that the set of all sequences is the set of sequences which we can actually define. Similarly in constructive mathematics we don't say that ALL sequences are generated by some computer program, just that YOU are only allowed to generate sequences in that way.

So this doesn't contradict in any way that Cantor's proof is constructive (which contrary to that wikipedia article, it most definitely is).
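
For what it's worth, the constructive core of the diagonal argument fits in a line or two; a hedged Haskell sketch, representing a binary sequence as a function from positions to bits:

    -- Given any enumeration of binary sequences, produce a sequence that
    -- differs from the n-th enumerated sequence at position n.
    diagonal :: (Integer -> (Integer -> Bool)) -> (Integer -> Bool)
    diagonal enum n = not (enum n n)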


Exactly. Here is an implementation of Cantor's diagonal method in Agda which is entirely constructive: http://www.playingwithpointers.com/agda-cantor.html


Yes, of course. I was coming dangerously close to conflating the fact of a sequence being uncountable with the proof that the sequence is uncountable being constructive. In other words, having no constructive method for generating the reals is not at all the same thing, of course, as having a proof that the reals are uncountable. In fact having a constructive proof that the reals are uncountable indicates to us that there is no constructive method for generating them! Oof. You have to be so careful with this stuff :)


I'm familiar with the diagonal argument, but the version I learned in Discrete Math courses defines the reals as little more than a (potentially infinite) series of digits with a decimal point somewhere, which I thought didn't hold up to much scrutiny. But this stuff is way above my mathematics pay grade. :) I stumbled onto the Wikipedia article "Construction of the real numbers" and I have no clue what is going on.

https://en.wikipedia.org/wiki/Construction_of_the_real_numbe...


Check out this Scientific American article[1], especially the video of a presentation given by Voevodsky.

Edit: Here's a more direct page[2] if you just want the video.

[1] http://blogs.scientificamerican.com/guest-blog/voevodskye280...

[2] http://www.heidelberg-laureate-forum.org/blog/video/lecture-...


Homotopy type theory formalizes the informal reasoning style in which things that have "the same structure" are considered equal, in a constructive type theory setting. Another achievement is the formulation of synthetic homotopy theory, i.e. axiomatic rather than based on the usual geometric/topological definitions.


I don't think synthetic homotopy theory is known to cover all of conventional homotopy theory, so it's work in progress.


First the type theory part of homotopy type theory. Mathematicians have been doing logic for a long time, but mathematical statements and proofs were not themselves mathematical objects. They were just human-language text with some math notation thrown in. In the early 20th century Brouwer, Heyting and Kolmogorov came up with a cool idea: treat proofs as first-class mathematical objects. Then giving a proof is equivalent to constructing a mathematical object.

* A proof of `P and Q` is a pair of proofs (a,b) where a is a proof of P and b is a proof of Q

* A proof of `P implies Q` is a function that takes a proof of P and returns a proof of Q

* A proof of `P or Q` is a pair (x,c) where either x=0 and c is a proof of P or x=1 and c is a proof of Q

In modern terms; to give a proof of `P and Q` is to give a value of the pair type `(P, Q)`. To give a proof of `P or Q` is to give a value of the disjoint sum type `P + Q`. To give a proof of `P implies Q` is to give a value of the function type `P -> Q`.

Take the theorem `(A and B) implies (B and A)`. A proof of this is, in Haskell notation:

    proof :: (A,B) -> (B,A)
    proof (a,b) = (b,a)
So a proof of `(A and B) implies (B and A)` is a function that swaps the order of a pair.
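
Similarly, a proof of `(A or B) implies (B or A)` would be a function on the disjoint sum type; a rough sketch, using Haskell's Either for the sum:

    proofOr :: Either a b -> Either b a
    proofOr (Left a)  = Right a
    proofOr (Right b) = Left b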

An interesting property of this way of doing things is that statements are not merely true or false, but statements can in general be true in more than one way. For example the statement `(A and A) implies A` has two different proofs:

    proof1 :: (A,A) -> A
    proof1 (a1,a2) = a1
 
    proof2 :: (A,A) -> A
    proof2 (a1,a2) = a2
Here proof1 is a function that returns the first component of a pair, and proof2 is a function that returns the second component.

I'll skip over dependent types.

That's the type theory part. Now the homotopy part. For a long time there was a problem with type theory. There was no really good way to do equality in it. To do math you want to use equality all the time. Since we've defined what the proofs of `P and Q`, `P or Q`, `P implies Q` are, what are the proofs of `x = y`? There has existed some notion of equality in type theory for a long time, but it wasn't satisfactory for various reasons. You couldn't prove two functions equal even though they gave the same outputs for the same inputs. Homotopy type theory fixes equality in type theory. The idea is that a proof of `x = y` depends on the type of x and y.

* Two pairs c,d are equal if their first components are equal and their second components are equal.

* Two functions f,g are equal if for all x, f x is equal to g x.

* etc.

More precisely, a proof of equality on the type (A,B) is a pair of a proof of equality on A and a proof of equality on B. Just like previously a statement can be true in more than one way (have more than one proof), things can now also be equal in more than one way (have more than one proof of equality).
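
To make that concrete in the same style as the id_unordered_pair example further down, a rough sketch of the equality type on pairs (where id_A and id_B stand for the equality types of the components) might be written:

    id_pair : (A,B) -> (A,B) -> Type
    id_pair (a,b) (c,d) = (id_A a c) * (id_B b d)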

Another extension that homotopy type theory makes is with user-defined types. When you define a type you are allowed to define its type of equality proofs at the same time. This allows you to define things like integers modulo k ((i mod k) is equal to (i+k mod k)), unordered sets (a set S is equal to any permutation of S), and more.

I've brushed over a lot of details and presented only a very narrow way of looking at type theory:

1. A type itself is a value.

2. Full dependent types.

3. When are two types equal?

4. When are two proofs of equality equal?

5. What exactly is a function, what is a pair, etc.? This leads us into category theory, which gives an abstract description of functions and pairs.

Hopefully this still made some amount of sense.

Two line summary:

1. Type theory: using conventional mathematical objects like pairs and functions as proof objects.

2. Homotopy type theory: extends type theory with proofs of equality in a proper way.


It is wrong to say that its key novelty is that "Homotopy type theory: extends type theory with proofs of equality in a proper way". This was done already with Martin Lof's type theory, as used in e.g. Agda. Homotopy type theory adds univalence!


I wouldn't call that a proper treatment of equality. It is unsatisfactory in several ways: no proper equality on functions (function extensionality), no proper equality on types (univalence), no quotient types (~= HITs), and since refl is the only proof of equality it doesn't even make much sense to talk about equality on equalities. More fundamentally the idea that refl is the only proof of equality just doesn't sit well in the philosophy of type theory. The whole idea of type theory is that things can have more than one proof and are not 'just true' as in conventional mathematics.


This discussion goes way beyond an "absolute layman explanation to homotopy type theory" ...

Anyway, I disagree that the "whole idea of type theory is that things can have more than one proof". UIP is about identity types and their proofs. Identity types are very special types. It is far from obvious that they should be inhabited by more than one proof. Indeed, it was a surprising breakthrough when Streicher/Hofmann discovered that UIP does not hold in general and new axioms like Streicher's K or Univalence needed to be formulated to deal with this question.


It's true that it was a discovery that Martin Lof identity types could have multiple inhabitants, but it's a general principle of type theory that types can have multiple inhabitants (unlike in set theory, where propositions are just true). That this also applies to identity types in HoTT is a sign that something is going right.

Furthermore, from another perspective it's "obvious" that identity types should have multiple inhabitants. There were two approaches to equality in type theory. One is Martin Lof identity types, which follow naturally from Leibniz' principle that if A and B are equal then they are indistinguishable. The other is defining equality for each type on a case by case basis (pairs are equal if their components are equal, functions are equal if they give equal outputs, etc.), like in section 2.4 Equality of this paper: http://www.andres-loeh.de/PiSigma/PiSigma.pdf

In the latter approach it's obvious that equality has multiple inhabitants, because it's defined that way. For example we can define equality of unordered pairs as the type:

    id_unordered_pair : (A,A) -> (A,A) -> Type
    id_unordered_pair (a,b) (c,d) = (id_A a c)*(id_A b d) + (id_A a d)*(id_A b c)
i.e. two unordered pairs (a,b) and (c,d) are equal if a=c and b=d or a=d and b=c.

This has two inhabitants even if id_A has only a single inhabitant. This approach naturally accommodates quotient types. The problem with it is that the type system has no idea that this is equality. It's just another definition. The point of equality is that all functions respect equality (equal inputs to equal outputs). Not all functions respect equality on these unordered pairs (e.g. a function that extracts the first component).

This approach to equality has the converse of Leibniz' rule: we can define equality so that if two things are indistinguishable then they are equal.

I think a good view of HoTT is as unifying these two approaches to equality. We get both Leibniz' principle and its converse, we get user-defined equality types whose equality is respected by all functions, and we get multiple inhabitants for equality types.


Something I looked for on this very intriguing site was a reconciliation in my mind between dependent types and higher kinded types. This has been a nagging question since the announcement of Scala moving to the DOT calculus[1].

While this page[2] helped a bit, doing more research unveiled an example put out by Runar here[3].

Has anyone else considered a reasonable method for addressing what higher-kinded types provide currently in a DOT calculus, especially as it pertains to "matching" the type parameters in a higher-kinded type?

1 - http://lampwww.epfl.ch/~amin/dot/fool.pdf

2 - http://ncatlab.org/nlab/show/dependent+type+theory

3 - https://gist.github.com/runarorama/33986541f0f1ddf4a3c7


   reconciliation in my mind between dependent types and higher kinded types. 
They are orthogonal concepts. This is made very clear in Barendregt's λ-cube [1]. Orthogonal here means that a typing system might be higher-kinded without allowing type-dependency, or it might allow type-dependency without having higher kinds. An example of the latter is LF, the Logical Framework of Harper et al. Haskell, or at least some forms of Haskell, is an example of the former.

Higher-kinded types simply allow functions and function application at the type level which can take functions as arguments and can return functions. For example (lambda x.x => bool) and (lambda xy. x => y) are functions at the type level.
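
In Haskell, such type-level functions show up as higher-kinded type parameters. A small illustrative sketch (Compose is written out by hand here rather than imported from base):

    -- f and g are type-level functions of kind * -> *; Compose applies one after the other.
    newtype Compose f g a = Compose { getCompose :: f (g a) }

    -- e.g. Compose Maybe [] Int wraps a value of type Maybe [Int]:
    example :: Compose Maybe [] Int
    example = Compose (Just [1, 2, 3])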

An example of a dependent type is List(2+4)[bool], the type of lists of length 6 carrying booleans. Here (2+4) is a program. This parameterisation happens without having type-functions.
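
Haskell cannot express this directly, since the index has to be type-level data rather than an arbitrary program, but a rough approximation of length-indexed lists (with the usual GHC extensions) looks like:

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    -- Type-level naturals used as the length index
    data Nat = Z | S Nat

    -- Vec n a: lists containing exactly n elements of type a
    data Vec (n :: Nat) a where
      VNil  :: Vec 'Z a
      VCons :: a -> Vec n a -> Vec ('S n) a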

[1] https://en.wikipedia.org/wiki/Lambda_cube


Also, see some of Robert Harper's lectures on the same subject: https://www.youtube.com/watch?v=9SnefrwBIDc&list=PLGCr8P_Ync...


See also Bob Harper's blog for a more leisurely exposition:

https://existentialtype.wordpress.com/2011/03/27/the-holy-tr...


"Imagine a world in which logic, programming, and mathematics are unified, in which every proof corresponds to a program, every program to a mapping, every mapping to a proof!"

The thing I never understood about statements of this sort is that, in my understanding of model theory, Godel's theorem and so forth, a proof is a rare thing. Most of the true statements in a given model don't have proofs. Any consistent proof system admits an uncountable set of independent theorems and so forth.


While Goedel's first incompleteness theorem indeed shows that there will always be true statements that don't follow from a given set of (computable) axioms, this is almost never a problem in practice. It is hard to find natural examples of such statements. Almost every mathematical statement (or its negation) you or I can come up with is a consequence of the axioms of ZFC set theory, or whatever other foundation you prefer.


I think this doesn't arise in normal mathematical practice because mathematicians want natural, sane axioms that lead to human-understandable models.

Computing already starts out much messier in its activities. Determining what will happen when a large system gets input is tricky.

I'm not sure what could be proved about the operations of a "deep neural net" for example.


Bob is coming from the perspective of Brouwerian Intuitionism, and so he is not really that interested in a closed "formal system". So, "proofs" in the setting that Bob cares about are not derivations in a formal system, so Gödel's result doesn't really apply.


I'm pretty sure if you are talking about proofs having to do with a determined computational system, Godel applies.

I mean, the unsolvability of the halting problem certainly applies and this gives a somewhat similar picture of the world.


In Bob's setting, the computation system is not determined or fixed. It is open-ended...

In intuitionistic mathematics, we of course accept that there is no Turing computable halting oracle, but we do not rule out the possibility that there is some other effective oracle that can decide halting. (This is in contrast to recursive mathematics / Russian constructivism)


Unfortunately, that isn't a world in which you could connect a USB camera to your laptop and just have it work after the system finds the right driver.


Most true statements are uninteresting.


Indeed but so are most programs.


[flagged]


Please don't post unsubstantive comments to Hacker News.



