
We shouldn't, that's the point! At least not in mathematics; programming is a different story. It's perfectly fine to have a type of all integers alongside a type of all squares, and types that contain only the number 1 or only the number 1729.
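For instance, in Lean-like syntax such types might be written as subtypes (a rough sketch; the names are just for illustration):

    -- a type of all squares, and types containing only 1 or only 1729
    def Squares : Type := { n : Nat // ∃ m, n = m * m }
    def JustOne : Type := { n : Nat // n = 1 }
    def JustTaxicab : Type := { n : Nat // n = 1729 }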

Relations between these types will then reflect the relations between their respective proofs. There is no need to consider "proof equivalence" or other kinds of proof properties; that's already accomplished by studying the types themselves.

The choice of types already reflects what we want to study.




I would think that asking whether two proofs are equivalent would be analogous to asking “do these two expressions of type integer evaluate to the same value?”


Hm. Take, for example, formalization in the Calculus of Constructions (Coq, Lean). What you propose is essentially beta-equivalence of the underlying lambda calculus terms.

However, this is not without problems (undecidability aside). Different CoC formalizations can use, for example, different LC representations of the natural numbers (Church numerals, binary, ...) or of other basic types, such as pairs.
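To make that concrete, here is a rough sketch in Lean of three representations of the number 2 that are not interchangeable as terms (the Bin type and all names here are made up for illustration):

    -- Church numeral 2: a function iterating its argument twice
    def churchTwo {α : Type} : (α → α) → α → α := fun f x => f (f x)

    -- unary 2: the built-in Nat
    def unaryTwo : Nat := 2

    -- binary 2, over a hypothetical binary-naturals type
    inductive Bin where
      | zero : Bin
      | bit0 : Bin → Bin   -- 2*n
      | bit1 : Bin → Bin   -- 2*n + 1

    def binTwo : Bin := Bin.bit0 (Bin.bit1 Bin.zero)  -- 2*(2*0 + 1) = 2

All three "mean" 2, but they are not beta-equivalent terms (they don't even share a type), so proofs built over one representation are literally different terms from proofs built over another.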

Does that mean that a proof using Church numerals and a proof using binary numerals are different? Intuitively, I don't think so, yet they are according to the above definition.

Another example: one can embed classical propositional logic into intuitionistic logic using a translation of LC terms (basically a continuation-passing-style transform). Is the translated proof the same or different? It operates on different terms and is thus different as a program.
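As a tiny Lean sketch of that flavor of translation (the theorem name is made up): classically, P ∨ ¬P is just an axiom, while intuitionistically only its double negation is provable, and the proof term explicitly juggles a continuation k:

    theorem not_not_em (P : Prop) : ¬¬(P ∨ ¬P) :=
      fun k => k (Or.inr (fun p => k (Or.inl p)))

As a program this does something quite different from simply invoking excluded middle, even though it "proves" the translated statement.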

I think the fundamental problem is the idea that there is some universal and absolute equivalence relation between proofs. Rejecting that leads to accepting equivalence relative to the type of the proposition. (I think you can get some absoluteness back using the univalence axiom, but I don't fully understand that.)


It isn’t clear to me why the encoding of the natural numbers would come up. I would have thought that some statement about the natural numbers would involve some type, considered as the type of natural numbers, which I suppose if you construct it from other simpler primitives would have some kind of encoding. But if the proof uses terms of that type only through the constructors etc. associated with that type, you wouldn’t have multiple proofs which differ only by the encoding while belonging to the same type (proposition), as the encoding would just be whatever encoding is used for the type of integers being used.

It seems likely that I’ve misunderstood what you meant.

Still, if you have two different natural numbers objects, there’s an isomorphism between them… I guess one could use this to translate between proofs for one and proofs for the other, but I don’t know if translating and then translating back would yield the original proof.

I wouldn’t be surprised if by default it doesn’t.

It wouldn’t surprise me if there is a way to “make it the same” while still keeping other proofs distinct, though.

I suppose one could make some sort of quotient type?

Also, for (intuitionistic) existential statements, the proof can almost be seen as just the thing claimed to exist, along with evidence that it has the property claimed of it.

So, if you say “for all x:X, there exists y:Y, such that P(x,y)”, then two proofs might differ by giving two different functions x:X -> (y:Y, p:P(x,y)), which could be regarded as different if the X->Y part is different, without even considering whether the p part differs.
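Here is a minimal Lean sketch of that (names made up; I'm reading the existential as a subtype/Σ-like type so the witness stays visible, since Lean's built-in ∃ lives in proof-irrelevant Prop):

    -- "for every n there is a larger m", with the witness part of the term
    def bigger₁ : (n : Nat) → { m : Nat // n < m } :=
      fun n => ⟨n + 1, Nat.lt_succ_self n⟩

    def bigger₂ : (n : Nat) → { m : Nat // n < m } :=
      fun n => ⟨n + 2, Nat.lt_succ_of_lt (Nat.lt_succ_self n)⟩

These are different programs (they return different numbers), yet they inhabit the same statement read as a type.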

Maybe if one required, e.g., equality types to be sub-singletons (i.e. if p, q : Eq(Nat, x, y) then p is q), but allowed other propositions to have multiple distinct elements, that could be... something?
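For what it's worth, Lean already does something like the first half of that, only stronger: all of Prop is definitionally proof-irrelevant, so any two proofs of an equality are equal by rfl:

    example {x y : Nat} (p q : x = y) : p = q := rfl

Requiring this only for equality types while letting other propositions keep distinct proofs is, as far as I understand, roughly the "h-set / axiom K" picture from HoTT.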

You might know more about this than me.


I appreciate your comment but I think you're overcomplicating it for yourself. (And btw I doubt I know more about this than you.)

You say: "It isn’t clear to me why the encoding of the natural numbers would come up. I would have thought that some statement about the natural numbers would involve some type, considered as the type of natural numbers, which I suppose if you construct it from other simpler primitives would have some kind of encoding."

OK, so let's say we have an agreement that your representation of the type of natural numbers is, from a typing viewpoint, practically identical to my representation of it. Then we don't have to worry about the encoding. And it makes sense to do this for all the primitive types we are using.

But then consider your proposition "for all x:X, there exists y:Y, such that P(x,y)". This proposition is itself a type, which is constructed from primitive types. So why would you here, all of a sudden, want to distinguish between different proofs, or rather different representations, of this particular type?

I think, if we both accept the propositions-as-types paradigm, then I am reasoning backwards from it. If the structure of two different proofs of the same (propositional) type matters (whatever the reason), and not just the existence of these proofs, why shouldn't it matter for the primitive types? It implies that we need to come to an agreement on what primitive types we are using, because their structure might matter, too.

And that's why I said you effectively want your "universal proof equivalence" to be the beta-equivalence of the underlying lambda terms, because lambda calculus is the mechanism by which we construct the complex types from the primitive ones (not all constructions are valid, though; there are typing constraints). But, as I already explained, it doesn't really give you a grounding, because there are multiple ways to define the primitives in lambda calculus, and they cause the terms not to be beta-equivalent. So you have to care about having such an agreement on how you implement these primitives. Hopefully this clears up where it comes from.

But having that agreement in itself then limits what you can prove (in particular, you cannot prove the equivalence of different representations, since you're already taking it as an axiom), so you don't really want to have such an agreement. That's why I believe the only sensible way out is to propagate the proof indifference "upwards": even for complicated types (practically usable propositions), the actual representation shouldn't matter.

No worries, though. If you want the representation to matter, you would do the same thing you do in lambda calculus: choose some encoding of lambda terms and take an interpreter for this encoding that implements beta-reduction. When interpreting an encoded lambda term, you can discern different implementations (for example, by their complexity properties) of otherwise beta-equivalent lambda terms. Mathematically, this gives a different theory. (Also note that there is no reason for your encoding to work with just lambda abstraction and application as primitives; you can encode different lambda terms in different ways if you need to. That's what combinatory logic does: it lets you use an arbitrary basis of primitives.)
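To illustrate, here's a very rough Lean sketch of that setup (the Term type and the shift/subst/step functions are all made up for illustration): once lambda terms are explicit data, an interpreter for beta-reduction can observe things, such as step counts or term sizes, that distinguish otherwise beta-equivalent terms.

    -- untyped lambda terms with de Bruijn indices, encoded as data
    inductive Term where
      | var : Nat → Term            -- de Bruijn index
      | lam : Term → Term           -- abstraction
      | app : Term → Term → Term    -- application

    -- shift free variables at or above cutoff c by d
    def shift (d : Int) (c : Nat) : Term → Term
      | Term.var k   => Term.var (if k < c then k else (Int.ofNat k + d).toNat)
      | Term.lam t   => Term.lam (shift d (c + 1) t)
      | Term.app t u => Term.app (shift d c t) (shift d c u)

    -- substitute s for variable j
    def subst (j : Nat) (s : Term) : Term → Term
      | Term.var k   => if k = j then s else Term.var k
      | Term.lam t   => Term.lam (subst (j + 1) (shift 1 0 s) t)
      | Term.app t u => Term.app (subst j s t) (subst j s u)

    -- one leftmost-outermost beta step, if a redex exists
    def step : Term → Option Term
      | Term.app (Term.lam t) u =>
          some (shift (-1) 0 (subst 0 (shift 1 0 u) t))
      | Term.app t u =>
          match step t with
          | some t' => some (Term.app t' u)
          | none    => Option.map (fun u' => Term.app t u') (step u)
      | Term.lam t => Option.map Term.lam (step t)
      | Term.var _ => none

Counting how many times step fires before a normal form is reached already separates beta-equivalent terms by their operational behavior, which is exactly the kind of distinction the quotiented (beta-equivalence) view throws away.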



