Hacker News | ebingdom's comments

> It was to provide a small, parsimonious foundation for all of mathematics with a minimal number of "obvious" commitments, to give us confidence that the mathematics we're doing is consistent

I would argue that type theory does a better job at this than set theory. With set theory, you need to believe in two separate things: (1) the language of first-order logic (or some other logic) with its inference rules, (2) the set theory axioms. With type theory, there is only the language of lambda terms. And the rules for type theory are straightforward and intuitive for programmers, e.g., you can only call a function on an argument if the function's domain matches the type of the argument. Contrast that with set theory, where you have highly counterintuitive and seemingly arbitrary axioms like the axiom of separation.
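A minimal sketch of that application rule, using TypeScript's checker as a stand-in (TypeScript is of course not a proof assistant, and `double` is an invented example):

```typescript
// The core typing rule: a function f : A -> B applied to x : A yields f(x) : B.
// The checker rejects any application where the argument's type doesn't
// match the function's domain.
const double = (n: number): number => n * 2;

console.log(double(21)); // 42: the argument type `number` matches the domain

// double("21"); // rejected at compile time: string is not assignable to number
```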


I'm not sure you can call the type rules for CiC "intuitive for programmers". They're quite powerful and go further than "type of argument matches expected type".

Compared to CoC, first-order logic is a model of simplicity, and I think it's reasonable to argue that adding inductive types to get to CiC is as complex as adding a handful of axioms to FOL to obtain ZFC. And don't forget to pick a flavor of universe polymorphism or cumulativity to make it usable. That's not exactly simple.

I think there's a good case for CiC or HoTT being nice and usable for mathematicians. I don't think they're simple or more appealing to programmers. A kernel for Metamath is the simplest, and it has more independent implementations than any other system.


I suppose to some extent this is a matter of taste. I'll just say that, in my experience, people are typically very comfortable with, e.g., logical connectives and the primitive notion of a set of objects from grade school mathematics education. So this framework is "natural" and readily believed.

Further, in ZFC, the only basic notion is that of a set. In something like the calculus of constructions, there are five fundamental notions (if I remember correctly). From the standpoint of ontological parsimony, that's a win for ZFC.

The axiom of separation just says we can form subsets of existing sets - I think this is not so hard to swallow. I'm curious what you find counterintuitive about it.


Nobody's comparing the aesthetics of sets to types. That's not the point... People want mathematical proofs to automatically determine programs, or to be more than just proofs somehow. They want to exploit the capabilities of constructive logic. The fact that arbitrary sets can intersect each other makes extracting computational meaning from set theory proofs harder. The fact that types can be like sets but don't have to be is also why they're interesting: The notion has more flexibility.


> I do not understand why homotopy type theory posts are so popular on this website.

Martin-Löf type theory (and, therefore, homotopy type theory) is like an idealized programming language that is capable of expressing both programs and proofs, such that you can prove your code correct in the same language. Hacker News is a mostly technical community that often likes to geek out on programming languages.

Homotopy type theory is an especially cool flavor of type theory that finally gives a satisfying answer to the question of when two types should be considered propositionally equal.


> finally gives a satisfying answer to the question of when two types should be considered propositionally equal

One of my fondest memories was listening to Walid Taha debate Jeremy Siek, Todd Veldhuizen, and others, over beers, about the best way to define type equivalence in nontrivial type systems. It seemed so abstract, until I had to debug a template instantiation issue in GCC.


I don't think you intended "propositionally" equal in your final sentence. Equality is data in HoTT. If you take the propositional truncation then you usually throw away too much.


Yeah, I only meant as opposed to judgmental equality, not the quality of being a proposition.


For people who know Rust, Haskell's type classes are like a more general version of Rust's traits. But that comparison suggests the wrong chronology; Haskell's type classes came a couple decades earlier.
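As a rough illustration of what the compiler does for you in either language, here is the explicit "dictionary-passing" encoding of a Show-style class in TypeScript (the names here are hypothetical; GHC and rustc resolve these dictionaries for you automatically):

```typescript
// A "type class" as an explicit record of operations (a dictionary).
interface Show<A> {
  show: (a: A) => string;
}

// An "instance" for numbers.
const showNumber: Show<number> = { show: (n) => n.toString() };

// A derived instance: given Show<A>, we can build Show<A[]>.
const showArray = <A>(s: Show<A>): Show<A[]> => ({
  show: (xs) => "[" + xs.map(s.show).join(", ") + "]",
});

console.log(showArray(showNumber).show([1, 2, 3])); // prints "[1, 2, 3]"
```

In Haskell or Rust the `showNumber` dictionary is passed implicitly by instance/trait resolution; the encoding above just makes that machinery visible.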


> Thankfully, this sort of analysis tends to use "constructive logic"; in which case, we're told why some property isn't provable (either because we're given a counterexample proving it's incorrect, or we're told which assumptions/preconditions need to be proved first)

That's not what constructive logic means.


The last part about preconditions/assumptions isn't specific to constructive logic; it's about backwards-chaining proof tactics; for example, applying the `ring` tactic in Coq will tell us to prove that the given structure is a ring (see the Errors section of https://coq.inria.fr/refman/addendum/ring.html#concrete-usag... )

The reason constructive logic is nice for this is the first part: we'll be given a counterexample, or a total algorithm for generating counterexamples, etc. (depending on the nature of the statement and its quantifiers). This is preferable to classical logics, where an automated prover might tell us "false", but we have no idea why.


Yes - to elaborate, a constructive logic is a logic without the law of excluded middle (P or not P). This doesn't have much to do with whether a tool can tell you what obligations you haven't proven. The separation-logic-based provers I've used have all had excluded middle.


That's a ridiculously high bar to set before you recognize any progress being made.


What? Are you saying that me literally sleeping until an impossible thing happens is too high a bar? I should say so!

I probably meant it figuratively, huh? What I am saying, after having keynoted at a Microsoft formal methods conference (as a critic) and after playing with TLA+ (which was educational), is that formal verification will make little impact on commercial systems, because such systems are obliged to be built from inherently buggy components.

I am happy to acknowledge that progress is happening, yet the ultimate source of trouble is human ambition and impatience. How do you solve that?


> The net effect of this would be that I type "mutable" all over the place

You might want to consider adopting a more modern programming style for the benefit of your coworkers (and possibly yourself). Mutability all over the place is a nightmare, speaking as someone who currently has to work in a large codebase written like that. It's hard to predict what value a variable is going to have at any particular point in your code, since instead of only having to check where the variable is defined, you have to audit all the code between the definition and the use. For the same reason, it's hard to guarantee that your invariants are maintained, since there is a much larger surface area to check.
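A small TypeScript sketch of the point (the config object is made up): deriving new values instead of mutating means a reader never has to audit the code between definition and use to know what a binding holds.

```typescript
// Immutable binding: `base` can never change after this line.
const base = { retries: 3, timeoutMs: 500 } as const;

// Need a different timeout? Derive a new value rather than mutating in place.
const patched = { ...base, timeoutMs: 1000 };

console.log(base.timeoutMs);    // 500: still what the definition says it is
console.log(patched.timeoutMs); // 1000
```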


> You might want to consider adopting a more modern programming style

Just because some people think a particular style is useful doesn't mean everyone does. You might want to consider checking your biases before making comments like this. I understand the (many) arguments for using `const`, and I've concluded (for myself) that it's not a useful construct. Read and internalize the rest of my previous comment for more information.


It's not about biases. Using immutability is not just some personal preference. It's common knowledge that mutability everywhere is a bad practice. That's why there's a trend toward immutability by default in newer languages.


> Reasoning About Software

What's wrong with this? I need a way to describe techniques that make it easier to...well I don't even know another way to say it. Maybe I'm biased by my experience in formal verification, a field in which it's expected that you can formally study a program's behavior.


It should be replaced with the more accurate term 'handwaving about software'.


I think there might be a loose connection. When you want to prove something using mathematical induction, you must come up with an "induction hypothesis". This requires cleverness which could possibly be described as the product of inductive reasoning (in the philosophical sense). Essentially it amounts to finding a general principle that can be specialized to prove each of the inductive steps.

The main difference to me is that, with inductive reasoning (in the philosophical sense), you converge on a general principle but it might be wrong—it is only probable. Mathematical induction is the tool needed to close the gap—to turn a hypothesis which could be wrong into a bulletproof mathematical theorem. Of course, if your induction hypothesis didn't turn out to be right, you won't be able to complete the proof.

I use the Coq theorem prover to write induction proofs all the time, and often you have to try out different induction hypotheses until you get one that finally works. The process my brain goes through to come up with these induction hypotheses feels like "inductive reasoning" in the philosophical sense as I understand it.


Yeah, I think you nailed the reason why they both use the same word: "mathematical induction" is deductive once the proof is finished, but coming up with the proof is absolutely an inductive process.


> It's a bit strange that it got BigInt before Int64

JavaScript has Int64?


Sorry, no it doesn't. I meant there should be an Int64 type, and it probably should have been implemented before they dived into arbitrary-precision integers.

I suppose the answer if you really care about performance is "just use WebAssembly", since that does have native 64-bit integers.


Why would it need a 64-bit int? How often do you go over 2^53 in Javascript?


All sorts of hash functions expect 64-bit intermediate values. Worse, these algorithms are liable to go haywire and return very-not-randomized results if intermediate values are silently truncated to 53 bits (or magically become floating-point and thus lose their low-order one-bits), as happens when they're naively translated from C to JS.
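A TypeScript sketch of the workaround: BigInt keeps 64-bit intermediates exact, while plain Number silently rounds. (The multiplier below is the SplitMix64 mixing constant, used here only as an example of a large 64-bit value, not a real hash.)

```typescript
// Exact 64-bit wrap-around multiply via BigInt.
const mul64 = (a: bigint, b: bigint): bigint => BigInt.asUintN(64, a * b);

const x = 0x9e3779b97f4a7c15n; // SplitMix64 constant, larger than 2^63

console.log(mul64(x, 3n));  // exact product, wrapped modulo 2^64
console.log(Number(x) * 3); // rounded: Number carries only 53 bits of precision
```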


As a pure quantity representing a real-world value, probably not very often, but as a series of bits or a more abstract quantity like a memory address, it will often go above 2^53. Many algorithms and binary file encodings use 64-bit integers, so interfacing with them in Javascript is hard.


> I wish that all language design went through a filter where every character mattered like the designer was in an episode of Squid Game.

Yes, I totally support this! Coming from languages like Haskell, most languages look so unnecessarily noisy to me. Cut the fat!

> Anyone who proposes a way to minimize Lisp parentheses, who hasn't introduced a symbol for missing outline levels, is just pulling out chunks and hasn't used their system to write thousands of lines of code.

What are outline levels? I tried Googling it but didn't find anything.

> C syntax is ground glass in my eyes.

Is that a good thing? I'm not sure what you mean by it.


I think they mean that looking at it is like getting lots of tiny, ground-up pieces of glass in their eyeballs.

