
They add significant complexity to the language and are intended to help with correctness.

Not exactly -- they're intended to help with reasoning about the structure of programs and hopefully reject the obviously wrong ones. Correctness would still require a more formal system of validation. Whether this is more easily achieved in a language like Haskell is still an open question. The LiquidHaskell team and Synquid project are both very interesting.

In Haskell the compiler trivially rejects programs that fail to provide a branch for a possible case.

I can reason about the scope of what a function can do (and what it cannot) based on its type information.

I can reason about an equation in familiar terms of substitution and composition.

Types don't automatically ensure my program is correct, for some vague notion of correctness.
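To make the missing-case point concrete, here is a minimal sketch of my own (assuming GHC, with the incomplete-patterns warning promoted to an error):

    {-# OPTIONS_GHC -Werror=incomplete-patterns #-}

    data Light = Red | Amber | Green

    -- GHC rejects this module: the Green branch is missing, and the
    -- incomplete-patterns warning has been promoted to an error.
    describe :: Light -> String
    describe Red   = "stop"
    describe Amber = "slow down"

By default GHC only warns about the missing branch; compiling with warnings as errors, as many codebases do, is what makes the rejection automatic.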

Either way, it's not a language that's designed to solve a problem for practitioners

If I am right to assume that practitioners are programmers then I have to disagree with the premise here.

I am not a researcher. I write software for factories to help real people I've met face to face solve real problems.

I have solved pragmatic, real-world problems by writing programs with Haskell and it was a productive, novel experience. The feature I recently shipped to production has migrated nearly a million schema-less JSON documents to a well-defined schema, continuously back-fills the new table from an AWS Lambda function when we receive data in the old format, and was largely only possible to do in such a short amount of time because of the features of Haskell's type system. I haven't had a single bug report or failure in 5 months to boot.

To compare: every single Javascript service we've shipped to production in the last three years, despite all of the testing, validation, and effort we put into getting it right, has had at least one production error within three months. Tolerance for these failures has to be built into everything we do.

Haskell is not only suited for research. It is used at Facebook and GitHub. It has plenty of industrial uses and is definitely designed for, and with the aid of, practitioners. Its development is largely supported by volunteers: practitioners and researchers alike.

It may not be popular but all it takes is one startup to choose it as their basis to change the conversation. The Viaweb team chose Lisp, arguably as obscure as Haskell at the time, and did great work.

As for dependent types the why is simple: we can express richer specifications in the type system. It would enable Haskell to use a uniform language for terms and types. For programmers, the simplest, minimal example is sized vectors as types. You can pass around a vector in Haskell that is sized so that out-of-bounds access is not possible; however, the size of the vector is opaque at the type level. With a dependently-typed vector you could specify the bounds of the vector your algorithm supports, which further reduces cases where a programmer mistakenly uses your function on unbounded inputs. This is only the minimal example, but dependently typed languages scale up from there: there are research languages that have implemented full-on interactive theorem provers, with libraries that aim to encode the whole of mathematical research and knowledge: Lean and mathlib. I firmly believe this is where programming is going to go. Dependently typed languages are the next step.
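Today's Haskell can already approximate the sized-vector example with extensions; a minimal sketch (the names and encoding here are mine, using GADTs and DataKinds) looks like this:

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    import Data.Kind (Type)

    -- Peano naturals, promoted to the type level by DataKinds.
    data Nat = Z | S Nat

    -- A vector indexed by its length.
    data Vec (n :: Nat) (a :: Type) where
      VNil  :: Vec 'Z a
      VCons :: a -> Vec n a -> Vec ('S n) a

    -- Total head: applying this to an empty vector is a type error,
    -- so this particular out-of-bounds access is ruled out at compile time.
    vhead :: Vec ('S n) a -> a
    vhead (VCons x _) = x

Note that the type-level 'Z and 'S are distinct from the term-level constructors even though they are spelled alike; a dependently typed Haskell would let one set of ordinary terms play both roles.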

I started writing a guide for interested hackers to get started writing real, working programs in a research-heavy dependently-typed programming language called Lean which you can find here: https://agentultra.github.io/lean-for-hackers/

I suggest giving it a shot before condemning Haskell so easily. There's a lot of promise here.

And in my experience, selling Haskell is a lot easier than selling formal verification, fwiw. Hardly anyone uses TLA+ compared to Haskell. And I spend a fair amount of time and energy trying to sell both.




> working programs in a research-heavy dependently-typed programming language called Lean which you can find here

I have used Lean. It's nice and interesting, but not really a useful tool for software correctness intended for use by engineers. You can do some very interesting math with it, though.

> As for dependent types the why is simple: we can express richer specifications in the type system.

Sure, but why? Expressing richer specifications in the type systems also has a significant cost. A language whose goal is not research would likely spend years considering whether this is a good idea or not.

> I suggest giving it a shot before condemning Haskell so easily. There's a lot of promise here.

I have, and I don't think it's bad, I just don't particularly like it, for the same reasons some people don't particularly like Python or Java. I certainly don't think you can't use it for practical applications, but its evolution is, without a doubt, guided by research goals. And when it comes to promise, I don't know how many decades one can reasonably believe that. Haskell doesn't even spread in companies that are already using it, unlike languages that have contributed a significant bottom-line advantage, and it shows virtually no growth after 25 years, during many of which it was highly hyped. I have absolutely no problem with people who like it. Programming language preference is a matter of personal taste; I like Java, Clojure, Eve, ML and Zig, am not crazy about Haskell, and dislike Scala, and others like Haskell and dislike the languages I like; it's perfectly reasonable and expected, like taste in music. But you cannot claim it's a language designed for practitioners and yet have its features guided by research questions.


Thanks for being generous with my long post.

> Sure, but why?

I already explained why.

Why do you even care about Haskell at all?

> Expressing richer specifications in the type systems also has a significant cost.

And everything that has a cost should have a benefit or we would abandon the whole enterprise.

The simply typed lambda calculus will be 80 years old this year, in June. What was Church going after when he published it in the Journal of Symbolic Logic? If you go back and read it, he was trying to make sense of the lambda calculus. That calculus gave us one branch of computing as we know it, but it was not expressive enough to make logical statements about different classes of functions: not all functions that have the same shape have the same meaning.

In a similar fashion, not all programs are created equal. Given a set of functional requirements from a person, we can probably independently write programs that satisfy them. Type theory is simply a means for us to reason about these programs. Whether or not you write your program in a language that lets you verify its internal correctness with respect to the type system used to check it doesn't matter one bit to the user of the system. To the programmer, however, it can matter a great deal.

Of the programs we could write in languages that don't allow us to verify correctness with respect to a particular type theory, all we can say is: when I evaluated the program with these inputs at this time, it worked. We can be reasonably certain it will continue to work, and it will likely work on more inputs.

Of the programs we write in a language that does allow such verification, we can say that they are logically consistent and sound with respect to a particular type theory. We would still be obligated to prove that the program meets the specified requirements. But we don't have to worry about whether we forgot to handle a branch, accessed data that isn't there, overwrote data we didn't intend to... and a whole slew of other things that do plague us in languages without such tools. We can instead focus our effort on proving that we meet the requirements, which is a much smaller and more tractable problem.

As an engineer I don't see the reason why theory and research must be separate. I see my work as sitting between both worlds: theory informs application which in turn gives me more problems to inform the direction to go.

> I just don't particularly like it, for the same reasons some people don't particularly like Python or Java.

That's fair, I don't particularly like Java either.

But I don't go into threads about Java and ask Java developers why they bother with it.

Some of the challenges posted by Stephan are legitimately interesting challenges for Haskell programmers. A dependently typed Haskell would, ironically, make Haskell a simpler language and allow us to continue pushing type-driven programming further, for many of the benefits I've described. Many of the benefits of addressing these issues are described in the article.

> it shows virtually no growth after 25 years

I beg to differ. Even in the last 10 years it has shown a lot of growth. TFA opens by celebrating the advances made in the last decade.

The reason why Haskell isn't as popular as many mainstream languages, if it's so great, is network effects. Javascript is as popular as it is because it was the only language available for a long, long time on a platform many people wanted to write software for. Ruby would not even be on the radar if it weren't for its killer app: Rails. I dabbled in Ruby before Rails came along -- it was super obscure -- but no matter how good it was, without Rails it would have languished in obscurity, I think. Some languages became popular by sheer force: Java was a solution in search of a problem, and Sun (and later Oracle) had a war chest to make sure people thought about it. I recall a time when Python was incredibly rare and obscure and had similar network effects to overcome. Haskell doesn't have a company or group of companies pumping millions of dollars into its development, is not a platform exclusive, and so far hasn't had a killer application where it excelled over other incumbents (except perhaps in compilers).

It also doesn't help that many programmers continue to spread FUD about Haskell for reasons I don't understand. Similar to how people feel justified in being contemptuous of Javascript or PHP, I suppose.

However if Haskell was the platform-exclusive language of the web for several decades I think this conversation would be completely different.

Haskell is a fine language and the reasons for working on effect systems are good ones and I'm excited to see the results. If you don't get why then try harder to understand the work or find something constructive to say about it. The people working on it are brilliant and have good, practical reasons for it.


> I already explained why.

That doesn't answer why because you reversed the logical implication. See here: https://news.ycombinator.com/item?id=22140610

> Why do you even care about Haskell at all?

I don't know anything about, say, Ruby, so I have nothing to say about it. But Haskell is an interesting and elegant language that I know, and I like to follow its progress. I just think it suffers from a split brain in the community. The language evolves to answer research questions that I'm interested in, yet some go out of their way to present it differently.

> And everything that has a cost should have a benefit or we would abandon the whole enterprise.

Yes, but that brings me to the point in the linked comments. We are interested in the question correctness => dependent types, not dependent types => correctness. The answer to why Haskell should add dependent types is that it's an interesting research question, and that Haskell is designed to answer research questions.

> As an engineer I don't see the reason why theory and research must be separate.

They are separate because the concerns of engineers and researchers are vastly different. A researcher studies X and wants to know what X can do. An engineer wants to know the best way of doing Y. Not to mention that out of every 100 things in research, maybe one will ultimately, after years of engineering, result in a solution engineers need. How do I know which one it is? I love both engineering and research, but the two are very far apart.

> But I don't go into threads about Java and ask Java developers why they bother with it.

If you were interested in Java then perhaps you should. BTW, I didn't ask why people who use Haskell bother with it. They do it for the same reason people use any non-popular language: they like it. I ask what they think the language is, and why it is clearly one thing and yet they insist on presenting it as another. Haskell also suffers from a terrible evangelism disease, where its evangelists don't just say things like "I really like Haskell, here's why, and perhaps you should too" but also make incorrect empirical claims about things I know a little about, like software correctness.

BTW, HN isn't a forum for Haskell fans, and anyone can say anything, as long as it's reasonable. I don't go to Haskell forums and discuss my problems with its community because I'm not in the target audience of those forums, but I think I am in HN's. This particular post is at the top of /r/Haskell right now, where it's being discussed by an exclusive group of Haskell fans.

But don't take what I write on social media too seriously. I don't. I think Haskell is a perfectly fine programming language, and it rightly has some fans.


> The answer to why Haskell should add dependent types is that it's an interesting research question, and that Haskell is designed to answer research questions.

Interesting, yes, but I believe it will also have practical benefits for engineers using the language as well.

I hope Haskell will migrate to dependent types because it will unify the language. Some of the more frustrating features of Haskell are due to it having a separate language for types and terms. The huge lists of language pragmas and the singleton libraries being pulled out are what I hear people complain about most when they say Haskell is too complicated. If you can remove the barrier between types and terms, then most of that added complexity goes away.
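As a rough sketch of that barrier (my own example): to use natural-number addition at both the term and the type level in today's Haskell, you effectively write it three times, once as a function, once as a type family, and once on singletons to connect the two levels.

    {-# LANGUAGE DataKinds, GADTs, TypeFamilies, KindSignatures #-}

    data Nat = Z | S Nat

    -- 1. Term-level addition.
    plus :: Nat -> Nat -> Nat
    plus Z     m = m
    plus (S n) m = S (plus n m)

    -- 2. The same definition again as a type family, because
    --    ordinary terms cannot appear in types.
    type family Plus (n :: Nat) (m :: Nat) :: Nat where
      Plus 'Z     m = m
      Plus ('S n) m = 'S (Plus n m)

    -- 3. A third time, on singletons, to connect the two levels.
    data SNat (n :: Nat) where
      SZ :: SNat 'Z
      SS :: SNat n -> SNat ('S n)

    sPlus :: SNat n -> SNat m -> SNat (Plus n m)
    sPlus SZ     m = m
    sPlus (SS n) m = SS (sPlus n m)

With dependent types, the first definition could play all three roles.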

A good example of this is Lean. It's implemented in a dependently typed programming language because it unifies types and terms which makes writing the compiler and programs more simple. I think this could also be extended to Haskell.

It will be difficult, and presented as a "grand challenge," because unlike Lean, Haskell has many more libraries and applications that depend on it and we all know how breaking compatibility goes.

> They are separate because the concerns of engineers and researchers are vastly different.

In the twenty-some years that I've been fortunate enough to write software for a living, I don't believe this is true. I find that in software engineering, research and application inform one another and have a tight feedback loop. I read papers, and often I implement them to help solve problems. Often, in implementing that research, I find feedback to give the authors, or I find new problems that require some research in order to solve.

Pure mathematicians have it a bit harder. I lament that John Conway may never live to see the solution to some of the problems he's posed.

But I think we have it pretty good. The logicians have done most of the work for us. And we can implement new theories quite fast.

BTW, I'm not saying you can't comment on or criticize Haskell; this is an interesting discussion. I am a little sensitive to people being contemptuous of the hard work others put into Haskell and was probably either misreading your comments or responding to that. For the people who do work on Haskell, I think these are the great challenges for us to solve this decade. Algebraic effects have been used in production at GitHub and elsewhere; there's still work to do to make them shine, but it's generally good stuff moving in a positive direction. Same with dependent types.

Regardless, thank you for being patient. I appreciate your work and your writing here and elsewhere. All the best.


> A good example of this is Lean. It's implemented in a dependently typed programming language because it unifies types and terms which makes writing the compiler and programs more simple. I think this could also be extended to Haskell.

If in the next fifty years more than, say, 3% of programmers will be programming in a language like Lean, I will eat my hat. Take a Haskeller's favorite, simplest "function", id, and look at its Lean definition (as you know the universe polymorphism is required for consistency):

    universe u
    def id {α : Sort u} (a : α) : α := a
Now, try to explain to a programmer what kind of object id is (not why the definition must be this way, but what it is that it defines). And it's not as if there is some hidden universal mathematical truth about programming here; this is an unnecessary complication due to Lean's chosen foundation, which was chosen not with the best interests of programmers in mind (which I know because I discussed this with Jeremy Avigad). In fact, few if any of Lean's authors have any significant experience in software development in industry, or in software verification, and they made an error in their introductory paper that anyone with experience in software correctness would immediately spot. Lean is fun in its special way, but I think it's a horrible, if entertaining, way to write programs.

> I find that in software engineering, research and application inform one another and have a tight feedback loop. I read papers, and often I implement them to help solve problems.

That's not what I meant, though. Of course research and practice enrich one another. But their goals and concerns are drastically different. It is rarely the case that an interesting research question is a recommended practice for practitioners. In fact, the more interesting something is to a researcher, the less likely it is to be a good idea for a general recommendation for practitioners.


> If in the next fifty years more than, say, 3% of programmers will be programming in a language like Lean, I will eat my hat.

You're not even putting any stakes on the table. That's such a safe bet. It's unlikely you'd even have a chance of losing, the odds are so stacked.

There's a great talk on why functional programming is not the norm: https://www.youtube.com/watch?v=QyJZzq0v7Z4

The short of it is: network effects. Haskell, OCaml, SML... they've never had the benefit of being the exclusive language to a popular platform like the web, macOS, or UNIX. They've not yet had a killer app like Rails or Numpy/SciPy/PyTorch. They've certainly not been foisted on the market with a war chest of marketing dollars like Java and C#. And they're not an incremental improvement on the status quo with an obvious, easy benefit.

> Now, try to explain to a programmer what kind of object id is

That's a rhetorical exercise. You can take a random snippet of APL or Fortran and the outcome would be the same. Every language requires training and effort to learn.

I teach people Haskell and TLA+. If you think teaching people the value of a language like Haskell is hard you should see the blank stares I get on the first day teaching experienced developers TLA+. Interested folks will come around eventually and have their moment but it takes training and dedication to get there.

> was chosen not with the best interests of programmers in mind

I'm well aware. I converse regularly with the maintainers of mathlib and the community. I know the focus there is on building a good tool for mathematicians.

I still think they built an interesting programming language even if that wasn't their intended purpose. That's why I started writing the hackers' guide. The documentation on the programming side is missing and needs to be filled in. They still need programmers to build the tools, the VM, the libraries, etc.

It's niche. And I'm not even the least bit surprised you find it horrible. I don't expect Lean to take off with programmers. I think the ideas in it are good, and there are definitely some things we could take from it in designing the dependently typed programming language that does penetrate the market and overcome the network effects.

Either way, I think the people proposing the challenges for Haskell programmers to solve have good reasons to do so, otherwise we would have a hard time finding the motivation for it. We want to see where algebraic effects go because new users often complain that monad transformers are hard to learn, and users of mtl cite many problems that a well-done algebraic effect system would alleviate (a sketch of the transformer style follows below). Dependent types would remove the complexity of writing programs that work with types and enable us to write more programs with more expressive types. If you don't see why that's useful then what do you think the community should be focusing on?
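For readers unfamiliar with that complaint, here is a tiny sketch in explicit-lift transformer style (the App type and tick are hypothetical names of mine); mtl hides some of the lifting behind typeclasses, at the cost of the well-known quadratic-instances problem, and algebraic effect systems aim to do away with both:

    import Control.Monad.Trans.Class (lift)
    import Control.Monad.Trans.Reader (ReaderT, ask, runReaderT)
    import Control.Monad.Trans.State (StateT, get, put, evalStateT)

    -- A stack of two effects over IO. The order of the layers is
    -- baked into the type, and each action must be lifted to the
    -- layer it belongs to.
    type App = ReaderT Int (StateT Int IO)

    tick :: App ()
    tick = do
      step <- ask       -- outermost layer: no lift needed
      n    <- lift get  -- one lift down, into StateT
      lift (put (n + step))
      lift (lift (putStrLn ("n = " ++ show (n + step))))  -- two lifts, into IO

    main :: IO ()
    main = evalStateT (runReaderT (tick >> tick) 2) 0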


> There's a great talk on why functional programming is not the norm

It's a great talk in the sense that it satisfies people who want excuses. The real reason is similar to the one in the talk, but more disappointing: over decades, functional programming was not found to be either significantly better or significantly worse than other paradigms in productivity or other similar metrics, but for many years imperative paradigms enjoyed a significant benefit in runtime performance, and so there's really no reason to switch. I honestly don't care much -- I like, and hate, OOP and FP equally -- but paradigms exist to serve practitioners, not the other way around.

One thing is clear: you cannot at once say you're interested in serving the research of typed lambda calculus -- a worthy research goal -- and also in advancing the state-of-the-art in the software industry, which is another worthy goal, but a very different one. If you're interested in the latter you must conduct some sort of study on the problem at hand; if you're interested in the former, then you have zero information to make a claim about the latter. In other words, you can either want to advance programming or advance typed FP; you cannot do both at once, because they require very different kinds of study. If you're for typed FP, then that position is no less valid than being for OOP, or any other paradigm some people like, but it's not more valid, either.

> And they're not an incremental improvement on the status quo with an obvious, easy benefit.

... or at all.

> That's a rhetorical exercise. You can take a random snippet of APL or Fortran and the outcome would be the same.

I totally disagree here. Simplifying ideas makes you understand what is essential and what isn't, and is absolutely required before some idea can get out of the lab and become a useful technology in the field. If the most basic subroutine represents some object in a meta-universe, that even 99% of mathematicians have no clue about -- you're off to a bad start with engineers. The use of constructive type theory as a foundation is interesting to logicians who want to study the properties of constructive type theories and of proof assistants based on them. It is nothing but an unnecessary complication for software developers. As you point out, there are often some necessary complications, so there's really no need to add unnecessary ones.

BTW, you mentioned TLA+. TLA+ also requires some effort, although significantly less than Lean (or Isabelle, or Coq), but TLA+ has made a historical breakthrough in formal logic. From its inception in the late 19th century, and even from its first beginnings in the 17th, formal logic aspired to be a useful tool for practitioners. But time and again it's fallen far short of that goal, something that logicians like John von Neumann and Alan Turing bemoaned. AFAIK, TLA+ is the first powerful (i.e. with arbitrary quantification) formal logic in the history of formal logic that has gained some use among ordinary practitioners -- to paraphrase Alan Turing, among "the programmers in the street" (Turing said that formal logic failed to find use for "the mathematician in the street," and tried, unsuccessfully, to rectify that). It was no miracle: Lamport spent years trying to simplify the logic and adapt it to the actual needs of engineers, including through "field research" (others, like David Harel, have done the same). That is something that the logicians behind other proof assistants seem to be uninterested in doing, and so their failure in achieving that particular goal is unsurprising.

> Every language requires training and effort to learn. ... Interested folks will come around eventually and have their moment but it takes training and dedication to get there.

But why get there at all? I'm interested, I learned some Haskell and some Lean, I think Haskell is OK -- not much better or much worse than most languages, but not my cup of tea -- and Lean is very interesting for those interested in formal logic, but offers little to engineers; I enjoy it because I'm interested in some theoretical aspects. But why would I subject other developers who aren't particularly interested in a certain paradigm to suffer until they "get there" just for the sake of that paradigm? Again, paradigms exist to serve practitioners, not the other way around.

> If you don't see why that's useful then what do you think the community should be focusing on?

I think the community should be honest about what Haskell is: a testing ground for ideas in the research of typed functional programming. Then, it should consider measuring some results before disappearing down its own rabbit hole, where it's of little use to anyone. Algebraic effects could be an interesting experiment at this point. I don't really care about the pains of all 300 Haskell developers with monad transformers, but if Haskell tries another approach to IO, then it might eventually find a generally useful one; the same goes for using linear types instead of monads. But dependent types? They're currently at the stage where even their biggest fans find it hard to say more good things about them than bad. If Haskell wants to serve software by testing out new ideas in typed functional programming, I think it should at least focus on ones that might still be in the lab but are at least out of the test tube.


> BTW, you mentioned TLA+. TLA+ also requires some effort, although significantly less than Lean (or Isabelle, or Coq), but TLA+ has made a historical breakthrough in formal logic. From its inception in the late 19th century, and even from its first beginnings in the 17th, formal logic aspired to be a useful tool for practitioners. But time and again it's fallen far short of that goal, something that logicians like John von Neumann and Alan Turing bemoaned. AFAIK, TLA+ is the first powerful (i.e. with arbitrary quantification) formal logic in the history of formal logic that has gained some use among ordinary practitioners -- to paraphrase Alan Turing, among "the programmers in the street" (Turing said that formal logic failed to find use for "the mathematician in the street," and tried, unsuccessfully, to rectify that).

I will bet you 100 USD that TLA+ had less than 10% of the users of Z before the AWS paper.

> It was no miracle: Lamport spent years trying to simplify the logic and adapt it to the actual needs of engineers, including through "field research" (others, like David Harel, have done the same). That is something that the logicians behind other proof assistants seem to be uninterested in doing, and so their failure in achieving that particular goal is unsurprising.

I will bet you another $100 that the success of TLA+ has nothing to do with making it "accessible" and everything to do with AWS vouching for it. If reaching the actual needs of the engineers was so important, why is the CLI tooling in such a bad state?


So what? TLA+ still achieved something that no other formal logic had before it (AFAIK). If you mean to say that Lean could do the same, I think spending a month with it will dissuade you of that notion, despite the fact that it is more similar to programming, and despite its more programming-like tooling that programmers of certain backgrounds might appreciate.

Amazon may not have used TLA+ if not for TLC, but very good model checkers have existed for decades, have been used successfully in industry, and still have not made any inroads into mainstream software.

> If reaching the actual needs of the engineers was so important, why is the CLI tooling in such a bad state?

For one, resources are limited. For another, you assume that the best place to invest effort in order to help developers is in CLI tooling. I, for one, disagree. I think building a model-checker that works well is a higher priority; I also think that designing what is possibly the first "utilized" logic in the history of formal logic is also more important. Even with those two, there are other things that I think deserve more attention, because I think TLA+ has much bigger problems than how you launch some program.

I also think that some of the complaints about the tooling might be about something else altogether, but I'm not entirely sure what it is. Simply doing what beginners think they want is rarely a good strategy for product design. You get the feedback, but then you have to understand what the true issue is. Having said that, there can certainly be some reasonable disagreement over priorities.


> So what? TLA+ still achieved something that no other formal logic had before it (AFAIK).

What I'm saying is that either TLA+ wasn't the first (Z was earlier), _or_ TLA+'s success has much less to do with anything especially accessible about it and more because a high profile company wrote about it.

> because I think TLA+ has much bigger problems than how you launch some program. [...] I also think that some of the complaints about the tooling might be about something else altogether, but I'm not entirely sure what it is. Simply doing what beginners think they want is rarely a good strategy for product design. You get the feedback, but then you have to understand what the true issue is

When I asked "what would make TLA+ easier to use", tons of people mentioned better CLI tooling. You told them all that they don't actually need that and they are thinking about TLA+ wrong. Maybe you aren't actually listening to the feedback.

I've also heard it from tons of experienced TLA+ users that they'd like better CLI tooling.


> TLA+ wasn't the first (Z was earlier)

Was Z ever used outside high-assurance software (or the classroom)? BTW, while Z certainly employs a formal logic, I don't think it is a formal logic itself -- unlike TLA+ -- at least not in the common usage of the term.

> TLA+'s success has much less to do with anything especially accessible about it and more because a high profile company wrote about it.

But for a high-profile company to write about it, it first had to be used at that high-profile company for some mainstream software. My point is that being an accessible logic may not be a sufficient condition for adoption, but it is certainly a necessary one, and an extraordinary accomplishment. Both Isabelle and Coq have had some significant exposure, and yet they are still virtually unused by non-researchers or former researchers.

> When I asked "what would make TLA+ easier to use", tons of people mentioned better CLI tooling. You told them all that they don't actually need that and they are thinking about TLA+ wrong. Maybe you aren't actually listening to the feedback.

Maybe, but "listening to feedback" is not the same as accepting it at face value. The vast majority of those people were either relative beginners or not even that. To understand the feedback I need to first make sure they understand what TLA+ is and how it's supposed to be used, or else their very expectations could be misplaced. I've written my thoughts on why I thought the answer to that question is negative here: https://old.reddit.com/r/tlaplus/comments/edqf6j/an_interest... I don't make any decisions whatsoever on TLA+'s development, so I'm not the one who needs convincing, but as far as forming my personal opinion on the feedback, what I wrote there is my response to it. It's certainly possible that either my premise or my conclusion is wrong, or both. It's also possible that both are right, and still better CLI support should be a top priority. That is why I would very much like to understand the feedback in more depth. If you like, we could continue it on /r/tlaplus (or the mailing list), where it will be more easily discoverable.

> I've also heard it from tons of experienced TLA+ users that they'd like better CLI tooling.

Sure, but the question is, is that the most important thing to focus efforts on? Anyway, I'm sure that any contribution that makes any part of the TLA+ toolset more friendly, even if for a subset of users, would be welcomed and appreciated by the maintainers, as long as it doesn't interfere with what they consider more important goals.


> and also in advancing the state-of-the-art in the software industry, which is another worthy goal, but a very different one.

The state of the art doesn't advance until practitioners share the results of adopting new techniques and tools. How else would the SWEBOK guide [0] have a section on unit testing if we didn't have study results correlating the practice of TDD with a reduction in software errors, with productivity, or with some form of correctness? Is it irrational of us to practice TDD anyway?

How do we know formal methods are effective if there hasn't been a large scale study?

An interesting example of this balance between research and practice came to me from learning about the construction of cathedrals, particularly the funicular system of Gaudí [1]. Instead of conventional drawings he used hanging models of the structures. It was not a widely practiced or proven method, but it enabled him to reduce his models to one-tenth scale and accurately model the forces in complex curves. As he shared his work and others studied it, the state of the art advanced.

> .. or at all.

> I think Haskell is OK -- not much better or much worse than most languages, but not my cup of tea -- and Lean is very interesting for those interested in formal logic, but offers little to engineers

I think we're at, or approaching, the point where we're talking past each other.

I heard the talk from the Semantic team at GitHub, and their experience report with algebraic effects sounded positive [2]. They are probably the largest production deployment of a code base exploiting algebraic effects. Their primary area of focus is semantic analysis of ASTs. And yet they would like to continue pushing the state of the art with algebraic effects because, in their words, "In Haskell, control flow is not dictated by the language, but by the data structures used." Algebraic effect interpreters enable them to interpret and analyze unsafe code without having to maintain buggy, difficult-to-reason-about exception-based workflows. They say this singular feature alone is essential for rapid development.
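To give a flavor of what "control flow dictated by data structures" means, here is a minimal free-monad sketch of effect interpretation (my own toy encoding, not the fused-effects library Semantic actually uses): the program is plain data, and separate interpreters decide whether its operations run in IO or purely, for testing and analysis.

    {-# LANGUAGE DeriveFunctor #-}

    -- A free monad over a functor f: programs are data structures,
    -- and interpreters give their operations meaning.
    data Free f a = Pure a | Free (f (Free f a))

    instance Functor f => Functor (Free f) where
      fmap g (Pure a)  = Pure (g a)
      fmap g (Free fa) = Free (fmap (fmap g) fa)

    instance Functor f => Applicative (Free f) where
      pure = Pure
      Pure g  <*> x = fmap g x
      Free fg <*> x = Free (fmap (<*> x) fg)

    instance Functor f => Monad (Free f) where
      Pure a  >>= k = k a
      Free fa >>= k = Free (fmap (>>= k) fa)

    -- A hypothetical effect signature.
    data Teletype k = ReadLine (String -> k) | WriteLine String k
      deriving Functor

    readLine :: Free Teletype String
    readLine = Free (ReadLine Pure)

    writeLine :: String -> Free Teletype ()
    writeLine s = Free (WriteLine s (Pure ()))

    greet :: Free Teletype ()
    greet = readLine >>= \name -> writeLine ("hello, " ++ name)

    -- One interpreter runs the program in IO...
    runIO :: Free Teletype a -> IO a
    runIO (Pure a)               = pure a
    runIO (Free (ReadLine k))    = getLine >>= runIO . k
    runIO (Free (WriteLine s k)) = putStrLn s >> runIO k

    -- ...another runs the same program purely against canned input,
    -- which is what makes testing and analysis tractable.
    runPure :: [String] -> Free Teletype a -> (a, [String])
    runPure = go []
      where
        go outs _      (Pure a)               = (a, reverse outs)
        go outs (i:is) (Free (ReadLine k))    = go outs is (k i)
        go outs []     (Free (ReadLine k))    = go outs [] (k "")
        go outs is     (Free (WriteLine s k)) = go (s:outs) is k

For instance, runPure ["world"] greet evaluates to ((), ["hello, world"]) without touching IO.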

Yes, they're doing research. However, their research isn't in answering hypotheses about effect interpreters. It's directly contributing to an interesting feature on one of the largest code-sharing platforms in the world. They were able to exploit algebraic effects so they could focus on their research, their real-world problem, without having to fuss around with the limitations of conventional mainstream languages.

I do wish more companies using Haskell would share their experiences with it. It would go a long way to dispelling the pervasive view that Haskell is only good for research. It's not: it's a practical, production ready language, run time, and ecosystem that can help anyone interested in using it to tackle hard problems.

In a similar fashion, I think dependent types would make working with types in Haskell easier. Instead of having to learn the particular mix of language extensions you need to enable in order to work with inductive type families, it could be more like Lean, where it's a normal data declaration. Haskell introduces a lot of extra concepts at the type level because it has two different languages. I could simplify a lot of code in libraries I've written if Haskell had dependent types (and I'm working on libraries in Lean that may eventually prove it).

It may take a while, but I think Haskell will eventually get there. The community has been discussing and researching it for years. It's going at the pace it is because the Haskell community is not focused on pure research: there are a lot of production applications and libraries out there, and they don't want to break compatibility for the sake of answering interesting research questions. Despite your opinions on type theory, there are real, practical reasons I might exploit Rank-2 types or Generalized Algebraic Data Types, and it's not simply to answer a pressing hypothetical question: I've used these in anger.

I hope some of this answers your question of "why get there at all?"

[0] https://www.computer.org/education/bodies-of-knowledge/softw...

[1] https://en.wikipedia.org/wiki/Church_of_Col%C3%B2nia_G%C3%BC...

[2] https://github.com/github/semantic/blob/master/docs/why-hask...


What I'm trying to say is simple: personal aesthetic preference, empirical results, and research questions are all very important, and every one of them is a reasonable motivation for making decisions, but they are very different from one another and should not be conflated.

As to Haskell, it is definitely production-ready and usable in a variety of circumstances. Also, its design and evolution are just as definitely primarily guided by research goals. These two simple, obvious facts are not in opposition to one another. I have read numerous articles on why some people like Haskell, or many other languages. They are very important: some people like the tracking of "side effects"; some enjoy the succinctness. None of those articles, however, has any bearing on the other questions, which are not necessarily more important, but are very different.


> Lean. It's implemented in a dependently typed programming ...

Lean is implemented in C++ [1, 2]. There's a message in there somewhere. The message is probably something along the lines of: if you want performance, use a low-level language.

[1] https://github.com/leanprover/lean/tree/master/src

[2] https://github.com/leanprover/lean4/tree/master/src


I'm aware, I have a branch implementing C FFI for Lean 3; my point was that it's logic, mathlib could be implemented in system F instead. I'm pretty sure Lean could be implemented in Lean eventually.


What do you mean by "it's logic, mathlib could be implemented in system F"? System F is not dependently typed!

Anyway, nobody doubts that Lean's logic is a dependently typed formalism: "Lean is based on a version of dependent type theory known as the Calculus of Inductive Constructions" [1]. I thought the discussion here was about using dependent types in programming languages. Note that logics and programming languages are not the same things.

[1] J. Avigad, L. de Moura, S. Kong, Theorem Proving in Lean.


I think I mistyped. Pardon the pun!

I meant that Lean, I think, could be implemented in Lean.

Do you also work with Lean?


That's an interesting question: can Lean be implemented in Lean's dependent type theory?

I'm not currently working with Lean, but I will start a large verification project in a few months. We have not yet decided which prover to go for. We may write our own.


That sounds like it could be a lot of fun! Good luck. :)


> remove the barrier between types and terms then ...

... you will lose type inference, and hence Haskell becomes completely unusable in industrial practice. (Full dependent types make complete type inference undecidable, forcing far more annotations.)



