> There's a great talk on why functional programming is not the norm
It's a great talk in the sense that it satisfies people who want excuses. The real reason is similar to the one in the talk, but more disappointing: over decades, functional programming was not found to be either significantly better or significantly worse than other paradigms in productivity or other similar metrics, but for many years imperative paradigms enjoyed a significant benefit in runtime performance, and so there's really no reason to switch. I honestly couldn't care less -- I like, and hate, OOP and FP equally -- but paradigms exist to serve practitioners, not the other way around.
One thing is clear: you cannot at once say you're interested in serving the research of typed lambda calculus -- a worthy research goal -- and also in advancing the state-of-the-art in the software industry, which is another worthy goal, but a very different one. If you're interested in the latter you must conduct some sort of study on the problem at hand; if you're interested in the former, then you have zero information to make a claim about the latter. In other words, you can either want to advance programming or advance typed FP; you cannot do both at once, because the two require very different kinds of study. If you're for typed FP, then that position is no less valid than being for OOP, or any other paradigm some people like, but it's not more valid, either.
> And they're not an incremental improvement on the status quo with an obvious, easy benefit.
... or at all.
> That's a rhetorical exercise. You can take a random snippet of APL or Fortran and the outcome would be the same.
I totally disagree here. Simplifying ideas makes you understand what is essential and what isn't, and is absolutely required before some idea can get out of the lab and become a useful technology in the field. If the most basic subroutine represents some object in a meta-universe that even 99% of mathematicians have no clue about, you're off to a bad start with engineers. The use of constructive type theory as a foundation is interesting to logicians who want to study the properties of constructive type theories and of proof assistants based on them. It is nothing but an unnecessary complication for software developers. As you point out, there are often some necessary complications, so there's really no need to add unnecessary ones.
BTW, you mentioned TLA+. TLA+ also requires some effort, although significantly less than Lean (or Isabelle, or Coq), but TLA+ has made a historical breakthrough in formal logic. From its inception in the late 19th century, and even from its first beginnings in the 17th, formal logic aspired to be a useful tool for practitioners. But time and again it's fallen far short of that goal, something that logicians like John von Neumann and Alan Turing bemoaned. AFAIK, TLA+ is the first powerful (i.e. with arbitrary quantification) formal logic in the history of formal logic that has gained some use among ordinary practitioners -- to paraphrase Alan Turing, among "the programmers in the street" (Turing said that formal logic failed to find use for "the mathematician in the street," and tried, unsuccessfully, to rectify that). It was no miracle: Lamport spent years trying to simplify the logic and adapt it to the actual needs of engineers, including through "field research" (others, like David Harel, have done the same). That is something that the logicians behind other proof assistants seem to be uninterested in doing, and so their failure in achieving that particular goal is unsurprising.
> Every language requires training and effort to learn.
> ... Interested folks will come around eventually and have their moment but it takes training and dedication to get there.
But why get there at all? I'm interested; I've learned some Haskell and some Lean, and I think Haskell is OK -- not much better or much worse than most languages, but not my cup of tea -- and Lean is very interesting for those interested in formal logic, but offers little to engineers; I enjoy it because I'm interested in some theoretical aspects. But why would I make other developers who aren't particularly interested in a certain paradigm suffer until they "get there", just for the sake of that paradigm? Again, paradigms exist to serve practitioners, not the other way around.
> If you don't see why that's useful then what do you think the community should be focusing on?
I think the community should be honest about what Haskell is: a testing ground for ideas in the research of typed functional programming. Then, it should consider measuring some results before disappearing down its own rabbit hole, where it's of little use to anyone. Algebraic effects could be an interesting experiment at this point. I don't really care about the pains of all 300 Haskell developers with monad transformers, but if Haskell tries another approach to IO, then it might eventually find a generally useful one; the same goes for using linear types instead of monads. But dependent types? They're currently at the stage where even their biggest fans find it hard to say more good things about them than bad. If Haskell wants to serve software by testing out new ideas in typed functional programming, I think it should at least focus on ones that might still be in the lab but are at least out of the test tube.
> BTW, you mentioned TLA+. TLA+ also requires some effort, although significantly less than Lean (or Isabelle, or Coq), but TLA+ has made a historical breakthrough in formal logic. From its inception in the late 19th century, and even from its first beginnings in the 17th, formal logic aspired to be a useful tool for practitioners. But time and again it's fallen far short of that goal, something that logicians like John von Neumann and Alan Turing bemoaned. AFAIK, TLA+ is the first powerful (i.e. with arbitrary quantification) formal logic in the history of formal logic that has gained some use among ordinary practitioners -- to paraphrase Alan Turing, among "the programmers in the street" (Turing said that formal logic failed to find use for "the mathematician in the street," and tried, unsuccessfully, to rectify that).
I will bet you 100 USD that TLA+ had less than 10% of the users of Z before the AWS paper.
> It was no miracle: Lamport spent years trying to simplify the logic and adapt it to the actual needs of engineers, including through "field research" (others, like David Harel, have done the same). That is something that the logicians behind other proof assistants seem to be uninterested in doing, and so their failure in achieving that particular goal is unsurprising.
I will bet you another $100 that the success of TLA+ has nothing to do with making it "accessible" and everything to do with AWS vouching for it. If reaching the actual needs of the engineers was so important, why is the CLI tooling in such a bad state?
So what? TLA+ still achieved something that no other formal logic had before it (AFAIK). If you mean to say that Lean could do the same, I think spending a month with it will disabuse you of that notion, despite the fact that it is more similar to programming and despite its more programming-like tooling, which programmers of certain backgrounds might appreciate.
Amazon might not have used TLA+ if not for TLC, its model checker, but very good model checkers have existed for decades, have been used successfully in industry, and still have not made any inroads into mainstream software.
> If reaching the actual needs of the engineers was so important, why is the CLI tooling in such a bad state?
For one, resources are limited. For another, you assume that the best place to invest effort in order to help developers is in CLI tooling. I, for one, disagree. I think building a model-checker that works well is a higher priority; I also think that designing what is possibly the first "utilized" logic in the history of formal logic is also more important. Even with those two, there are other things that I think deserve more attention, because I think TLA+ has much bigger problems than how you launch some program.
I also think that some of the complaints about the tooling might be about something else altogether, but I'm not entirely sure what it is. Simply doing what beginners think they want is rarely a good strategy for product design. You get the feedback, but then you have to understand what the true issue is. Having said that, there can certainly be some reasonable disagreement over priorities.
> So what? TLA+ still achieved something that no other formal logic had before it (AFAIK).
What I'm saying is that either TLA+ wasn't the first (Z was earlier), _or_ TLA+'s success has much less to do with anything especially accessible about it and more because a high profile company wrote about it.
> because I think TLA+ has much bigger problems than how you launch some program. [...] I also think that some of the complaints about the tooling might be about something else altogether, but I'm not entirely sure what it is. Simply doing what beginners think they want is rarely a good strategy for product design. You get the feedback, but then you have to understand what the true issue is
When I asked "what would make TLA+ easier to use", tons of people mentioned better CLI tooling. You told them all that they don't actually need that and they are thinking about TLA+ wrong. Maybe you aren't actually listening to the feedback.
I've also heard it from tons of experienced TLA+ users that they'd like better CLI tooling.
Was Z ever used outside high-assurance software (or the classroom)? BTW, while Z certainly employs a formal logic, I don't think it is a formal logic itself -- unlike TLA+ -- at least not in the common usage of the term.
> TLA+'s success has much less to do with anything especially accessible about it and more because a high profile company wrote about it.
But for a high-profile company to write about it, it first had to be used at that high-profile company for some mainstream software. My point is that being an accessible logic may not be a sufficient condition for adoption, but it is certainly a necessary one, and an extraordinary accomplishment. Both Isabelle and Coq have had some significant exposure, and yet they are still virtually unused by non-researchers or former researchers.
> When I asked "what would make TLA+ easier to use", tons of people mentioned better CLI tooling. You told them all that they don't actually need that and they are thinking about TLA+ wrong. Maybe you aren't actually listening to the feedback.
Maybe, but "listening to feedback" is not the same as accepting it at face value. The vast majority of those people were either relative beginners or not even that. To understand the feedback I need to first make sure they understand what TLA+ is and how it's supposed to be used, or else their very expectations could be misplaced. I've written my thoughts on why I thought the answer to that question is negative here: https://old.reddit.com/r/tlaplus/comments/edqf6j/an_interest... I don't make any decisions whatsoever on TLA+'s development, so I'm not the one who needs convincing, but as far as forming my personal opinion on the feedback, what I wrote there is my response to it. It's certainly possible that either my premise or my conclusion is wrong, or both. It's also possible that both are right, and still better CLI support should be a top priority. That is why I would very much like to understand the feedback in more depth. If you like, we could continue it on /r/tlaplus (or the mailing list), where it will be more easily discoverable.
> I've also heard it from tons of experienced TLA+ users that they'd like better CLI tooling.
Sure, but the question is, is that the most important thing to focus efforts on? Anyway, I'm sure that any contribution that makes any part of the TLA+ toolset more friendly, even if for a subset of users, would be welcomed and appreciated by the maintainers, as long as it doesn't interfere with what they consider more important goals.
> and also in advancing the state-of-the-art in the software industry, which is another worthy goal, but a very different one.
The state of the art doesn't advance until practitioners share the results of adopting new techniques and tools. How else would the SWEBOK guide [0] have a section on unit testing if, in studies, we don't have results that correlate the practice of TDD with a reduction in software errors, with productivity, or with some form of correctness? Is it irrational of us to practice TDD anyway?
How do we know formal methods are effective if there hasn't been a large scale study?
An interesting example of this balance between research and practice came to me from learning about the construction of cathedrals, particularly the funicular system of Gaudí [1]. Instead of conventional drawings he used hanging models of the structures. It was not a widely practiced or proven method, but it enabled him to reduce his models to 1/10th scale and accurately model the forces in complex curves. As he shared his work and others studied it, the state of the art advanced.
> .. or at all.
> I think Haskell is OK -- not much better or much worse than most languages, but not my cup of tea -- and Lean is very interesting for those interested in formal logic, but offers little to engineers
I think we're at, or approaching, the point where we're talking past each other.
I heard the talk from the Semantic team at GitHub on their experience with algebraic effects, and it sounded positive [2]. They are probably the largest production deployment of a code base exploiting algebraic effects. Their primary area of focus is semantic analysis of ASTs. And yet they would like to continue pushing the state of the art with algebraic effects because, in their words, "In Haskell, control flow is not dictated by the language, but by the data structures used." Algebraic effect interpreters enable them to interpret and analyze unsafe code without having to maintain buggy, difficult-to-reason-about, exception-based workflows. They say this feature alone is essential for rapid development.
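To make the "control flow lives in the data structures" point concrete, here is a minimal sketch of the idea -- not the Semantic team's actual code (as I recall they maintain their own effects library, fused-effects), just a hand-rolled free-monad-style effect where a hypothetical failure effect is reified as data, and only the interpreter you choose decides what it means:

    {-# LANGUAGE DeriveFunctor #-}

    -- The program is just a data structure; control flow is decided by
    -- whichever interpreter walks it, not by the language's exception machinery.
    data Eff f a = Pure a | Impure (f (Eff f a)) deriving Functor

    instance Functor f => Applicative (Eff f) where
      pure = Pure
      Pure g   <*> x = fmap g x
      Impure m <*> x = Impure (fmap (<*> x) m)

    instance Functor f => Monad (Eff f) where
      Pure a   >>= k = k a
      Impure m >>= k = Impure (fmap (>>= k) m)

    -- A hypothetical effect signature: a recoverable analysis failure.
    newtype Failure k = Failure String deriving Functor

    failWith :: String -> Eff Failure a
    failWith msg = Impure (Failure msg)

    -- One interpreter among many: failures become Either values rather than
    -- exceptions to be caught somewhere far away.
    runFailure :: Eff Failure a -> Either String a
    runFailure (Pure a)             = Right a
    runFailure (Impure (Failure m)) = Left m

A second interpreter over the same program could log the failure and substitute a default instead, which is exactly the kind of flexibility an exception-based workflow doesn't give you.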
Yes, they're doing research. However, their research isn't in answering hypotheses about effect interpreters. It's directly contributing to an interesting feature in one of the largest code-sharing platforms in the world. They were able to exploit algebraic effects so they could focus on their research, their real-world problem, without having to fuss around with the limitations of conventional mainstream languages.
I do wish more companies using Haskell would share their experiences with it. It would go a long way to dispelling the pervasive view that Haskell is only good for research. It's not: it's a practical, production-ready language, runtime, and ecosystem that can help anyone interested in using it to tackle hard problems.
In a similar fashion I think dependent types would make working with types in Haskell easier. Instead of having to learn the different mix of language extensions you need to enable in order to work with inductive type families, it could be more like Lean, where it's a normal data declaration. Haskell introduces a lot of extra concepts at the type level because it has two different languages, one for terms and one for types. I could simplify a lot of code in libraries I've written if Haskell had dependent types (and I'm working on libraries in Lean that may eventually prove it).
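As a small illustration of the extension mix I mean (a textbook example, not code from my libraries), here is a length-indexed vector in today's Haskell; in Lean the equivalent is a single ordinary inductive declaration, and the type-level addition is just the ordinary one:

    {-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}

    import Data.Kind (Type)

    -- Naturals, available at the type level only via DataKinds promotion.
    data Nat = Z | S Nat

    -- A length-indexed vector: needs GADT syntax plus promoted kinds,
    -- because Haskell's term language and type language are separate.
    data Vec (n :: Nat) (a :: Type) where
      VNil  :: Vec 'Z a
      VCons :: a -> Vec n a -> Vec ('S n) a

    -- Appending needs a closed type family that re-implements addition
    -- at the type level; in a dependently typed language this is just (+).
    type family Add (n :: Nat) (m :: Nat) :: Nat where
      Add 'Z     m = m
      Add ('S n) m = 'S (Add n m)

    append :: Vec n a -> Vec m a -> Vec (Add n m) a
    append VNil         ys = ys
    append (VCons x xs) ys = VCons x (append xs ys)

The Nat, the Vec, and the Add family each live in a slightly different language inside GHC, and that duplication is exactly what dependent types would remove.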
It may take a while but I think Haskell will eventually get there. The community has been discussing and researching it for years. It's going at the pace it is because the Haskell community is not focused on pure research: there are a lot of production applications and libraries out there, and they don't want to break compatibility for the sake of answering interesting research questions. Despite your opinions on type theory, there's a real, practical reason I might exploit Rank-2 types or Generalized Algebraic Data Types, and it's not simply to answer a pressing hypothetical question: I've used these in anger.
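For what it's worth, here is the shape a practical Rank-2 type often takes (a made-up example rather than code from anything I've shipped): a record wrapping a function that must work at every result type, which is what an effect handler or natural transformation needs:

    {-# LANGUAGE RankNTypes #-}

    -- The 'forall' sits inside the record, so a single Handler value
    -- can be applied at many different result types.
    newtype Handler f g = Handler { runHandler :: forall a. f a -> g a }

    -- Interpret Maybe into Either with a fixed error message.
    maybeToEither :: String -> Handler Maybe (Either String)
    maybeToEither msg = Handler (maybe (Left msg) Right)

    demo :: (Either String Int, Either String Bool)
    demo =
      let h = maybeToEither "missing"
      in (runHandler h (Just 3), runHandler h Nothing)

If 'a' were an ordinary type parameter of Handler instead, each handler value would be pinned to a single result type, and you'd need a separate one for every use site.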
I hope some of this answers your question as to "why get there at all?"
What I'm trying to say is simple: personal aesthetic preference, empirical results, and research questions are all very important, and every one of them is a reasonable motivation for making decisions, but they are very different from one another and should not be conflated.
As to Haskell, it is definitely production-ready and usable in a variety of circumstances. Also, its design and evolution are just as definitely primarily guided by research goals. These two simple, obvious facts are not in opposition to one another. I have read numerous articles on why some people like Haskell, or many other languages. They are very important: some people like the tracking of "side effects"; some enjoy the succinctness. None of those articles, however, have any bearing on other questions, which are not necessarily more important, but are very different.