Functional programming should be the future of software (ieee.org)
341 points by g4k on Nov 2, 2022 | 504 comments



It would be helpful if the article started off defining what a functional language is. A lot of languages have functional features but are not “purely” functional. I think most would agree there’s a spectrum: dynamic vs. static, eager vs. lazy, mutable vs. immutable.

So what flavor of functional programming, one might ask, since JavaScript is a dynamically typed flavor that is ubiquitous nowadays? The fine article suggests, drum roll... Haskell! The author believes a statically typed and lazily evaluated language is what we all should be using, unlike the various dynamically typed options like Smalltalk, Scheme, or Lisp. Standard ML and OCaml, being statically typed, are eagerly evaluated.

Most popular languages have added a lot of functional features in recent years so it’s clear the world is moving that way, imo.


To quote "Stop Writing Dead Programs" [1]: "If what you care about is systems that are highly fault tolerant, you should be using something like Erlang over something like Haskell because the facilities Erlang provides are more likely to give you working programs."

[1] https://www.youtube.com/watch?v=8Ab3ArE8W3s


That quote is absurd because the vast majority of applications on the planet are not written in Erlang and work just fine. Working and fault tolerance are in no way related. Being generous, the majority of applications with very high uptime are also not written in Erlang.


This. I came across this quote:

> If somebody came to me and wanted to pay me a lot of money to build a large scale message handling system that really had to be up all the time, could never afford to go down for years at a time, I would unhesitatingly choose Erlang to build it in

RabbitMQ might be the most famous example of a product written in Erlang. It's great, but I've seen it fall over. In my experience, cluster failures are typically caused by a handful of root causes like hardware error, resource exhaustion, network partitioning, or operator error. Whether a system is built in Erlang or Go, I'd imagine that these same root causes would exist.

I'd love to read in-depth why RabbitMQ's Erlang underpinnings make it better than, say, ActiveMQ or Kafka. Assuming 3 perfectly built clusters that aren't mishandled, will RabbitMQ somehow "win" over the other two because of some particular greatness in Erlang?


> the vast majority of applications on the planet are not written in Erlang and work just fine

The vast majority of applications on the planet are bug-ridden, fragile, over-budget, under-thought, mark-missing, user-hostile garbage. They most certainly do not "work just fine."


And also the vast majority of "working" applications have a full devops team, legions of highly paid senior developers, etc. "You" do not.


Ok so we’re adding “developer productivity” on to the list. Aside from moving the goalposts, one does not have a lot of faith in the knowledge of the people making these claims; like, where’s the proof?

Hint: read Joe Armstrong’s dissertation.


Is this actually true? I feel like the majority of software that exists is non-business software, simply because there's zero cost...


This quote isn't about "vast majority of software".

It's about tools we use and waste our time on.


Cloud Haskell brings ideas from Erlang to Haskell, but I don't know how they compare.


> A lot of languages have functional features but are not “purely” functional.

That was my first thought. I work mostly in Java because that's what they pay me to do, but I've almost never worked with a Java programmer who could actually write Java code using the OO features that the language is based around. When I see their Scala code... it's mostly var, rarely val, because it's easy to think about.


> I've almost never worked with a Java programmer who could actually write Java code using the OO features that the language is based around

I don't understand this. The language is based around primitive, flawed, simplistic OO features, right? Like "class Dog : Animal"? I never write code like that either, because it's bad practice. But you're saying they can't write code like that? Or that they don't use classes at all? How can you even write any Java that way?


> they don't use classes at all

As much as they can avoid it, yes. The only classes I ever see are those that are auto-generated from some JSON or ORM processing tool - everything else is a static function.


Unfortunately, a lot of Java programmers write imperative code in Java. Although I have seen "good" Java programmers write "good clean code", it's the exception, rather than the norm. Most of the good Java programmers that I know have moved on to better languages and platforms.


Would like to see a collection of good and bad Java code examples.


Whenever I see "interesting" object modeling beyond very basic inheritance, I groan. I've never experienced it being worth the hassle (templates, factories, etc..).

But my code is flush with collection.stream().filter().map().collect() etc. I was initially critical of it in code reviews (coming from C), but have been totally converted.


Templates (I assume you mean Generics, if we're talking Java?) aren't worth the hassle, but your code is flush with .filter() and .map()? Which rely entirely on generics to provide any typing support at all? You should learn to embrace generics; they're absolutely critical for the code you just said you write, and more. And they are most certainly not going away.


Functional programming won't succeed until the tooling problem is fixed. 'Tsoding' said it best: "developers are great at making tooling, but suck at making programming languages. Mathematicians are great at making programming languages, but suck at making tooling." This is why Rust is such a success story in my opinion: it is heavily influenced by FP, but developers are responsible for the tooling.

Anecdotally, the tooling is why I gave up on OCaml (given Rust's ML roots, I was seriously interested) and Haskell. I seriously couldn't figure out the idiomatic OCaml workflow/developer inner loop after more than a day of struggling. As for Haskell, I gave up maybe 20 minutes in, waiting for deps to come down for a Dhall contribution I wanted to make.

Institutionally, it's a hard sell if you need to train the whole team to just compile a project, vs. `make` or `cargo build` or `npm install && npm run build`.


I think tsoding has the wrong idea here. Most mathematicians are not working on GHC or Haskell standards or even using Haskell. Most are still doing mathematics with pen and paper. Many use packages like Sage or Wolfram Alpha. Few are using interactive theorem provers like Lean.

Haskell is a poor language to be doing mathematics in.

I’d say the majority of people working on GHC are software developers and CS researchers. They’re a friendly bunch.

What’s holding back tooling is that the developers of the de-facto compiler for Haskell are spread out amongst several organizations and there isn’t tens of millions of dollars funding their efforts. It’s mostly run by volunteers. And not the volume of “volunteers” you get on GCC or the like either.

That makes GHC and Haskell quite impressive in my books.

There are other factors of course but the tooling is improving bit by bit.

The whole “Haskell is for research” meme needs to go into the dustbin.


I think he counts academic computer scientists more as mathematicians than developers.


C++ and Java wouldn't be where they are without CS researchers, so I think the point still stands. These people aren't doing useless work. They're the ones making sure Concepts won't break the entire ecosystem and that have made the JVM the beast it is.

Grit and determination will get you far but like it or not there is a ton of work that requires knowledge of mathematics and theory that you can't avoid if you want to make good things.

What's kind of neat about Haskell is how closely researchers can work with users and collaborate on solutions.

Remember, the GHC team is pretty small. Their IRC channel isn't huge. Releases still get made fairly regularly and GHC is running one of the most advanced industrial strength programming languages out there with a large ecosystem.


> C++ and Java wouldn't be where they are without CS researchers, so I think the point still stands

I'm not sure how you gleaned that they aren't important/haven't made important contributions/aren't doing useful work from that comment. Tsoding specifically said that they don't make good tooling, but make great languages. It's not memeing about "Haskell for research," it's talking about things it needs to improve (potentially to break free of that meme).

For what it's worth, C++ tooling also sucks. We've just layered tons of kludges on top of it that make the ecosystem somewhat bearable. It's not ideal; contributing to a project for the first time usually requires some troubleshooting to get a working build.


> Tsoding specifically said that they don't make good tooling

And I disagree.

Mathematicians aren't building Haskell.

CS Researchers aren't noodling around either. I know a few of the people working on GHC who are researchers and are doing the hard work of improving error messages and reporting because they care deeply about tooling. They are also users after all!

> For what it's worth, C++ tooling also sucks.

Yeah, so does Haskell's. I think it's just an unfortunate fact of life that nothing's going to be perfect.

My point is that the reason for Haskell's situation is less to do with researchers and more to do with funding and organization.

To that effect, the Haskell Foundation is relatively new and gaining steam. It might change. But it's nowhere near the funding levels that get poured into TypeScript and C# or even Java, gcc, etc.

Update: Put it this way, the set of people building Python packaging is probably mostly developers and very few, if any, researchers. The tooling isn't great either. I don't think researchers make bad tooling and programmers make good tooling. I think programmers make stinking bad tooling all the darn time. Programming is hard. Tooling is hard. And programmers are a fickle bunch that are really hard to please.


He's talking about Programming Language Theory, which includes various type theories and such. It's where math and logic meet programming languages. In other words, the math of analyzing programing languages, not using programing languages to implement applied math.


The URL's domain on your profile seems to be expired and hosts some random stuff.


You gave up using a programming language after a day? And Haskell after installing/building some dependencies for 20 mins? Tbh, this sounds like you were not really trying. What kind of experience with a programming language do you expect to have after a mere day? Learning takes time. Anyone might spew some non-idiomatic code within a day, but really becoming proficient usually takes longer.

Do you have any references for the "Rust is heavily influenced by FP" thing? To me it does not feel all that FP. I have (for now) given up writing FP-like code in Rust. ML influence -- yeah, maybe, if I squint a bit.


> What kind of experience with a programming language do you expect to have after a mere day? Learning takes time.

It does, which is why you need tooling to get out of the way and let you actually learn. Working out obscure tooling commands to build a hello world app then having to grok the error messages absolutely destroys the learning loop.

For Rust it took me approximately 3 minutes from scratch to install, bootstrap a project and run the hello world CLI. The rest of the day was spent purely, 100% learning Rust. Not Cargo.

20 minutes to install some beginner level dependencies, presumably with little feedback as to what is going on? Dead.


You shouldn't need to install any dependencies beyond `base` for hello world in Haskell:

    module Main where

    main :: IO ()
    main = putStrLn "Hello, world!"

If you mean installing Dhall's dependencies (https://github.com/dhall-lang/dhall-haskell/blob/master/dhal...), those aren't too crazy, but they're definitely not all "beginner level". Template Haskell in particular is quite heavyweight.


As I see it, this is a completely legit beginner-level perspective. When I first touched a programming language (C, Borland compiler) I was able to run a first program within minutes. I completely understand the frustration with development environments that frustrate an aspiring user unnecessarily - I had this experience with F#, VS Code and Ionide, which is a very ugly case. Haskell was not that bad, but it certainly is not for the faint of heart.


> You gave up using a programming language after a day? And Haskell after installing/building some dependencies for 20mins...

I'd do the same. It's 2022. There are so many options without this friction, why would you fight your way through it?

If either language had some magic power or library, that'd be one thing, but their only selling point is the FP paradigm, which is only arguably somewhat better than what other languages do. Not only that, but most other languages let you do FP to varying degrees anyway.


Haskell's promise is not only FP. That's part of it though, of course, to have an ecosystem which encourages you to continue in an FP style. Haskell's promise is also strong type safety and, as a distinguisher from many languages, laziness by default. Aside from that, its implementation is quite performant, if one needs to worry about such things.
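
To illustrate the laziness concretely (a tiny, made-up example): an infinite list is fine as long as you only demand a finite prefix of it, which separates what to compute from how much of it to compute:

    -- 'filter even [1 ..]' describes an infinite list of even numbers;
    -- only the five elements that 'take' demands are ever computed
    main :: IO ()
    main = print (take 5 (filter even [1 ..]))  -- prints [2,4,6,8,10]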

I will take a 20 min build process for dependencies (probably a few commands, which hopefully are documented in the project's readme, and probably run only once for most of the lifetime of a project on your personal machine) over a language that is quickly up and running but breaks any number of basic principles (looking at JS, for example) and lets me shoot myself in the foot. Some languages, and the lessons we take from learning them, are worth some initial effort. Of course it is not great that things are not as easy as they maybe could be, but if the language has other perks making up for that, it might still be worthwhile.


> but if the language has other perks making up for that, it might still be worthwhile.

Totally agree, I just think for the vast majority of developers, the trade-off here isn't worthwhile. This is reflected in Haskell's stagnant growth despite its model being better for a lot of things.


Sure -- people have the right not to try things, which is basically what messing around for one day with a language in a new paradigm amounts to. That's fine, although it should also be expected that their opinions on things they haven't tried won't be given much weight.


Day one experience matters a lot to most people. For better or worse, your community won't grow if newcomers have to spend a day hating things first.

It won't matter if you're looking for a language to make your baby for the next 5+ years, but most people are trying to solve small problems on an incremental basis.


> Do you have any references for the "Rust is heavily influenced by FP" thing? To me it does not feel that much FP.

The original implementation of Rust was in an ML dialect (I think OCaml?), so from that we know immediately that the original authors were familiar with FP and used it for their own purposes. It seems odd, then, to assume that there would be no influence of their own language.

But if we look at the actual feature set, we find a lot of things that previously belonged almost entirely to the realm of FP. The type system as a whole has a fairly FP feel, plus algebraic data types, exhaustive structural pattern matching, maps and folds, anonymous functions, the use of the unit value for side-effecting work, the functioning of the semicolon operator (which is identical to the OCaml usage)... there's quite a lot, and those were just the examples off the top of my head!


Yep, though it's got type-classes specifically from Haskell.


(It was OCaml, yes)


My job/hobby is to produce code. It is not fighting a toolchain. It does not bode well for long-term efficient use of my [limited] time if I'm spending a whole day fighting toolchains from the outset.

User experience matters, and developers are ultimately users.


IME learning the (usually shitty) tools and the culture/ecosystem eat up way more time when starting with a new language than learning the language itself does.

I can absolutely understand abandoning such an effort if one's earliest interactions with the tools and/or ecosystem are very unpleasant.


Immutability, most things are expressions, no nulls. I think this is what they mean; it's a good experience if you want to go purely functional, though they took influences from everything.


other things require my attention and there are only so many hours in a day

either get those initial minutes right or lose me


I think FP suffers from what I call the "ideological gruel" problem. A lot of niche ideology-oriented communities with very strong opinions tend to ignore usability issues, not necessarily because of a theory/practice dichotomy, but because the highly ideologically excited community is so positively motivated by the ideology that it overlooks the problems. So even if you're eating gruel, if the gruel is produced by an ideology you identify strongly with, the identification alone is enough to make the gruel taste better than mere gruel. A lot of folks that use FP are willing to overlook the myriad of rough edges around tooling because they're so excited to work with FP that they often get used to the tooling. Rust made tooling UX an explicit focus of the project, which is why it was able to escape the "ideological gruel" curse.

Additionally, most prominent FP projects are old. Both Haskell and OCaml date from a time when UX expectations around language tooling were much lower (think C++). The inertia around the projects never cared much for UX anyway, so now in 2022, when languages like Rust and Go have raised the floor of expectations for PL tooling, Haskell and OCaml struggle to keep up.


That's interesting, I often find the tooling one of the best things that FP languages offer. In OCaml for instance I found Dune to be fantastic and extremely intuitive. Another very good experience I had was with Elixir and Hex. In Haskell I personally think that there are indeed quite a few things that could be improved around build system and packaging, but overall it's not really that bad once you learn the quirks.


It would do the OCaml community a great service if you wrote up a beginner's tutorial on working with Dune.


I felt the small section on dune here was enough for me.

https://ocaml.org/docs/up-and-running

Teaching anything beyond dune build or opam install feels out of place for a beginner tutorial.

However, there really should be more examples of “how to do X in dune”. It took me a bit to learn how to pin a git repo for local use and installation.


Tooling is why Go gets its foot in the door much quicker than other languages IMHO. A single binary with no dependencies that does pretty much everything.


Indeed. Also formatting decided at the language level (gofmt) and a standard library powerful enough to accomplish most business tasks, so you can avoid 3rd party dependencies entirely if you want, while still being productive.


I've seen go programs fail to start because they were dynamically linked to some .so file I happened to not have.


They mean the `go` binary, not Go programs in general.


Yeah ‘Fully static’ requires some extra linker flags.


As someone who likes both mathematics and programming, I find this comment and the article too divisive for divisiveness’ sake.

Programming is applied mathematics. An assignment is not 'sloppy' like the article claims; it is just another kind of operation that can be done.

A proof is very much like programming, except you are also the parser, the compiler, and the (comparatively slow) computer. Learning to write proofs helps immensely with learning how to program.

We should strive to make our proofs and programs easier to understand, no matter the paradigm.


I think F# has a good tooling story, since it's part of .NET and a first-class citizen in Visual Studio. It doesn't get as much love from Microsoft as C#, but it's still quite nice to use.


After learning a bunch of programming languages and their corresponding ecosystems (incl. Rust, Lisps, Scala), F# is still my favorite by a long shot. Its only serious shortcoming, imo, is the tooling.

Visual Studio is great, but if you're not on Windows, your only practical choices are VS Code + Ionide (I was a sponsor for a while; ultimately lost hope), or JetBrains Rider, which is powerful, but heavy.

Comparing my 10+ years focused on C# with ~5 years focused on F#, I was ultimately more productive in F#. But:

1. Tools for refactoring and code navigation were better for C#

2. Testing was more predictable with C#; I often just tested from the CLI with F# (so much love for Expecto, though)

3. Paket and dependency management between projects caused days of pain, especially during onboarding


Microsoft has come a long way in supporting .NET on non-Windows boxes, but I agree that it's still not as good as it could be.

C# certainly has better tooling, but C# has possibly the best tooling of any programming language on the planet, so that's a high bar to meet.


I used Visual Studio on macOS for some small F# projects and it worked fine. The only trouble I had was with some setup instructions being spotty, because there had been a shift in how things were done in the then-recent versions a few years back.


There’s also the JetBrains IDE that I’ve heard good things about compared to Ionide.


Yes, we ended up buying JetBrains Rider; I used it for a few years. I was much more productive in it than Ionide.

Code analysis was excellent, though it would sometimes get weird on multi-project repos. Unit testing (incl. Expecto tests) was a little flaky. Like many JetBrains products, it was a resource hog on large projects.

Probably the worst part of the Rider experience had more to do with other devs using different tools, i.e. Ionide or Visual Studio. (We were a "bring your own whatever" kind of startup.) Each IDE/toolchain had its own opinions about .fsproj files. A CI step to keep things "normal" would have been great, but there wasn't (isn't?) anything available, and we wouldn't spend the time building our own.

tl;dr - Rider better than Ionide; whole team should use it


Last time I used F# on Linux, the REPL was a mess and mostly unusable. Compilation took forever. You have to edit an fsproj rather than having modules inferred from the file system structure like in most modern languages.

It’s a great language— maybe my favorite, but the tooling stinks if you’re not using VS. I’m not switching to Windows, so that leaves me in limbo.


PSA: Visual Studio for Mac[0]. I know you're on Linux but for others.

[0] https://visualstudio.microsoft.com/vs/mac


And just for clarity, it's not proper Visual Studio, but rather an updated MonoDevelop. It was missing quite a lot of functionality (for Unity/C#) compared to VS, so I used Rider instead.


I didn't know Rider supported F#, thanks for the tip.


I'd argue Rider from JetBrains is better than VS for F#, though it is a subscription-based IDE (you can stop paying and keep the version you originally "bought", though).


The part about fsproj is due to module order being significant, which I don't think they can change without breaking existing code.


Honestly Haskell's tooling is really surprisingly good these days. It works fine cross-platform, and there is a single "blessed" path without too many options to spend time agonizing over.

Today the Haskell example is just `cabal install --only-dependencies && cabal build`.


The npm install experience should be the baseline for newer languages. Simply let me get into hacking fast. This is one of the top reasons I like tinkering with JS: it just works. (Yes, I know all the weaknesses of the JS ecosystem, but getting started is really easy.)


Please, no. Node tooling is such a mess, and I can almost never get anything running easily on Nix because Node developers download binaries from the internet without understanding the system. All of these executables fail because of linked libraries. If instead they told you what libraries and executables you'd need, instead of loosey-goosey installing garbage all over my system, more things might work.


Sure, npm is not a silver bullet. But I want the npm experience. The technical implementation can be improved, as you say, and I agree.


Then you should be a fan of Nix


I do agree.

I think .NET has got it right. And dotnet-script [https://github.com/dotnet-script/dotnet-script] has been a game-changer for me, with a REPL-like experience for unit testing and writing command-line utilities.


Tooling is not the only problem. Name one FP language that I can use for high-performance and systems programming. Is there any one except ATS?

And ATS is pretty hard (unlike C, C++ and Rust). I think it will take a while until linear & dependently typed languages hit the mainstream. Rust already succeeded in that regard, so it's a great stepping stone.


Most computers follow the von Neumann architecture. Any imperative language with no GC would do great because of the small number of abstractions needed to make a program run. AFAIK, C only requires setting up a stack and the registers.

When we build something with lambda calculus as its core, we might want to revise that opinion.


There’s some truth to this - imperative languages with state make sense because the underlying hardware is a series of imperative instructions and a large amount of state. What does lambda calculus hardware look like?



> it is heavily influenced by FP

Is it really? I agree with the rest of your post, that Rust provides great tooling, but not sure it's "heavily influenced by FP", at least that's not obvious even though I've been mainly writing Rust for the last year or so (together with Clojure).

I mean, go through the "book" again (https://doc.rust-lang.org/book/) and tell me those samples would give you the idea that Rust is a functional language. Even the first "real" example has you mutating a String. Referential transparency would be one of the main points of functional programming in my opinion, and Rust lacks that in most places.


Considering Rust pretty much started as a way to have an ML for systems programming and was written in OCaml, yes, I think it's fair to say it was heavily influenced by FP.

It became less and less ML-like as time went on, but it still has a ton of features it inherited from OCaml and Haskell: variant types, pattern matching, modules, traits that come directly from type classes, etc.


I think Rust is not particularly FP because it encourages using loops instead of recursion and “let mut” is quite idiomatic in my understanding. Those two characteristics are more relevant than the type system. For example, Scheme and Clojure don’t have type classes but are clearly FP because recursion and immutability are idiomatic.

In Rust, even though it is true that .map, .fold, .filter, and .zip exist, first of all they also exist in Python, and second, they need to be sandwiched between .iter and .collect unless one is already working with iterators, which makes the code noisier and pushes the needle toward loops.

The influence of OCaml and Haskell is clear though, and it makes the language more pleasant to use.


The claim was that Rust is "heavily influenced by FP"; I think that's clearly the case, while "Rust is FP" is probably not (which case you make pretty well).


FPLs yes, FP no.

We might argue it's influenced by FP indirectly because those adopted features from ML etc also jive well with functional programming, for example pattern matching as a control flow construct...


Eh, I think that's a distinction we could make, but it's not clear to me it's useful, and FPLs themselves are (definitionally?) shaped by FP.


It's true that it's arguable. But if we start calling the languages that adopt FPL-pioneered features "heavily influenced by FP", we end up putting a lot of languages in that set, like Java and Python, for having GC and closures. So in order to use that as a distinguishing feature, I think it's warranted to make the distinction.


I would phrase this as: adopted PL features pioneered by FP languages, but not in support of functional programming.

Like Java.

I think we come upon a phenomenon in the cultural treatment of FP that makes it like AI - over time, some of the stuff initially invented and used in FP languages becomes adopted in mainstream languages (e.g. closures, garbage collection, etc.), and it gets gradually detached from the FP association; mainstream programmers aren't even aware of the FP origins.

(The AI analogy being: particular approaches start out being called AI, and if they work out, end up being called just normal programming techniques when adopted in the mainstream - https://en.wikipedia.org/wiki/AI_effect).


It's not a FP language, but it's clearly heavily influenced by FP.

Mutability is a significant part of Rust, but it's much more sharply curtailed than any non-FP language I've ever seen. To be allowed to mutate something, you have to prove that nothing else holds a reference to it. That means that any code that doesn't use mutability itself can pretend that mutability doesn't exist.


The entire "idea" of Rust is the ability to achieve memory safety by mutation XOR multiple references, which is a different way to achieve the benefits of referential transparency/immutable data structures without the loss of performance.

Rust is not a "functional language" in that sense, but that was not the claim made, which is that Rust is heavily influenced by FP. This is most clearly seen in the trait system (typeclasses) and iterator patterns.

Influence doesn't mean you're doing exactly the same thing. If I make a rock band influenced by classical music, that doesn't mean I'm doing classical music, but I'm still very obviously influenced by it.


Most (all?) dependency management systems are single threaded and download thousands of tiny files one… at… a… time…

I have gigabit internet and I’m lucky if some package manager can get more than a couple of megabits of throughput.

Most industries would never accept less than 0.5% efficiency, but apparently software developers’ time is just too expensive to ever be “wasted” on frivolous tasks like optimisation.

I kid, I kid. The real problem is that the guy developing the package manager tool has the package host server right next to him. Either the same building or even a dev instance on his own laptop. Zero latency magically makes even crappy serial code run acceptably well.

“I can’t reproduce this issue. Ticket closed, won’t fix.”


I'm really happy with Meson, as a lot of Wayland (the new display protocol for Linux, the "successor" of X11) apps seem to be built in C, so using meson is super simple and I don't have to worry about tooling (I don't deal much with C/C++, so let me make my change and run away please).

Rust is the same; you can even define a nightly version if you want, so even the correct version is run with rustup. It's fantastic, and I can contribute much more easily to projects without worrying about tooling.


> I don't deal much with C/C++

that is because there is no such thing.


the slash symbol is often used to denote multiple entities. For example, "I don't deal with foo / bar" usually means that the commenter doesn't deal with either foo or bar. The commenter is not claiming that foo and bar together constitute a single entity.


C/C++ is just 1 in the limit, right?


two things wrong with your statement:

- the ++ operator only acts on integer types, not floats or doubles, so there is no limit to speak of here

- the expression "C++" has value equal to C before incrementing, hence the expression "C/C++" is just one for positive C, even when C is small


Hey, wait a minute, ++ is defined for floats.

It might be a bad idea to use it in many cases (since there are values for which the result is just rounded back to the original value), but it works!


So, luckily my mistakes cancel out -- C/C++ = 1 always, so it must be 1 in the limit too. Once we figure out how to define the limit.


> the ++ operator only acts on integer types

no, I believe it works on pointer types and enums as well


It is also defined for floats.

Using it seems like a bad move though -- for large values it can round back to the input value.

Indeed, the following stupid test program works, although it may heat up your laptop slightly.

    #include <stdio.h>

    int main(void)
    {
      float C   = 1.0f;
      float Cin = 0.0f;
      int i=0;

      while (Cin != C)
      {
        i++;
        Cin = C;
        C++;
      }
      printf("%i %e\n", i, C/C++);
    }
And, it finally lets us confirm what the mathematicians never could. When does the limit happen? 16777216. No further questions.


The only problem I had with tooling using F# was getting my dev env set up. The only help I needed from the editor was navigating between classes/functions and build/run.

Even refactoring was easier because the types are sometimes left to be inferred and not named everywhere. The type inference in F# being weaker also helps with both compile speed and readability, since annotations are needed both to help the compiler and the reader.

Perhaps on larger projects other things become important, but I got the sense that it's on the devs to name things well, use type annotations where helpful, and otherwise document non-obvious aspects.


There are plenty of non-academic languages with tooling issues as well. I think the larger issue is simply if the language has significant usage in a large corporation who can sponsor tooling development, or not.

Most tooling issues are pretty minor for small apps, it’s once one employs scores of developers that the lack of tooling begins to hurt (and by hurt, I mean cost money).


> I seriously couldn't figure out the idiomatic Ocaml workflow/developer inner loop after more than a day of struggling.

I tend to agree, but compiling C++ isn't just about typing "make". And it did take me more than one day to figure out the Python/JS workflow.


> Functional programming won't succeed until the tooling problem is fixed.

I think different people have different wants and needs with tooling. I make (and use) binaries with Haskell. I wish more mainstream languages could make binaries.


I don't know any purely functional languages [0], so my viewpoint is skewed here. Whenever I have seen a Python developer drink the functional kool-aid, they end up storing state either in global variables, environment variables, or in a dictionary they end up passing to every function. I then have to explain to them that passing a dictionary around and expecting variables to be in it is just half-assed OOP.

My rule of thumb is anything that needs state should be in a class, and anything that can be run without state or side effects should be a function. It is even good to have functions that use your classes, as long as there is a way to write them with a reasonable set of arguments. This can let you use those functions to do specific tasks in a functional way, while still internally organizing things reasonably for the problem at hand.

The minute people start trying to force things to be all functional or all OOP, then you know they've lost the plot.

[0] I have been wanting to learn Lisp for over a decade; I just never get around to it.


I'm mathematically trained, so in the beginning I really liked writing Python programs in a pure functional paradigm, until I ran into this exact issue.

But adopting OOP doesn't mean one has to give up on the pure functional paradigm; there's a book which is basically about how you can incorporate more of the functional paradigm into OOP.

One way to look at it is (perhaps trivially) that methods are just functions, and a class is just something with properties. So in a sense a class defines a type, where the methods/functions expect this type.

In Python I find that properties (and the related cached properties), dataclasses, etc. can really make the above more apparent in construction. To give just one example, having a property that returns a pure function acts practically the same as a method (though not in docs, unfortunately, and related auto-completion kinds of stuff).

Yet another way of looking at this is to treat a Python class as single dispatch on the first argument only.

I find thinking this way enables me to reap the benefits of both OOP and the pure functional paradigm.


I strongly agree. Rule one: avoid and simplify state (e.g. recalculate data if it's cheap instead of updating it when any of its inputs change). Rule two: if you need state, keep it coherent by tightly managing it (go through functions that maintain the invariants) and exposing as little as possible. That is where classes come in.


Pure functional programming is orthogonal to that. In other words: you can have your class that contains a maximally simplified state, no problem. Extending this to be purely functional means that, in addition to everything else that you said, the calls to the class that manages the state are now considered as needing "special treatment", in the sense that you can't merely call them; you also have to explain what should happen if there are multiple calls. I.e. the order of execution no longer depends on the order of lines of code, but is defined through the means/syntax that the programming language gives you.


Could you ELI5 or perhaps give an example? I’m not sure I understand.


It's not easy but I'll try:

    counter = new Counter

    currentValue = counter.value
    newValue = currentValue + 5
    counter.set(newValue)
In most programming languages, each of those lines is executed sequentially. Therefore we are used to it.

If this is just a script then it's simple, and pure functional programming (PFP) has no benefits here. The reason is that the order of calls is always the same (it's the same as the order of lines).

Things change when the order of calls is not static anymore but becomes dynamic. Think about a webserver. Or any system that receives calls from the outside - or has something "running" like a cron job.

In that case, you can't just look at the lines of code to understand how the program operates. You now have to simulate not only the state, but also the access/change to the state (including external state).

Here PFP comes in, making those things explicit and therefore decoupling them from the order of lines of code.

In the example of a simple script, this is just annoying, because we now have to be explicit even though we know everything should be ordered as the lines of code:

    counter = new Counter

    currentValue = counter.value
    newValue = currentValue + 5 // does not compile, because currentValue now is an "effect"
    counter.set(newValue)
currentValue is now an action/effect that might be run at some point or maybe not. Therefore we have to rewrite it:

    counter = new Counter

    currentValue = counter.value
    newValue = currentValue.onceItHappenedModify(value -> value + 5)
    // newValue is now also an effect
    updateCounter = newValue.onceItHappenedExecuteOneMore(value -> counter.set(value))
    updateCounter.execute()
   
In the end we have to execute the "updateCounter" effect, because until this point it is just a data structure. A blueprint for an execution, if you will. However, in PFP we don't actually execute it - that's the whole trick! We just pass the blueprint around and it gets bigger and bigger, until the point where we return it as a data structure to the main method. And then, the programming language's runtime executes it!

If you find that complicated, you are right. That's why PFP only works in languages that support this concept and make it ergonomic. I often use languages that don't (e.g. TypeScript) and there, I don't use this technique because it has more drawbacks than benefits.

Anyways, it becomes more interesting once things happen in parallel/concurrently and from different points in the application. The reason is that when you work with those blueprints, you are forced to explicitly combine/merge effects.

You can, for instance, do this:

    fireRockets = fireRocketsEffect()
    activateLasers = activateLasersEffect()
Nothing has happened so far. We only created two blueprints. In other languages, things would be running already, but not here. We now explicitly have to decide how to run those:

    fireRockets.onceItHappenedExecuteOneMore(activateLasers)
or

    activateLasers.onceItHappenedExecuteOneMore(fireRockets)
or

    activateLasers.executeAtTheSameTimeAs(fireRockets)

And so on. As you can imagine, you quickly end up with combinators for e.g. running a list of effects in parallel, or sequentially, or in parallel but with at most X at a time, and so on.

I hope that explanation makes sense. I found it hard to grasp without actually building something myself.
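
If it helps, here is roughly the counter example in real Haskell (a sketch using IORef from base; the names are made up): an IO value is exactly the "blueprint" described above, and nothing runs until the runtime executes the one blueprint returned from main.

    import Data.IORef

    main :: IO ()
    main = do
      counter <- newIORef (0 :: Int)
      -- 'bump' is only a description of an effect; this line runs nothing
      let bump = modifyIORef' counter (+ 5)
      -- combinators decide how blueprints execute: here, twice in sequence
      sequence_ [bump, bump]
      readIORef counter >>= print  -- prints 10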


Imo functional programming is one of those things that makes sense from a theoretical perspective, but comes with compromises when it comes to reality.

The thing about functional programming is that the confidence you get from immutability comes at the cost of increased memory usage thanks to data duplication. It's probably going to create a ceiling in terms of the absolute performance which can be reached.

There are just other ways to solve problems like NPEs and memory safety, like ADTs and ownership rules, which don't come at the same costs as FP.


This is actually an area where there’s room to improve FP languages.

If you track ownership in functional languages, you can statically determine if a value is used more than once.

If it’s only used once, you can apply any updates to that value in place without allocating more memory.

This gives the performance benefits of mutability with the safety benefits of immutability, in some common cases.

The main trick is adjusting memory layouts accordingly. You can keep it simple by only applying this optimisation for functions of A -> A, or if you’re replacing it with a different type you can analyze the transformations applied to a value and pre-emptively expand the memory layout.

If a value is likely to be used only once, but might be used multiple times, you can also apply the same approach at runtime by reference counting and updating inplace when there’s only a single reference (for functions of A -> A at least).

I believe the Roc folks are aiming to have aspects of this functionality, and I also believe there’s similar stuff becoming available in Haskell under the guise of linear types.

Finally, if you really need a shared mutable value, that can be achieved with mechanisms like the State type in Haskell.

In short, the pieces are there to create a functional programming language that doesn’t introduce needless memory usage overhead, but I don’t think anyone has put all the pieces together in a convenient and accessible way yet.
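
To illustrate the State mechanism mentioned above, a minimal sketch (assuming the mtl package; 'tick' is a made-up name): the "mutation" is really just a value threaded through pure functions.

    import Control.Monad.State

    -- return the current count, then store an incremented one
    tick :: State Int Int
    tick = do
      n <- get
      put (n + 1)
      return n

    main :: IO ()
    main = print (runState (tick >> tick >> tick) 0)  -- prints (2,3)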


I get that not everyone does, but a large part of why I use Clojure is because it makes a whole class of concurrent designs easier. In particular, sharing that immutable data across multiple threads.

As a simplified example: one thread modifies the data, and another thread writes a snapshot of the data.

In the programs I write, it would pretty much never benefit from this optimization.


This article tries to push FP as a solution to NPEs??? What the?! NPEs are a problem due to dodgy type systems... it's a type system problem, not a language paradigm one, i.e. it has no relation to whether a language is functional or not.

For example, Dart 2 is null-safe! No one in their right mind would claim Dart is an FP language. Even Java can be null-safe if you use a javac plugin like the Checker Framework.

Also, a language can totally be functional and yet suffer from NPE, like Clojure or Common Lisp, but I suppose the author may be forgiven here because they are talking only about "purely functional programming languages"... (they didn't mention "statically typed" though, but that's clearly implied in the content)...

I believe the author is inadvertently pushing for two things that are pretty much unrelated to FP, even if they are a requirement in most purely-functional languages:

* immutability

* strong, static type systems

I would mostly agree with both (except that local mutability is fine and desired, as anyone trying to implement a proper quicksort will tell you - also, see the Roc language[1], which is purely functional but uses local mutability), but I write my Java/Kotlin/Dart just like that and I wouldn't consider that code purely functional. I don't think whether the code is purely functional actually matters much at all compared to these 2 properties, which can be used in any language regardless of their main paradigm.
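
Even Haskell supports this pattern: the ST monad gives you real local mutation behind an observably pure interface. A minimal sketch (using a mutable reference for brevity; an in-place quicksort would use mutable arrays the same way):

    import Control.Monad.ST
    import Data.STRef

    -- pure from the outside, imperative inside: runST guarantees the
    -- local mutation cannot leak to callers
    sumST :: [Int] -> Int
    sumST xs = runST $ do
      acc <- newSTRef 0
      mapM_ (\x -> modifySTRef' acc (+ x)) xs
      readSTRef acc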

[1] https://www.roc-lang.org/


If the first thing you talk about is performance and not quality and maintainability, then you're already missing the point. Most software just isn't in some super high-perf environment - what matters is fewer bugs, easier maintainability, better communication with other engineers (through declarative code).

The code we work on in the 2020s is much, much more complex than code written 20 years ago. We need better primitives to help our weak and feeble brains deal with this complexity. FP (particularly pure FP) gives us that. It isn't a panacea, but it's a major step in the right direction.


I just disagree that performance isn't a concern in almost every context. There's a hierarchy of concerns, to be sure, and if you haven't written reliable code which solves the problem yet you shouldn't be worried about performance, but if your PL itself imposes a performance tax, that's something which has to be paid every time your program gets executed, by every user.

As programmers, our job is not to play with abstractions, it's to move electrons and make hardware do things. We can't afford to abstract away the complexity of the hardware completely. Indeed the trends in PL popularity of the past 20 years have been to move back closer to the hardware, and away from highly abstracted environments like scripting languages and the JVM.


But Python/Ruby (and JS? Not sure how far they are with optimising that or what the comparison would be) are very slow compared to Haskell. So people are paying that price all the time without getting any of the benefits that Haskell (etc.) can offer next to it. I agree with the GP; performance is really not very interesting for most projects, and most programmers/companies (Python/JS are the top dev languages by far, I think) are agreeing with that by using low-performance environments that make them productive. So productivity seems to win out.

For the sake of the environment and hardware upgrades, I think we definitely should make an effort, and we can see that improvements in compilers and PL theory do help with this when the goal is practical programming languages using these techniques; Rust does this, while Haskell was meant as an academic language for a long time.

I think robustness/security should go first in the hierarchy of concerns anyway; that's where things are really breaking now.


Python and JS are top programming languages for very specific reasons. Python because:

1. It has a low barrier to entry for non-programmers, which makes it suited for applications like data science and ML.

2. There is vast library support for math and science, which makes it basically the only choice for those domains.

But Python is basically used as an API for highly optimized C libraries, since any kind of hot loop in Python is basically an anti-pattern unless you own stock in energy companies or hate getting results quickly.

JS has its place because until very recently it had a monopoly on web front-end development, which is one of the largest programming domains.

So for both of those examples, it's the use-case determining the PL, not the programmer.


Can you give some examples of increased code complexity over the last 20 years? I am blanking on that.

I have noticed a lot more ops complexity and additional library usage, but not complexity in the code I am responsible for.


Why do you think that functional programming results in data duplication? I would think it's rather the opposite.

With strong immutability like in Haskell, you can share values even between threads and can avoid defensive copying. Two versions of the same immutable tree-like data structure can also share part of their representation in memory.

(Haskell has other problems causing increased memory usage, but not related to data duplication in my mind)
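
A small sketch of that sharing, using Data.Map from the containers package: inserting into m1 produces a new map that reuses most of m1's internal tree instead of copying it.

    import qualified Data.Map.Strict as Map

    main :: IO ()
    main = do
      let m1 = Map.fromList [(1, "a"), (2, "b"), (3, "c")]
          -- m2 is a new map, but it shares most of m1's nodes internally
          m2 = Map.insert 4 "d" m1
      print m1  -- m1 is unchanged
      print m2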


> Why do you think that functional programming results in data duplication? I would think it's rather the opposite.

I suspect it has something to do with the perceptions around always returning a new thing from a function rather than the mutated input to the function. For example, if you need to mutate the property of an object based on other inputs, the perception is you would clone that object, modify the property appropriately, and then return the cloned object.

[Edit: formatting]


So, ELI5: What do you do instead, where you return a modified object, but still have immutability? Or do you avoid the problem by not trying to do that?


In most situations, you use persistent data structures where you only have to copy the modified leaves.

If you really need your type to be backed by a contiguous block of memory, you batch updates (stencils, SIMD, etc.)


But if you modify the leaf, don't you also have to modify the branch node that points to the leaf, so that it points to the new one? And every node from there to the root?


Yes, you (or rather, core library developers) need to pick a data structure appropriate to your access patterns: linked lists for iteration, search trees for concatenation, finger trees for random access, and so on. But you should be doing this anyway for clarity, even if you face no performance constraints.


Not the poster you’re responding to but I think they’re referring to the current need to allocate more memory when updating immutable data structures.

Not that there aren’t ways to represent mutability in Haskell, just that the de facto use of immutability causes excess allocation.


Efficient functional programming often uses tree-like data structures. These can be immutable but still avoid duplication.

Consider if you "duplicate" a Data.Sequence Seq (finger tree) for modification. You're not actually copying the whole structure, you are creating a new root node and re-using as much as possible of the common structure.

The end result is that a bit more memory is used in the simplest case, but not due to duplication I think.

The benefit is that a thread can make a modified value at cheaper cost without affecting another thread that is still using the original value. I also think it's easier for the programmer to understand the code.


Won't this still result in a lot of fragmentation? I.e. won't you have disjoint allocations for those new branches of the tree? Sounds pretty cache-unfriendly.


In a strict language or with poorly optimized lazy code, yes. If you can get good stream fusion, not really. If your code fuses really well (common for lists, harder elsewhere) the whole thing will happen on the stack.


That's duplicating part of the structure. That uses more memory than just modifying a value in-place, but less than duplicating the whole tree.


Sure, but let's assume that the program has more than one thread and that another thread could still be using the old value. In that case, an imperative program might be required to copy the whole structure or sleep until the existing users are done, which is often less efficient and is always more complicated.

If it's ok to support only a single concurrent user of the value, then a mutable structure is indeed more efficient. Even in Haskell we have mutable structures that can be used for such purposes.

The interesting question to me is, what should be the default? I think there is a good argument that it should be the safer, simpler immutable structures.


"Simpler". If you're only single-threaded, the mutable structures are simpler.

If you're multi-threaded, you have to choose: Do I go with immutable, which means that the other thread gets something safe? It also means that the other thread gets something stale (out of date), which can have its own consequences.

Or do I use a lock or mutex, which means that I have to place guards in all the right places, which is error-prone and can break in weird ways if I miss one?

Or do I use mutable data without guards, and just let it crash? (Because it's probably going to crash if I do that...)


> Imo functional programming is one of those things that makes sense from a theoretical perspective, but comes with compromises when it comes to reality.

Ironically, what you say is true in theory, but not true in the real world.

Source: real world Senior Haskell programmer


My problem with functional programming is refactoring. Without side effects, when you want to do something deep in the call tree to something that lives in a completely different call tree branch, you have to extract it all the way up to the common parent and pass it through all the intermediates, just so that one function deep down can access it.

It's incredibly frustrating when you work in a functional language, and yet it's the main benefit (no side effects).

I'd like to have a language that is imperative when written and functional when read :)


> I'd like to have a language that is imperative when written and functional when read :)

Depending on the language, monads give you exactly that bridge between imperative and functional.

In your example, you can always choose to have side effects in that deep call.

Personally, I like a type-driven approach. Then, I do not care so much where functions are (they will be grouped logically, but could be anywhere), as long as the type in and the type out matches.


I've handled this (in Clojure) either by passing a context map through the entire stack or binding some context atom at the top and using it lower down the stack. The binding is less obvious at first glance, so I prefer to pass the context, but both make testing quite easy and reduce the need for heavy refactoring.


I like that aspect because it gives you a hint that your call tree is not how it should be. Lifting out side-effects and passing the data back in is one approach, but not the only one.

Turning a deep call tree into a flat pipeline is another popular approach and often leads to less complexity and better composability.


I like it when I'm done. I very much dislike it when I'm not sure yet what my code should do so it changes constantly. And that's most of the time in gamedev for example.


Immutability does not even have to be bound to functional programming. One can use persistent (= immutable from the user's perspective) data structures, like immutable.js and get most of the benefits.

Moreover, persistent data structures can be optimized well (see Clojure), so that the performance issues may be relegated to the inner loops, as usual, and the rest of the program may use safe-for-sharing data structures.


The extent to which immutability leads to duplication seems a matter of implementation rather than a principle.

The compiler/runtime could optimize such that memory is reused, as long as all other laws are obeyed.


To some degree. But it really is the case that a persistent functional data structure is going to have a slower insert operation than a traditional mutable set. There's no getting around that.


https://hackage.haskell.org/package/containers-0.6.5.1/docs/...

A log(n) slowdown to add at the end,

but the cost to add in the middle is cheaper than with a plain array (see insertAt).
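
For example (a tiny sketch using Data.Sequence from containers):

    import qualified Data.Sequence as Seq

    main :: IO ()
    main = do
      let s = Seq.fromList [1 .. 10 :: Int]
      -- inserts in the middle in O(log(min(i, n - i))), no full copy
      print (Seq.insertAt 5 99 s)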


The asymptotic behavior is not the entire story. Because persistent data structures almost necessarily need to have their data allocated non-contiguously they have terrible cache performance and prefetching/speculation behavior in comparison to data structures that take advantage of contiguous memory locations.

I also mentioned sets, not lists.


Not that this answers all your objections (it does not, caching might be a problem!)

But sets are also a mere log(n) away

https://hackage.haskell.org/package/containers-0.6.6/docs/Da...


The duplication problem is solved by immutable data structures with structural sharing. They are part of Clojure's standard library and exist in some other languages.


In my experience, if you are not dogmatic/extremist about it you can gain a lot, so I try to do things in a functional way in my non-functional languages.


Yeah I totally agree. I think writing code which is "functional where possible" is super powerful and offers a ton of advantages. Trying to cover that last 20% of cases where you have to bend over backwards to create a performant functional solution, to solve a problem which is trivial with a little bit of mutability, doesn't make sense.


The other problem with functional programming is it's harder to look at code and figure out the time complexity. At least with non-functional languages I can easily figure out why there are performance problems with it. Stuff like lazy evaluation may seem cool, but it's not when you have to figure out why something's slow.
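
A classic Haskell case of this (just a sketch; both folds are in the standard library): the lazy and strict left folds look interchangeable in the source, but one of them quietly builds millions of thunks.

    import Data.List (foldl')

    main :: IO ()
    main = do
      -- foldl (lazy) would build ten million unevaluated thunks here and can
      -- exhaust memory; foldl' forces the accumulator at each step and runs
      -- in constant space. The source code looks almost identical either way.
      print (foldl' (+) 0 [1 .. 10000000 :: Int])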


That's a Haskell issue, not a functional programming issue. Haskell is pretty much the only functional programming language that is lazy by default; it's the sole one for the reason you give. Every other one is eager by default, and you can opt in to lazy evaluation when needed.


In addition to sibling’s recommendations, consider OCaml too for a strict language that is vaguely similar to Haskell and performs similarly well in benchmarks:

https://cs3110.github.io/textbook/cover.html


This is lazy vs strict, not functional vs whatever. I'm a fan of functional programming, not so much a fan of lazy evaluation. Look at PureScript or Idris perhaps?


Data duplication is coming to all mainstream languages these days, whether as a community best practice or a requirement of some libraries (e.g. streams in Java).

I'm not saying that in-place operations don't have their uses, but to me these use cases look more and more like niche cases, as opposed to being the default choice as in the past. That is well reflected in new languages like Rust, where a new name is immutable by default and must specifically be flagged as mutable in order to be variable.

It means that the benefits of immutability and the performance tradeoffs people are willing to take are evolving. I would assume larger codebases and faster hardware means performance is less valuable, and clarity comes at a premium.

I do agree with you that NPEs and memory safety are unrelated problems, though.


Data duplication is a tool which can be used to avoid large classes of bugs, but there's no question it comes at a cost to performance.

If anything, I would say explicit mutability allows us to decrease data duplication. By being explicit about what's mutable and what's not we don't have to resort to holding "safe copies" to ensure shared memory is not being mutated unexpectedly.


I will certainly be downvoted for this, but I want to be honest so here it is anyway:

In all my programming years, 20+ years that is, I've met hundreds of programmers, and 95%+ of them handled imperative programming languages just fine, with very few actual bugs coming from each one.

Every time such a conversation comes up, I have yet to see any actual concrete proof that functional programming provides a substantial increase in productivity over well-implemented imperative code.

In other words, I am still not convinced about the merits of functional programming over imperative programming. I want some real proof, not anecdotal evidence of the type "we had a mess of code in C++, then we switched to Haskell and our code improved 150%".

Lots and lots of pieces of code that work flawlessly (or almost flawlessly) have been written in plain C, including the operating systems that power most of Earth (i.e. Unix-like operating systems, Windows, etc.).

So please allow me to be skeptical about the actual amount of advancement functional programming can offer. I just don't see it.


I think personal experience with multiple paradigms is essential for a developer to decide which one they prefer.

The opinions of programmers who have only used one paradigm are less than worthless, since they are demonstrating a basic lack of curiosity and lack of willingness to invest in their craft.

You can always find excuses not to learn.


I agree fully.

My viewpoint, as a functional programmer, is software is a young field.

We don't know much of anything about it.

We're living in the first 100 years after the Gutenberg printing press.

All I know is NLP code generation will be a dramatic change.

Everything else, we are still figuring out.


I immediately distrust any article that makes sweeping claims about one-paradigm-to-rule-them-all.

The reason why multiple paradigms exist is because here in the real world, the competing issues and constraints are never equal, and never the same.

A big part of engineering is navigating all of the offerings, examining their trade-offs, and figuring out which ones fit best to the system being built in terms of constraints, requirements, interfaces, maintenance, expansion, manpower, etc. You won't get a very optimal solution by sticking to one paradigm at the expense of others.

One of the big reasons why FP languages have so little penetration is because the advocacy usually feels like someone trying to talk you into a religion. (The other major impediment is gatekeeping)


This is just an appeal to the law of averages. I don't believe that you're actually considering ada, cobol and forth for new projects.

> One of the big reasons why FP languages have so little penetration is because the advocacy usually feels like someone trying to talk you into a religion.

This is never really a reason for anything, it's a personal attack on people who advocate the thing that you don't want to do. FP people are not bullying you, or shoving anything down your throat.


To be charitable to kstenerud, FP advocacy does often sound like proselytizing. When promoting FP to folks, I think we can mitigate that issue by communicating a couple points:

1. There's no dichotomy between FP and OOP. Sprinkle it into places where it makes sense to you. Adoption can come by degrees.

2. Just thinking in composable, pure functions on projects buys you serious mileage (see the sketch just below). You don't have to wrangle monads (or worse, victimize team members with advanced FP ideas). Just KISS.
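
For instance, a tiny hedged sketch (a hypothetical function, nothing project-specific): two small pure steps compose into a pipeline that is trivially testable in isolation.

    import Data.Char (isSpace, toLower)

    -- A pure "slug" function built by composing smaller pure steps;
    -- each piece can be unit-tested on its own, with no mocks or setup.
    slug :: String -> String
    slug = map dashify . map toLower
      where dashify c = if isSpace c then '-' else c

    main :: IO ()
    main = putStrLn (slug "Functional Programming Rocks")  -- functional-programming-rocks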

FP often feels to non-practitioners like a pretentious and byzantine fad. We'll have better luck with widespread adoption if we can be inviting and dispel those negative associations. We'll seem less zealous if we can frame FP as a practice to dip one's toes into, rather than a religion to be submerged & baptized into.


I think a lot of the "religious" feeling I get from FP advocates is the constant focus on purity and the "No, you AREN'T DOING IT RIGHT!!!" kinds of tantrums. You HAVE to understand the nuance. You HAVE to participate in these strange little paradigms that esoteric thinkers have come up with. Ultimately, I think of FP languages like Haskell as playgrounds to pilot bleeding-edge ideas before the best ideas get incorporated into languages that are widely used in production.

Ultimately, some real-world processes are just procedural, and procedural languages are easier to apply to them. When it comes to callbacks, mapping functions to a list, etc., it's pretty clear to me that FP paradigms are superior.

In other words, I think it's part fad/religion, but as with anything, you can learn from it and take the good parts.


Most languages have already taken the good parts. We barely even consider them a "FP thing" anymore. Callbacks and map() are just things you can do in most languages.

What remains when people advocate for more FP is the more esoteric stuff. (And non-nullability, which inexplicably gets associated with FP surprisingly often despite being orthogonal to everything else in the paradigm.)


Problem is, it's hard to distinguish the chaff from the wheat. A lot of practices have advertised themselves as "the way of the future", and most of them faded into nothingness.

At some point people try to save time and stick with practices that stand the test of time.


> Just thinking in composable, pure functions on projects buys you serious mileage

This is exactly where I was called a zealot and gave up, because it was obvious the other person felt attacked, as if I had told them "you are an idiot for using procedural and imperative style all your career" -- which I absolutely didn't say or even imply.

Ultimately, selling someone on FP is selling them a new mental model, and I think a lot of us underestimate how hard that is. People by default don't want to change the way they think; it's a fact of life.


> FP people are not bullying you, or shoving anything down your throat.

Have you ever worked with an FP evangelist? Every single experience I've had has been with a person who simply won't take "no" for an answer. The impatience, ego, and pettiness are bar none, really.

Aside from that, there are objective reasons to be skeptical: it's inefficient when it comes down to actual implementations that have to work on actual computers and solve actual problems ("but but tail recursion and copy optimizations lolol you moron you just don't understand it"). Also, most FP tends to become much more complex than an equivalent imperative program solving the same problem, usually through a terrible confluence of code density and mazes of type definitions.


As someone who's left Scala and come back to dotnet land... depending on the domain, functional programming results in much cleaner solutions. A lot of the code messes in functional paradigms that I dealt with (and also created) were mostly due to people who didn't understand how to think functionally. It took a long time for me to figure it out. I find myself missing features all the time now, and spending a whole lot more time fixing bugs that simply wouldn't exist had people just applied more functional principles in their code. I understand where you're coming from with your frustrations, but I would personally attribute that to the LISP culture problem I also saw among the hardcore functional programmers. Everyone would just nerd-snipe themselves into re-inventing the wheel.

Also, my dotnet contemporaries on average do not care as much about code quality as people did when I was in scala land. But this might just be a company culture thing.


This rings true - experiencing different paradigms opens you up to more elegant solutions. That's why years later I now see the value of the "Programming languages" course I had in university - every lecture covered a new language and had homework to implement a program in it. I haven't touched LISP or Prolog since, but just being exposed to them made me write better code in "traditional" languages.


Agreed. I took a year (long ago) to learn lisp just to be exposed to a different way. And it has paid dividends in terms of providing alternate ways of thinking about code in other languages.

That said, I'd never want to use lisp in a real production project. Same for FP. Get exposed to these things, take the learnings and bring them into KISS boring production code that is fast and maintainable.


.NET, and C# especially, are absolutely fantastic tools for programming. The problem is just that they are embraced by large companies, and that is where crappy engineers tend to congregate, in my experience. The bigger the company, the easier it is to get lost in the crowd.

I enjoy functional programming, but I'm a wizard with C# and .net, and I can pull a much higher salary continuing to focus there. I can write my own fun stuff in Haskell for myself and will continue to enjoy corporate money.


It's also laziness and controversy. Unit testing is aggressively pushed in many of these places, but then they continue to litter side effects, long flows, loads of complex objects, void methods, and more, all of which inherently make code more difficult to test. FP guidelines help in a lot of these cases, and you don't need to understand monoids, monads or any of that to grasp the concept of simple functions with clear output.

Meanwhile, FP is kind of a pain to connect things in, for most people.


Before learning FP I used to ask myself: "Assuming that by tomorrow my memory gets erased and my IQ diminishes by 50 points, will I understand this code?".

That's why I try to juggle as few variables as possible at the same time, while trying to have my code read like a cooking recipe or very simple literature. From there, FP feels almost natural.

Even more so, I think that most software bugs and spaghetti code are made by people who were too confident in themselves. They thought they could handle a lot of random unrelated variables floating around all the time, while having their functions run a lot of unrelated algorithms at the same time.

And yes, they could handle it. The first time. But, in the long run, most people forget, or they leave and another person ends up having to maintain it, and that super brief-and-clever code became an unmaintainable mess.


I'm trying to push unit testing, as a means to get rid of some of those negatives you pointed out. But it's not a panacea. And obviously, I don't want to test for the sake of it either.


I'll be honest, most of the unit tests and integration tests I write are mostly because it's more fun than cranking out "features"


... there are lots of optimizations available when you have immutable types and no assignment operator if you're curious.

Pure, immutable data structures might not be the right choice if you don't have the right language/tooling/runtime to take advantage of the benefits but they're there.


I have spent the time with Haskell to learn how to not just use it a bit, but program it idiomatically and with some fluidity. I think there is a very interesting point you can reach if you go 100% all the way in. There are some types of programs that you can write, like compilers, that are kinda painful and dangerous and quirky in imperative or OO programming and are just beautiful in full on, no compromises functional programming.

I am waaaaay less impressed by the half-assed "I borrowed a few paradigms and bashed them into an OO or multi-paradigm language but it's still fundamentally imperative" approach.

Reconceptualizing your entire program as consisting of recursion schemes and operations that use those recursion schemes, which I think is the deep, true essence of functional programming as a paradigm, is very interesting and has a lot of benefits hard to obtain any other way, along with some costs you might never otherwise think about if your mindset is too deeply in one particular world.

Rewriting a single for loop with five maps, reduces, or filters is nothing. It's a parlor trick. It buys you nothing of consequence. For avoiding bugs I almost never have, this requires a lot of extra syntax and costs huge runtime performance unless the compiler is smart enough to optimize it away.

That doesn't mean it's bad, per se. I use maps and filters as appropriate. In some languages, they're syntactically convenient, and if I'm mapping or filtering with something that is already a function anyhow, the performance issue is moot. I'm not saying this is bad and you should never use those things.

What I am saying is that is completely uninteresting as a "paradigm shift". Writing architecturally imperative programs with a light sprinkling of map and filter is nothing. Come back to me when your program is no longer architecturally imperative, and the top level of your program is now a two-line expression of a chain of mapMs and <*>s that represents your entire program. Now you have something fundamentally different.
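
To make that slightly more concrete, a toy on the same theme (nothing more than a sketch): the entire program is a single composed expression, with mapM_ doing the sequencing.

    import Control.Monad ((>=>))
    import System.Environment (getArgs)

    -- The whole program: print the contents of every file named on the
    -- command line, expressed as one chained expression rather than a loop.
    main :: IO ()
    main = getArgs >>= mapM_ (readFile >=> putStr)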

Frankly, I think those who bang on about how vital it is to replace for loops with maps and filter chains have actually managed to miss the entire interesting point about FP. One could perhaps blame FP advocates for not explaining it very well with some justification. But it still is what it is. These people are banging on about how square a brick is, and how hard it is, and how nicely you can use one to bash things open, and how nicely you can polish up a brick if you want, but the point of bricks is that you can put them together to build walls. Sticking a couple of bricks in the middle of a mud hut leaves you with a weird looking mud hut.

The point of functional programming isn't map and filter; it is taking the class of things for which those are only two easy examples (and not even necessarily the most useful in general), and then building your entire program out of that class of things. You end up with something very different if you do that.

The linked article also bangs on about not having null references. Nothing stops you from having an imperative programming language that lacks null references. It is not a good or interesting example of why FP is interesting.

Anyways, upon re-reading, I lost a bit of focus on my original point, which is that you do need to watch out for which of the types of FP are being advocated. I have very different reactions between the two of "consider rewriting your fundamental view of the world to be composing recursion schemes instead of as the stringing together of imperative operations in various fancy ways" and "you shouldn't use for loops". You may not love the first; it is certainly a trip and not necessarily for everyone and everything, but it will at the very least expand your mind. The second is, well, mostly annoying to me because of the vigor of its advocates far out of proportion to either its programming utility or its mind expansion characteristics.


> The second is, well, mostly annoying to me because of the vigor of its advocates far out of proportion to either its programming utility or its mind expansion characteristics.

Thanks to closures, map/reduce function callbacks in Javascript are extremely handy.

Please link me to something like a stackoverflow answer with "vigor" greater than this, I'd love to read it.


If you want to see this for yourself, be it because you don't believe it or out of honest intellectual curiosity, find a forum (StackOverflow isn't actually the best since it's not really a "forum") and express skepticism about the utility of map and filter, suggesting that a for loop works just as well. Be polite, but be firm and don't back down.

You will be called names. You will be insulted and told that you just don't understand functional programming, probably because you're dumb.

I know this, because I had to deliberately cultivate a writing style that learned how to avoid this (which is why I start out waving around the fact that I don't just know a bit of Haskell, but actually know Haskell; I would not normally make a deal out of that, for instance). I wouldn't have had to deliberately learn how to write around this if it doesn't exist.

Now, the tradeoff I get is people who don't believe such advocacy exists, because it is no longer appearing in the replies to my posts like it used to. A worthy tradeoff even so, I think.


If you were to write a post about taking an example imperative program and showing the conversion to FP, I’d be very interested to read it. Or if you have a link handy to someone else’s post?

For me it's difficult to immediately understand what you mean about the top-level program, mapM, etc. I guess what you mean is that the entire program can be expressed as one call with some args? I feel like there is probably more nuance that I am not grasping.

I am curious to add this way of thinking to my toolbox, and I learn best by examples.

I have read things like Mostly adequate guide to FP and whatnot. They always get stuck on side effects and containers, which is fine, but doesn’t really address the larger scope of program design.


In the second part of the "Solving Problems the Clojure Way"[0] talk, the speaker shows a step-by-step transformation of heavily imperative (bookkeeping-ish) JavaScript code into a more functional approach.

As a result of the transformation, ~95% of the code is easily unit-testable, and the only impure code is the call to main, which basically starts the dominoes falling.

[0] https://www.youtube.com/watch?v=vK1DazRK_a0


Where are all the FP software products I can use? It doesn't seem to dominate a lot of software niches, no?


How do you distinguish between people who are in a religious fervor and people who have figured out that something is obviously beneficial?


> people who have figured out that something is obviously beneficial?

If it's so beneficial then those people can prove it by building something exceptional. I worked with Haskell and Scala for almost a decade and libraries were full of bugs and performance problems. A lot of bugs just hadn't been reported because so few people were using them. FP certainly has its strengths but so far there's very little evidence that it produces better software in the long term.


In life, there are a lot of obvious things that are very difficult to teach.

For example, I'm a biochemist with 12 years of education, and I can't seem to be able to convince anyone that vaccines are safer than no vaccines.

Humans just don't like new things or changing their minds, and that's a very generalizable fact, even among groups of very intelligent engineers.


The typical argument for vaccines is much less shaky than the typical argument for pure FP. For one, vaccine people can actually explain why vaccines are good. FP people seem to mostly just repeat variations on "it's easier to reason about" and showing trivial functions that are generally not easier to reason about.

Imagine trying to get people to accept vaccines, but in a world where we have other types of medicine that usually works well enough for all the things you've developed vaccines for, and the only argument you're allowed to make is "it's obviously better, look at how amazingly liquid the stuff in this syringe is".

You'll still get some resistance, but in our current world, pure FP is much more of a niche thing than pro-vaccine stances. There's a reason for that.


> The typical argument for vaccines is much less shaky than the typical argument for pure FP.

I suspect you aren't that knowledgeable about either.

> For one, vaccine people can actually explain why vaccines are good.

Unless you have a very particular background, I'm suspicious that you can actually follow the frontier of that argument. More likely, you're getting the ELI5 explanation.

> FP people seem to mostly just repeat variations on "it's easier to reason about" and showing trivial functions that are generally not easier to reason about.

Well-designed FP languages reify coherent low-complexity formal semantics. When FP advocates say "reason about", they mean in a (semi-)formal sense. No popular imperative language has any kind of denotational semantics, so good luck reasoning about the compositional behavior of C++ libraries, unless your definition of "reason about" is "think about informally".


> Unless you have a very particular background, I'm suspicious that you can actually follow the frontier of that argument. More likely, you're getting the ELI5 explanation.

People who actually understand complex things are able to provide ELI5 level explanations for people who haven't the background for more rigor. You're presenting a false representation of the OP's comment anyway: it never suggested the explanation for why vaccines are good required "frontier" level discourse.

> Well-designed FP languages reify coherent low-complexity formal semantics.

This is such a great example of what so many of us have noted about FP evangelists. It's so divorced from writing software as to be gibberish.


> It's so divorced from writing software as to be gibberish.

Only if the software you're writing is A) aggressively simplified, so as to be amenable to informal analysis, or B) garbage.

If you're only interested in writing garbage, or perhaps you don't even notice that's what you're doing, then the appeal of FP is significantly reduced.


Minus the tone, I agree with the substance of your comment. In my experience, people who don't care about this don't really understand it either.


> are able to provide ELI5 level explanations

Yes, in theory, and this sometimes works. But it rarely works in general.

In practice, people come to your explanation pre-conditioned with a lot of (often politicized) misinformation, Dunning-Kruger type overconfidence in their own ability, very little curiosity or openness to new ideas, and exhibit the attention span of a 26th percentile squirrel.

People tend to listen very little and are more skeptical of others than they are of their own understanding: instead of searching for ways in which their mental models need adjusting, they try to poke holes in your explanations. They'll repeat whatever objections they've seen or heard somewhere, whether or not the objection is relevant or adequate. This undermines both "ELI5" approaches (because what you gain in simplicity you lose in nuance and correctness) and more pedantic approaches (which require more prerequisite knowledge, experience, or patience).

If you disagree with me because that is not your experience, it's possible you surround yourself with unusually insightful and wise people.


It simply can't be done. It's always possible in theory but never in practice. Or at least certainly not in this special snowflake case.

Until someone bucks the trend and does it.

But for that you need someone with actual intelligence and empathy; a Feynman of that sphere so to speak.

In every gatekeeper community this is the order of things until someone finally destroys the gates to the knowledge and the monopoly of the priesthood. Until then, it's basically "blame the victim" for their own lack of understanding (e.g. "people listen very little", "people are more skeptical of others", "they try to poke holes in your explanations", "They'll repeat whatever objections they've seen or heard somewhere", etc).


Well, I don't disagree with you, but also all of these things you list in the parentheses are just how humans generally behave. They are real obstacles.

And regardless of where we place the blame, at the end of the day:

> In life, there are a lot of obvious things that are very difficult to teach.


> a Feynman of that sphere so to speak

Hopefully people who read QED don't go around thinking that they know all they need to know to make an informed judgement about lagrangian QM.


> there's very little evidence that it produces better software in the long term.

Have you used other software? 99% of it is dogshit.

> libraries were full of bugs and performance problems

This is a really wild claim to me. IDK about Scala, but libraries in Haskell, Ocaml, and Elixir are all vastly better than libraries in C++, Java, Python, despite a smaller userbase.


>figured out that something is obviously beneficial?

Case in point for the problem with FP communication. When you speak in smug absolutes, it comes off as religion.

I'm pretty big into the FP world. I love Purescript. I dig Idris and exploring dependent types. I spend a lot of time at work teaching "functional" thinking. And yet, I still find most "hard core" FP evangelists insufferable for smugly explaining things as "obviously beneficial." ugh.

There is a lot that sucks about pure FP. It's not all upsides. The FP community could do a lot better at "evangelizing" by talking like engineers rather than salespeople.


Where did I say that FP was obviously beneficial? You're reading too much into my question. I was questioning the GP's epistemology.

> talking like engineers rather than sales people.

They do talk like this - go to any Haskell conference and it's 90% engineering papers - much better than e.g. a JS conference. It's just that most people don't know what PL engineering even looks like, let alone how to interpret it.


> Where did I say that FP was obviously beneficial? You're reading too much into my question. I was questioning the GP's epistemology.

cmon fuckin guy


Care to elaborate? You literally wrote 3 words


> How do you distinguish between people who are in a religious fervor and people who have figured out that something is obviously beneficial?

Seems easy to me.

If it's obviously beneficial, show me:

- Performance benchmarks showing how much faster the compiled code runs

- Case studies demonstrating faster development and/or less time spent debugging

- Studies showing consistently better maintainability

If it is lacking this data, it comes across as religious fervor.


If you believe in the epistemic power of "studies" to convey useful information in these domains, I have bad news for you - you are the one in a religious fervor.

The only semi-objective metric you've proposed is compiled code performance, which is a fairly small component of most people's utility function, especially when the difference disappears in all but numerical-computation-centric applications.


I was once at a distributed systems meetup where a member of the audience interrupted the talk to give an extemporaneous presentation on why we should all be using Haskell.

FP advocates have a reputation for being zealous and pretentious. The only question is whether they’re right to be.


An anecdote of one person with bad manners shouldn't be representative of the whole.

I'd estimate there are three kinds of FP advocates:

1) People (like me) who have experienced personal pain in building or maintaining complex systems in imperative or other paradigms, and are genuinely astounded and relieved when we learn how Haskell and other FP languages can mitigate or eliminate that pain. They tend to advocate FP as a solution to specific problems they've personally encountered.

2) PLT academics who are just generally fascinated with building a mathematical programming language that can elegantly express the most advanced notions in math and logic.

3) People who like to feel superior and rudely interrupt presentations to expound on monads and endofunctors and the like.

Try to focus on and/or associate with #1 and #2 and ignore #3.


> An anecdote of one person with bad manners shouldn't be representative of the whole.

It is so incredibly widespread, it's not just "an anecdote of one person".

The entry-level courses at TU-Berlin where I studied had just been taken over by FP disciples when I started studying, and it was crazy. "Let me tell you about our Lord and Saviour Functional Programming, Hallelujah".

And of course the reality didn't come close to matching the advertising.

And it never does.

Another nice example was in a lecture by SPJ on parallel Haskell, where he says "This can only be done in a functional language.". Audience member: "We've been doing exactly this in HPC for decades. In FORTRAN". Instead of conceding or apologising for the gaffe, SPJ doubles down with something along the lines of "well, then what you are using is an FPL". Jeez. Oh, and when it comes to the results he reveals that the overhead is so high that you need 7-8 cores running full tilt to be equivalent to a single core C program. Jeez.

In general, there is also the widespread phenomenon I call "functional appropriation", where a similarity is noted between some aspect of FP and some other mechanism or paradigm or some such, and then the claim is made that this obviously means that the other mechanism/paradigm is just thinly veiled FP.

Newsflash: let me tell you about Turing machines. Or NAND gates...

For example FRP, "Functional Reactive Programming". Which is really just dataflow, and badly packaged dataflow at that. Or the React people's claim that the core concept or React is that the UI is a "pure function" of the model. Well, it turns out it is not. Not a pure function at all. And not really a function either. Just a mapping. And any UI had better be some sort of mapping of the model, or it's hard to see how it would qualify as a UI.

I could go on (and on, and on, and on...) but I'll stop now.


> Not a pure function at all. And not really a function either. Just a mapping.

A function is a mapping of an input to an output.


But not all mappings are functions.


imho the whole thing about FP is not that it's an objectively superior way of coding, but that it's a way of coding better suited to the current high-level, highly distributed world.

There are A LOT of developers, a lot of custom made software and a lot of web applications. And unless you really need performance, most web services are an ideal use case for FP practices. Even if you do it in plain Javascript.

Some people are zealots about it, yes. But imho FP is a better way to trickle down those concepts instead of telling people to spend decades to become imperative code masters.


> imho the whole thing about FP is not that it's an objectively superior way of coding,

OK.

> a way of coding better suited to the current high-level, highly distributed world.

But is it objectively better suited? One might even say "superior"? ;-)

Also, I believe FP is actually quite ill-suited to the high-level and particularly the highly distributed world.


> current high-level, highly distributed world

That's one area it's well-suited for. Also in error handling and prevention, in refactoring, and in configuration management. All personal pain points for me previously.


> in a lecture by SPJ on parallel Haskell, where he says "This can only be done in a functional language.".

Can you provide a link?


> Try to focus on and/or associate with #1 and #2 and ignore #3.

I disagree. We are too tolerant with the zealots.

They are an obstacle to the exchange of knowledge. They don't want people to learn and get better at programming. They want to live in their own Mr Robot personal fiction.

FP is a tool. A great tool, yes. But a tool in the end. And we should promote it as such.


If there is a way of doing things that you find more efficient or enjoyable than a more widespread one, just show other people. If they don't look interested, changing your communication style next time might help. Becoming more of a proselytizer will almost never pay off.


I understand that it looks like that, but I believe most of the FP evangelists are just too excited about the cool tech and want to share it with everybody so everybody else can feel the joys of functional programming. At least that's why I often mention and talk about FP to other devs.


> so everybody else can feel the joys of functional programming

That is the kind of knowing-better-than-thou that undermines the evangelism.


I used to program in Erlang. The joy of it doesn't lie in FP. Not even closely.

It's in the runtime, effortless processes etc.

If all you can offer is "joy of programming in FP", you're in a cult.


So like a Christian who just wants you to share the same joys of Jesus Christ that they do correct?


In my experience growing up as an evangelical Christian, the proselytizing has very little to do with wanting strangers to “share the same joys of Jesus Christ”.


People do still consider Ada. I wouldn't, but I don't need that tool. I feel like you are creating a strawman: the GP wasn't talking about languages but about paradigms. OO was overused, but it has its place, and both imperative and functional programming have strengths. There are some languages that can do everything, or almost everything, but multitools are never quite as good at being pliers as pliers are. And they are pretty bad at being a hammer. There is nothing quite like having exactly the right tool for the job, to the point that I would rather have a poorly made version of exactly the right tool than a well-made version of almost the right tool.


> This is just an appeal to the law of averages. I don't believe that you're actually considering ada, cobol and forth for new projects.

First, it greatly depends on what "new project" means here. There are myriad new projects within existing codebases for which the technologies already in use are the major factor.

When it's about creating a brand-new product from a blank slate, sure, Cobol won't be your first idea for a startup. Maybe Ada might come to mind if a high level of reliability is required, as in aeronautics and the like.

It also depends on the team's skills. If your teammates are all advanced Haskell practitioners, imposing J2EE for the next big thing might not be the brightest move in terms of crew motivation.


> I don't believe that you're actually considering ada, cobol and forth for new projects.

I'm not considering Erlang either.


Addendum after watching this thread blow up:

What's interesting to notice about this thread is how many messages are just oozing with smug superiority and disdain for anyone who doesn't share their knowledge. Yes, some are from genuinely humble and even-handed FP practitioners, but when we look at people who are vocal about FP, this small example shows around 90% of them in the gatekeeper camp. And the punchline is I don't think they can even see what they've become. This is what adds to the insidiousness of it all: they genuinely believe themselves to be helpful and positive.

This is a huge cultural problem, and one that I've seen many times in other communities over the years.

Take Linux, for example. Nowadays, it's trivial to get up and running with Linux, but it wasn't always so. Back in the 90s the installation was tricky, to say the least. The end user tooling was iffy at best and buggy as hell, there were tons of sharp edges, but most frustrating of all were the gatekeepers. People can handle challenges and pitfalls, but nothing quite takes the wind out of their sails like a smug asshole belittling them when they ask for help or express frustration.

Similarly with video codecs. In the 2000s when video codecs were a new thing, they had so many obscure options that almost nobody knew what would produce a good result, and so the gatekeepers came out of the woodwork to hold the sacred knowledge prisoner. They brought us byzantine software like the appropriately named Gordian Knot, and once again the forums were full of smugness, arrogance, and abuse as people despaired over how the hell to rip their DVDs into something viewable.

And it's similar in the Nix community, and countless others I've observed over the years.

In my experience, gatekeeping goes hand-in-hand with poor tooling and educational resources. The worse the UX, the more gatekeepers it attracts (because gatekeeping fulfills a need they have, so they flock to gatekeeping opportunities). Linux used to require an understanding of init scripts and environment variables and bash and crontabs and kernel recompiling and all sorts of esoteric knowledge just to get started. But now with mature tooling, it's easy to get started and then dig deeper at your leisure via the wealth of newbie friendly articles littering the internet.


I think it's funny: the article is literally the opposite of gatekeeping, trying to get non-FP people into FP. It even provides a book and a course, and doesn't throw around math terms, and yet your argument is "well, some FP practitioners are smug and suck, so why bother?"

Elixir is a functional programming language. (It's not pure like Haskell or PureScript, but that's beside the point.)

The Elixir community is absolutely awesome! All are welcome.

There are soooo many resources for people to learn FP; the gatekeeping thing may have been true ten years ago, but I certainly don't think that's the case today.


On the other hand, it's really frustrating to explain this kind of thing to juniors. Many of them are too confident in their code because "they have everything in their heads", and anything else feels like incoherent and overcomplicated ceremony. "Why do all that when I can do a simple index.js file and add a simple if and that's it?"

There's a catch with FP, where many of its constraints push you to write better code. But going functional isn't mandatory for that, and I always try to explain the importance of those fundamental principles to other people, so they can apply them in whatever style they desire.

But when someone just doesn't want to learn, they see me as having a smug, gatekeeping attitude, and I can't help but see them as people who just don't care at all.


In contrast, I find juniors to be the easiest to teach. Those with more experience are often stubbornly stuck in their ways.


Do you have actual, recent, examples of this? I keep reading the same attitude you have here, but I have never seen it first hand.


Git, Vim, and GUIs in general come to mind, specifically on this site. There is always someone berating individuals for not using shortcuts, for using a visual tool instead of the console, for using an IDE, etc. Using Git primarily through a GUI is a great way to get flak on several subreddits, too.

NB: This is not an invitation for discussion on the pros and cons, to berate either preference, or gatekeep.


I only see people saying it's superior. But that's not the same thing as berating.


OMG you just reminded me of waiting outside some 101 class on day one typing on my laptop and this guy asks me what OS I'm running. When I told him "Ubuntu", he snickers and says "Talk to me when you're done playing with a toy"


> One of the big reasons why FP languages have so little penetration is because the advocacy usually feels like someone trying to talk you into a religion.

That's exactly what worked for OOP—have we collectively forgotten just how hard OOP was pushed everywhere 15–20 years ago? Way more aggressive than any FP advocacy I've seen, and I've seen a lot. When I was learning to program every single book and tutorial pushed OOP as the right way to program; procedural programming was outmoded and functional programming was either not mentioned or confused with procedural programming.

I still have to deal with the fallout from OOP evangelism; I've had colleagues who unironically use "object-oriented" as a synonym for "good programming" and managers who believe design patterns are software engineering.


And yet today we only have partially-OOP languages being mostly used in procedural ways.


Exactly my point above, about Java.


> One of the big reasons why FP languages have so little penetration is because the advocacy usually feels like someone trying to talk you into a religion.

Eh, I doubt it. I've encountered a few Haskell snobs in my time, but most people I know who use FP do so completely silently and will discuss the merits (and challenges) with you freely. I think the real issues with FP are a lack of good tooling, package management, and it being a major shift in thinking for most developers. It's a common theme for someone to say that learning FP (seriously) was the most impactful thing they've done to improve their software chops.

> The other major impediment is gatekeeping

??? what? I guess you could argue that pure vs non-pure functions are gatekeeping, but there are absolutely legitimate benefits to pure functions that basically everyone can agree on.


"A monad is just a monoid in the category of endofunctors". Anyone who says that - or anything anywhere close to it - is gatekeeping, no matter how true the statement is.

And it's not just the statement. It seems to me (from my outside perspective) that category theory is often used in a gatekeeping way.

In contrast, take SQL. How much of the mathematical theory of relations do you need to know to be able to write SQL queries? Yes, it might help, but the SQL gurus don't try to drag you into it every time you get close to a database.


> "A monad is just a monoid in the category of endofunctors". Anyone who says that - or anything anywhere close to it - is gatekeeping, no matter how true the statement is.

The original attribution of this line about monads comes from the (intentionally) comedic article "A Brief, Incomplete, and Mostly Wrong History of Programming Languages", published by James Iry in 2009: http://james-iry.blogspot.com/2009/05/brief-incomplete-and-m...

It is not a stance I have ever seen taken up in a serious manner by anybody in the FP community, and I work in PL academia (the region of the community from which I would expect to see the most snobbery). Please stop misrepresenting people based on a misunderstood joke.

There are people in the FP community who gatekeep and take on a snobbish tone — with that I do not disagree. However, their prevalence is generally overstated; it's a vocal minority situation. Most people I know or have talked to are very welcoming.


That article went round the Edinburgh mailing list when it was published, and Phil Wadler, who got monads into Haskell, replied saying something like "I didn't know this. Does anyone have the proof?"

The actual quote that monads are monoids in the category of endofunctors comes from MacLane, and is intended for mathematicians.


I'm honestly not sure the MacLane reference applies here.

Although the abstract phrasing can be traced to him, the particular use of "just" in the parent comment's quote tells me they're specifically thinking of the (deliberately condescending) version from the Iry post, especially since that's the version that gets memed throughout the FP community. After all, that particular phrasing is meant to convey a sense of "this is obvious and you are stupid if you don't understand it immediately", which is a far cry from the MacLane version (since that one is, as you said, intended for mathematicians).

But I probably ought to have included the full provenance regardless; thank you for bringing it up!


"an X is just a Y" is a common turn of phrase in mathematical writing. It means that Xs and Ys are the same thing, whereas "an X is a Y" may, depending on context, mean only that every X is a Y. A human is a mammal, but a human is not just a mammal.

The original quote (from Categories for the Working Mathematician) is:

> All told, a monad in X is just a monoid in the category of endofunctors of X, with product × replaced by composition of endofunctors and unit set by the identity endofunctor.


Nobody claims you have to understand category theory to write Haskell. In fact, most will tell you it's not needed.

The saying "a monad is just a monoid..." is a cliché and an in-joke, not gatekeeping. It's the community having a laugh at itself.


I've had someone do that on this very website. I was assured that affine types in Rust are too difficult to understand without a solid grounding in Category Theory.

The years have proven that ease of programming and the burden of knowledge are the two most important elements of a programming language. FP zealots simply won't accept that their chosen paradigm is opaque to most for benefits that can't seem to be written out in human language.


> I've had someone do that on this very website

On this website you will hear the wildest assertions; there's the whole human range of expression here. But what makes you think actual FP practitioners (and Haskellers) really believe this?

The proof wouldn't be what some random person here on HN tells you. The proof would be you getting involved in an actual FP community trying to write an actual FP project and being told that you just cannot do this unless you understand category theory. Which, as I said, is not something that happens... at least not in my (limited) experience.

Never confuse what people tell you here on HN, random forums or even throwaway StackOverflow comments with what actually happens in the actual communities when trying to achieve real goals and not just chat about stuff.


But why is it funny? Isn't it funny because the community knows it comes off that way, at least some of the time (and/or some of the people)?

> Nobody claims you have to understand category theory to write Haskell.

I've seen the claim that you can't really use Haskell without it, here on HN, more than once. (Or at least something that I interpreted as being that claim...)


It's funny because the community knows some people too deep in the rabbit hole come across that way; and precisely because the community acknowledges this is a hilarious assertion that would alienate newcomers -- if said with a straight face -- it cannot be gatekeeping.

Gatekeeping would be if the community said this with a straight face and everyone got impatient when you just "don't get it".

> I've seem the claim that you can't really use Haskell without [knowing category theory], here on HN, more than once

I've almost never come across this assertion; it's certainly not common in the community (or wasn't when I was learning Haskell). I can tell you it's certainly false: you can very well write Haskell without knowing or studying category theory.


Well, try understanding the Haskell docs around Applicative, Functor or Monad without understanding at least basic category theory - not to mention the heavy use of notation/operators in place of traditional programming idioms (named functions).

Here [0] is an example:

> This module describes a structure intermediate between a functor and a monad (technically, a strong lax monoidal functor). Compared with monads, this interface lacks the full power of the binding operation >>= [...]

[0] https://hackage.haskell.org/package/base-4.17.0.0/docs/Contr...


Counterpoint: I learned about Applicative, Functor and Monad by reading docs and tutorials, and I haven't the faintest idea about category theory.

These are the building blocks of Haskell. You learn them as you go, just as you learn what a "method" is when learning OOP. You don't need to know category theory.
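
To illustrate, a minimal sketch using Applicative with no category theory in sight: combine two optional values with a pure function, and the missing-value plumbing happens for you.

    main :: IO ()
    main = do
      print ((+) <$> Just 2  <*> Just 3)   -- Just 5
      print ((+) <$> Nothing <*> Just 3)   -- Nothing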


Learning what Applicative, Functor and Monad are is category theory. It's like saying "you can learn how to do unions, intersections, and differences on collections of unique objects without understanding set theory".


That seems a little silly to me, and I think we're splitting hairs over what it means to do category/set theory. I don't know category or set theory, so I hope you will forgive me for using yet another analogy.

Let's say I make a type class for Groups (in the abstract algebra sense). The rationale behind this is that there's an algorithm for exponentiation which is O(log(n)) versus the naive O(n) algorithm. So if you make an instance that's a Group, you get to use this fast exponentiation.

Sure, to understand and use this type class you have to understand what a Group is. However, I think it's a bit of a stretch to tell someone "in order to use Group you must first learn abstract algebra", because they'll think you're telling them to take a university-level course. In actuality, they don't have to know much at all (they don't even need to understand _why_ the exponentiation algorithm works) - they just need to know what a lawful Group is.
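
Something like this minimal sketch (using Monoid for non-negative exponents; a full Group class would add inverses to support negative ones):

    -- Any lawful Monoid gets O(log n) exponentiation by squaring for free;
    -- the user only needs to know the laws, not why the algorithm works.
    newtype MulMod = MulMod Integer deriving Show

    instance Semigroup MulMod where
      MulMod a <> MulMod b = MulMod ((a * b) `mod` 1000000007)

    instance Monoid MulMod where
      mempty = MulMod 1

    pow :: Monoid g => g -> Integer -> g
    pow _ 0 = mempty
    pow x n
      | even n    = let y = pow x (n `div` 2) in y <> y
      | otherwise = x <> pow x (n - 1)

    main :: IO ()
    main = print (pow (MulMod 2) 50)  -- 2^50 mod (10^9 + 7), in ~log n steps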

The first day of the intro to abstract algebra course I took introduced much more information than you'd need to use this made up type class, and I expect the same of the others.

Like this is my understanding of Functors/Applicatives/Monads. I kind of know their shape and how to use them. If you asked me about any of the underlying math I would shrug and maybe draw a pig (https://bartoszmilewski.com/2014/10/28/category-theory-for-p...).


That's not really true. When some people claim you must learn category theory to use Haskell, they mean you have to learn the math-subject-capital-letters Category Theory, which goes way beyond programming and comes with a lot of baggage.

They never mean you have to learn Functor, Applicative, etc as used in Haskell and taught in every tutorial. If they did mean the latter, it would be a tautology, which is not very useful.

Compare "in order to do OOP you have to learn what an object truly is, its essence, and possibly take some courses on the philosophy of being" vs "you have to learn about methods and objects".

> It's like saying "you can learn how to do unions, intersections, and differences on collections of unique objects without understanding set theory".

You can totally do unions, intersections and differences without knowing set theory.


SQL is intrinsically tied to set theory and borrows a ton of terminology and logic from it. Whether you understand that it comes from mathematics or not doesn't really matter; you are using set theory and its terminology regardless.

I've only really ever seen monoids referred to in Haskell. If you actually read my comment, Haskell is not all of FP (no language is); it's not even a significant portion of FP. So just don't use Haskell; it's hardly the first FP language I would reach for, and it's probably not the one you should either.

Also, any person willing to learn would be intrigued by the terminology, not reject it as something that's "gatekeeping". Such a defeatist attitude will not get you very far in a field as complex as this.


SQL was explicitly designed to appeal to business people, and uses strictly familiar words and phrases. It was very explicitly designed (though I would say it has mostly failed) to read like pseudo-natural language.

SQL DBs and the precise semantics of SQL are somewhat based on relational algebra, but SQL syntax is definitely not. It's also not using set theoretic terms in general, with the exception of UNION and INTERSECT.


SQL tables are not actually sets at all since you can insert duplicate rows! In practice they usually are though.


You can't actually insert a complete duplicate row, because the ROWID (or whatever your particular RDBMS calls it) is an implicit but unique property of each row (and corresponds to the offset in the file where the row is located). Further, while tables can be seen as the collection of all the records they contain, they themselves aren't really a set. Usually when people associate set theory with SQL, it's with queries (and their results).


Nobody says that seriously.

It says a lot that people reach for common jokes when complaining about a community. It's easy to fall for that if you've never actually interacted with the people.


> "A monad is just a monoid in the category of endofunctors". Anyone who says that - or anything anywhere close to it - is gatekeeping, no matter how true the statement is.

You just tried to badmouth a whole community of FP programmers by taking a running gag seriously, wow.



> "A monad is just a monoid in the category of endofunctors". Anyone who says that - or anything anywhere close to it - is gatekeeping, no matter how true the statement is.

Quite frankly, if you don't understand that sentence then you are in no position to judge whether the concepts could be made more accessible. Sometimes the important question is not "could this be easier to learn?" but "is learning it worth the effort?".


> One of the big reasons why FP languages have so little penetration is because the advocacy usually feels like someone trying to talk you into a religion.

This is part of it. The other part is that FP is a bit of a mind fuck if you're used to procedural programming. Take a classic example: Haskell. To do anything remotely productive, it's advisable to understand how monads and the various monad design patterns fit together. This difficulty is further compounded by monads having their foundation in category theory, a branch of mathematics. But hey, they're just monoids in the category of endofunctors, it can't be that hard ;) You can rely on `do` syntax a lot, but it really helps to learn how they work.

> I immediately distrust any article that makes sweeping claims about one-paradigm-to-rule-them-all.

I get where you're coming from but this is the end state as far as I'm concerned. Pure FP is where we're all headed. I am convinced the more we try to make mutable state, and concurrency safe and correct, the more FP concepts will leech into future languages.

Rust was one of the first steps toward this. The language is procedural, but a lot of the idiomatic ways of doing things are functional at heart, not to mention that most of its syntax constructs return values. You can still mutate state, but it's regulated through the type system, and you can avoid it if you really want to.

Rust's enums, arguably one of its killer features, are algebraic data types, and they're combined together the way you would use monads in Haskell. This not only avoids null typing (if you ignore FFI), but also provides a bullet-proof framework for handling side effects in programs.
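
The Haskell analogue I'm alluding to, as a small sketch: Maybe is the ADT that replaces null, and composing it monadically makes failure short-circuit instead of propagating.

    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv a b = Just (a `div` b)

    main :: IO ()
    main = print $ do
      x <- safeDiv 10 2   -- Just 5, so x is 5
      y <- safeDiv x 0    -- Nothing: the rest of the block is skipped
      pure (x + y)        -- never reached; the overall result is Nothing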

I could be totally crazy, but I reckon many years from now, we'll joke about mutating state the same way we joke about de-referencing raw pointers.


I'm not sure why you're getting downvoted.

> I get where you're coming from but this is the end state as far as I'm concerned. Pure FP is where we're all headed. I am convinced the more we try to make mutable state and concurrency safe and correct, the more FP concepts will leech into future languages.

Having been around the block long enough to catch a couple of "FP will rule the world" cycles, I can say this is likely untrue. While FP is useful and I personally enjoy it, it is not always the most efficient, nor is it the most clear. This is true in your example of Rust as well. Functional constructs are often slower than their procedural counterparts. For example, having to pass around immutable data structures becomes pretty memory-intensive even with a highly developed GC.

> I could be totally crazy, but I reckon many years from now, we'll joke about mutating state the same way we joke about de-referencing raw pointers.

Dereferencing raw pointers has been joked about as long as I've been in the industry, and probably was joked about in the 70s too. We still dereference pointers today. Similarly, state is a very natural thing to reason about. Sure, you can argue, as many FP fans do, that stateful programs can be rewritten with pure functions. I have yet to see FP code that does this without negatively affecting readability. Even something as simple as a large map/reduce/filter chain can quickly become extremely difficult to debug when compared to a very simple loop.

All this to say, there's a lot of benefit many languages can take from FP paradigms. Map, reduce, etc. are great examples. Offering immutable state and algebraic data types could also be beneficial, especially in the areas of concurrent and parallel programming. In my professional experience the problems usually start when you begin talking about pure functions which in theory are awesome but sometimes don't map to a problem domain well, or become extremely hard for Joe Developer to get used to. Often times I will think in a functional way, but rewrite things into a procedural way, because communicating your idea is often just as important as the code you write.


> In my professional experience the problems usually start when you begin talking about pure functions which in theory are awesome but sometimes don't map to a problem domain well, or become extremely hard for Joe Developer to get used to.

It's a ripe field for research. I do wonder if there is a better way to program in a pure functional way that conforms to the impurity of the real world. I agree with you though; while pure FP is a tantalising abstraction, it must obey the whims of the hardware. The only way you're getting pure FP to the absolute bottom is if you can monadically formalise the hardware you're running on, which has been attempted [1].

> Often times I will think in a functional way, but rewrite things into a procedural way, because communicating your idea is often just as important as the code you write.

I think this is perhaps the most commonly cited benefit of learning FP. It provides an alternate way of expressing the same solution, but maybe in a simpler fashion. The reverse is also true: what is not straightforward in FP may be expressed more simply, procedurally.

[1] https://www.cl.cam.ac.uk/~mom22/itp10-armv7.pdf


> The other part is FP's a bit of a mind fuck if you're used to procedural programming

It's not "if you're used to procedural programming" - FP is simply hard for people to reason about. You can often very intuitively reason about imperative programs, while FP always requires putting your abstract-thinker cap on. Look at any non-software person describing how to do something and see how often that sounds like an imperative program vs an FP one. Hell, most algorithms papers themselves use imperative pseudo-code, not FP pseudo-code (unless they're specifically describing novel algorithms important for FP, of course).

> I am convinced the more we try to make mutable state and concurrency safe and correct, the more FP concepts will leech into future languages.

Concurrent mutation is fundamentally hard regardless of what paradigm you choose to implement it in. Parallelism can be made safer by FP, but parallelism is actually pretty easy in every paradigm, with even a little bit of care. And concurrent updates of mutable state are just a fundamental requirement of many computer programs - you can try to wrap them in monads to make them more explicit, but you can't eliminate them from the business requirements. The real world is fundamentally stateful*, and much of our software has to interact with that statefulness.

* well, at least it appears so classically - apparently QM is essentially stateless outside its interactions with classical objects (the Born rule), but that's a different discussion.


Eh, it really depends. Some people find trees easier to navigate through recursion than iteration, same for lists and graphs. Some do better using decomposition on a list to get a combined value or create permutations than doing it through iteration and indexing. FP commonly becomes problematic when things are far more trivial to do stateful than stateless, with side-effects than without, and more.

FP usually shines in small sections of easily isolated code, where it is still trivial for most to grasp and its strengths show more than its weaknesses - provided the code isn't more easily expressed using iteration, anyway. The moment things have to be woven together without leaving any side effects is when the difficulty jumps to 11.


Sure, there are contexts where the functional approach is actually more natural - especially already highly abstract contexts. I actually like to use certain functional paradigms in my day to day programming (or did, before switching to Go). I just dislike the claim that imperative code is simply more familiar and not more natural/intuitive (in general).


> It's not "if you're used to procedural programming" - FP is simply hard for people to reason about.

It's relatively straightforward once you understand the patterns and idioms, just like procedural programming. `age |> add10 |> subtract9` is about as readable as `subtract9(add10(age))`, but we're just so used to the latter.
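In Haskell the pipeline style uses `&` from Data.Function; a tiny sketch with invented helper names:

  import Data.Function ((&))

  add10, subtract9 :: Int -> Int
  add10 x = x + 10
  subtract9 x = x - 9

  -- Pipeline style, read left to right:
  r1 :: Int
  r1 = 30 & add10 & subtract9

  -- Nested application, read inside out:
  r2 :: Int
  r2 = subtract9 (add10 30)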


Using Haskell as the classical example of FP is a bit like using Rust as the classical example of imperative programming.

You can do it but they're much harder than most other languages in the class and not anywhere near representative of the whole class.

My FP journey was: Scheme (different but okay) -> Haskell (wtf) -> Erlang -> Clojure -> Elixir. Nowadays when I reach for an FP it's Clojure or Elixir, mostly based upon the problem at hand.


I’ve used Erlang, Elixir, and Elm and found them all quite easy to learn. There are different coding strategies, sure, but for me FP is much less frustrating than OOP.


Totally agree. As a Javascript dev I got functional code pushed on me in 2016.

It's definitely good practice for devs to learn some of the pitfalls that FP prevents and solves, but implementing it on a massive scale front-end application just seems impractical.

Having worked on a large streaming service, and considering the author's 3 MONTH struggle after his 40 YEARS of experience, I'd estimate that a rewrite of our codebase there would have taken our 20 devs over a decade.


Functional programming, and its degenerate cousin, cramming random functional constructs (array comprehension methods, willy-nilly currying, ...) into Javascript, has been the worst thing to happen to web programming.

What that has done is made a whole generation of developers completely detached from the impact on heap allocation and GC. If web programs are slow and bloated, it's partly because of using nuggets from functional programming just for the sake of it.


I'm actually all for the FP things that have been added to imperative languages, which increase their power tremendously and make a lot of tasks a helluva lot easier. But like any tool, it has its place. I'd be equally leery of a pure imperative solution as I would be a pure FP solution.


> implementing it on a massive scale front-end application just seems impractical.

I was using Clojure/ClojureScript around the same time. I’ve since worked primarily in TypeScript/JavaScript. I’m sure the FP experience influenced my opinion, but it seems impractical to me not to use FP techniques for a large scale frontend application. The applications I inherited and maintain now, which were certainly not originally implemented with FP techniques, have been gradually becoming much more maintainable as I’m able to eliminate or at least ruthlessly isolate mutations. Not only because the code itself is easier to reason about, but also because it’s easier to reason about dependencies between different parts of the systems—even parts which haven’t changed.


> implementing it on a massive scale front-end application just seems impractical.

That's true if a team was trying to do so from scratch in js or ts. However React borrows a lot from FP and works at scale. A better example would be Elm.


Anything beats the hundreds of incompatible classical inheritance models rammed on top of prototypes.

Today, functional has won so completely that devs don't even notice. Using classes is almost entirely an antipattern. Factories with object literals and closures reign supreme. Everyone prefers map, filter, reduce, etc. over manual looping. With const and copying as the default, immutability is preferred to mutation. Nobody thinks twice about higher-order functions everywhere.


> As a Javascript dev I got functional code pushed on me in 2016.

What do you mean?


> One of the big reasons why FP languages have so little penetration is because the advocacy usually feels like someone trying to talk you into a religion. (The other major impediment is gatekeeping)

I think one of the major reasons is because IO doesn't really fit well into the FP paradigm. All the theory and niceties take a second place when you find out that something as simple as "print a line here to console" isn't as simple as you thought it should be.


I think the context here is pure functional programming. So not just replacing e.g. loops with mapping and such things, but actually making effect-handling explicit. The IO type (or otherwise deferred-type) is essentially for that. Saying it doesn't fit into the FP paradigm doesn't make sense. Without IO, FP is useless because you simply cannot "do" anything at all.
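To make that concrete, a minimal Haskell sketch: the effect shows up in the type, but "print a line here" is still ordinary code:

  -- getLine :: IO String and putStrLn :: String -> IO () make the
  -- effects explicit in the types; do-notation sequences them.
  greet :: IO ()
  greet = do
    name <- getLine
    putStrLn ("hello, " ++ name)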


Another problem is that, as nice as avoiding state and side effects is, for any decently complex application there's a minimum amount of state that you simply have to handle, and as a general rule OOP languages seem to make dealing with state much easier. Mind you, there are plenty of good ideas in FP, and discriminated unions are the feature I most want added to C#.


> as a general rule OOP languages seem to make dealing with state much easier

I disagree. One of the most significant benefits of FP (and pure FP in particular) is being able to deal with mutable state safely in concurrent programs. This is essentially the core problem which effect systems try to solve.
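As a small illustration of that core problem, here's a sketch using Haskell's stm library (transfer is an invented example):

  import Control.Concurrent.STM

  -- An atomic transfer between two shared counters. The STM type
  -- keeps other side effects out of the transaction.
  transfer :: TVar Int -> TVar Int -> Int -> STM ()
  transfer from to n = do
    modifyTVar' from (subtract n)
    modifyTVar' to   (+ n)

  main :: IO ()
  main = do
    a <- newTVarIO 100
    b <- newTVarIO 0
    atomically (transfer a b 25)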


Isn't this IO thing a bit overused? There are plenty of FP languages that are not as pure as Haskell, and even if we only do IO through a special monad it is not too bad at all.

And I say that as someone who is very much for "practical FP" - I do think that local mutability is sometimes simply the better fit for a problem.


> there are plenty of FP languages that are not as pure as Haskell

I mean, that's precisely the point I'm making, that IO doesn't fit into the FP paradigm and that languages that make IO easier necessarily deviate from it. And once you bring non-FP concepts into an FP language, most people will reasonably question why do that instead of the easier way, which is bringing FP concepts into imperative languages.


Pure functional languages model IO with monads, which are very much functional (function composition within a context) and IMO easy to use. This ease of use is probably why JS adopted the pattern with native promises -- async/await. They fit well into otherwise imperative code and people seem to like them.


Pure functional languages can very well have normal IO. It's Haskell's laziness that mostly forced it to use monads for IO, for better or for worse. In a pure FP language with strict evaluation semantics, IO can easily be implemented as a pure function (World, Input) -> (World, Output).
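A sketch of that world-passing idea in Haskell syntax (World here is an illustrative stand-in; GHC's actual IO type is internally a similar state transformer over State# RealWorld):

  newtype World = World ()  -- opaque token, never inspected

  newtype IO' a = IO' (World -> (World, a))

  -- Sequencing is just function plumbing: thread the world through.
  bindIO' :: IO' a -> (a -> IO' b) -> IO' b
  bindIO' (IO' f) k = IO' (\w ->
    let (w', x) = f w
        IO' g   = k x
    in g w')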


That's not true at all. If you could use that implementation in a strict language (which you can't), you could equally well use it in Haskell.


Arguably the best platform for the general IO use case is functional.

From how, when you console into one node of a cluster and run a command on another node, the standard IO is redirected back to you, to how TCP/UDP sockets are managed by the stdlib, it's really hard to beat Erlang.


100%.

IMHO, the ideal "future of programming" is like how a lot of game dev has C on the back end and Lua in front.

Why one language or paradigm? Mix it up.


It is similar to a religion, because if everyone agrees to adopt the same reality, then there's easy communication and a shared background understanding that makes it easy for everyone to work together. Picking a religion is a society-wide coordination problem, and strategies of shaming and in-group/out-group exclusion make sense in this context, where they wouldn't about flavors of ice cream, for example. We see the same thing happening in tech all the time. I don't want to learn a new technology, which means I don't want you to learn it, because I don't want it to become something that engineers are expected to know. And so on.

The point is to understand that "right tool for the job" and "C++ for everything" are religious positions, too. There might even be a dominant religious position, which likes to style itself as just the disinterested rational viewpoint.

The FP folks might be a somewhat more fervent (crazy mathematical) sect, but there's no getting away from religion, though we can call it by other names (fashion, "best practices", ...).


Tooling. Tooling is the major reason why no one uses functional programming languages in industry. I was very into Haskell a decade ago. Today I don't even know where to start when it comes to installing a new package. If Debian doesn't have it in their repos, I usually just try building it from source and hope make all works. If it doesn't, well, I probably didn't need that package anyway.


My company does FP and I think the tooling in Elixir is pretty great.


Thank you. I learned functional programming first, and I do think there is a lot of merit to it in a large proportion of software development tasks. But there are far too many people trying to fit square pegs in round holes with it. Pure functional applications which manage large amounts of complex and messy state are a nightmare to work with. There is a reason why game developers and simulation developers have almost defaulted to object oriented programming. There is a reason why cryptography developers have defaulted to procedural programming. There really should be a No Free Lunch theorem for programming methodologies.


I believe game developers aren’t against functional programming (John Carmack has great things to say about Haskell for example).

Pure functions excel at state management and reducing bugs. If there were a reason to make games with a functional language, this is the reason.

The big issues seem to be deterministic performance and resource usage. Garbage collection, lazy evaluation, etc. all result in bubbles of weird performance. That doesn't matter for the overwhelming majority of programs, where the human limit on responsiveness is hundreds of milliseconds, but it does matter when latency must scale into single-digit milliseconds.

There are functional languages that have this capability, but outside of the partially functional Rust, they are basically unknown.


Until John Carmack actually writes a game in Haskell, I'm gonna choose to interpret his comments as "Hey, I like these ideas" over "Hey, this would be perfect to write a game in". He's had a lot of nice things to say about a lot of different languages, but it is far more telling to consider what he has actually written significant amounts of code in.

Pure functions excel at reducing bugs, I'll give you that. They also excel at transforming state. But managing it? Encapsulating it? No thank you. I'll take plain old classes over every state management idea that has ever been conceived for Haskell. Does anybody seriously believe they will manage the state for hundreds of thousands of different sprites, environments, bots, and players using the State Monad? I'd rather castrate myself. It's one thing to take immutable/functional ideas and code and use it in a stateful system, and another thing entirely to constrain 100% of your code so that it all fits in that neat theoretical box you've built for yourself.


Carmack ported Wolfenstein 3D to Haskell. That seems like a big enough project to make serious statements about the language in the context of games.


Really? Are we thinking of the same game from 1992? Innovative for its time, sure, but nowhere near the magnitude of complexity of modern games. Porting an extremely old game that can be written in a few thousand lines of code sounds like a hobby project to help you learn a new language... not proof that the language could take over decades of OOP-driven game design.

Do you know what would be a better statement about the language in the context of games? A game engine. And actual games. That people want to play in 2022.


It took a long time to write that engine and porting the whole thing properly also takes time. It just moves goalposts. Why didn’t he spend 80M on a new AAA game? If he spent any less than that, he certainly can’t draw any useful conclusions.

Have you ever looked over the codebase? It’s plenty large enough to draw useful conclusions from for most people let alone someone with his vast game experience.

https://github.com/id-Software/wolf3d/tree/master/WOLFSRC

Meanwhile, you are drawing bad conclusions with no credentials or evidence. As to actual games, setting aside the fact that Wolfenstein still sees play, loads of popular games are written in JS. Lots of others are in Java or C#. None of these make your case, as Haskell, OCaml, and StandardML (SML) are in the same performance range.

As to your argument about the efficiency of objects, what do you think functional languages use? Let's use SML as an example. There are real arrays, and they are also optionally mutable (yes, there are linked lists too, but those can be used in C++ as well).

Records are basically just C structs (they are immutable by default, but can contain refs which are mutable pointers). They can contain functions because functions are first class without the mess that many languages create.

You can associate functions with datatypes, which gives you the best part of methods. They also give you a kind of implicit interface due to structural typing. I'd note that closures are mathematically equivalent to objects.

Finally, modules are everything a language like Java tries to get from classes (and more), but without any of the downsides of classes themselves.

People generally like the JS paradigm of factories and object literals (even if they hate stuff like dynamic typing or type coercion). StandardML offers the same kinds of patterns, but with sound typing, simpler and more powerful syntax without the warts, and performance in the same range as Go or Java.

To me, your argument sounds like the people arguing that goto is better and more natural than looping constructs or the procedural guys arguing against OOP. I think if you messed around with StandardML, it would change your mind about what programming could be in the future.


Not only have I used SML, but I've also used F# and Scala extensively, both of which are well rooted in the ML philosophy. And for completeness, I've also used Common LISP, Haskell, Clojure, and Erlang.

Not once have I made an argument about the efficiency of object-oriented languages. Although that is important wherever performance matters, it isn't my point in the slightest, and I already agree with you that there are functional languages out there that are equivalent in performance.

My point is that the raw exposed complexity of functional state management (and especially pure functional state management) makes it worthless for game development. And until someone actually develops a real game, up to modern AAA standards using a functional language, all of your posturing is theoretical.

I can carry a pallet worth of goods to a supermarket using a pickup truck. It might be a little messy cause it can't pull up to a dock, so I'd have to unload it by hand instead of using a pallet jack, but the time I save by driving faster more than makes up for it. I could argue until I'm blue in the face about the theoretical benefits of using faster and more nimble pickups instead of slow and cumbersome lorries, but until someone actually builds a supermarket logistics network with them, those benefits are just theoretical. More importantly, the people who actually work in logistics industry would never attempt to do so because they can immediately see the problems with it.

So if you're that convinced your functional language would be better for game dev, then just do it already, cause nobody in the industry is gonna do it for you. I can guarantee it won't take long for you to understand what it is like when your paradigm choice puts you on the wrong side of the Expression Problem. And by the end of your experiment, you'll just be another crusty old insider that is too dumb and stuck in your ways to consider the amazing benefits of functional programming, just like what you're trying to do with me here.


What encapsulation exists in OOP or procedural code, but not in functions and modules?

The complexity in games is peanuts compared to any random business UI logic at whatever random company. Managing this complexity is an explicit reason to use functional paradigms. If OOP were better at managing complexity, why has everything been shifting from OOP to functional paradigms?

What about the functional approach makes it inferior at state management in your mind?


Nah, I'm not gonna write an essay just for some random dude on the internet. Try writing a game. Like a real one, not tic-tac-toe. You'll learn quickly enough.


An Entity Component System is literally the functional approach to programming under a different name and with mutation by default. There’s even been a huge move toward keeping ECS data classes separate from behaviors. At that point, it’s just a less ergonomic functional approach with better performance due to lower-level languages (though ATS or Rust are just as fast as C).

Maybe you’re thinking of FRP (functional reactive programming), but that isn’t even super mainstream among most functional programmers. I know some small games have used it, but it’s mostly used for specific kinds of UI in languages that enforce immutability and no side effects.


I don't think any paradigm is inherently better at managing complex and messy state. I've worked on several very complicated applications written in a pure FP style which handled complex, shared mutable state really elegantly (and many OO applications where state management was a terrible mess...). And more generally I think that pure FP and specifically effect-oriented pure FP gives you really nice, composable tools to handle complex state safely in concurrent applications.

But yeah, for game development pure FP is probably a bad fit because you need pretty precise control over memory allocation and layout (from what I understand, never actually done any game dev).


The popularity of various programming paradigms is mostly historically contingent and driven by social/business dynamics, not because of any reasonable constrained optimization.


I've always kind of wondered about something weird in relation to this type of programming style:

reversible computing.

Ever since I read some stuff Feynman wrote years ago about computing, it seems like quantum computing and/or maximum energy efficiency would be achieved if information were not destroyed.

And I thought functional programming might be a programming paradigm to support this kind of thing.


Discussing FP without mentioning induction and proofs always makes me wonder.


why?


>> navigating all of the offerings, examining their trade-offs

the amount of time you're afforded for that exercise better fit neatly inside a very small window of implementation (in other words you're probably not going to do it, or at least do it justice)

>> figuring out which ones fit best to the system being built in terms of constraints

constraints will always get ya. It always ends up being whatever tool/paradigm you (and your project manager) are most comfortable with, because you'll most likely use it in lieu of anything else (especially given you are not the only one that has to be convinced -- engineers don't operate in a vacuum, and they love predictability, i.e. what they already know)

>> You won't get a very optimal solution by ...

YAGNI. Premature Optimization. KISS.

I am not saying that ^^^ is true, I'm just introducing you to your new conversation buddies for the foreseeable future. People always bring'em along for the conversation.

>>> trying to talk you into a religion

advocacy among unbelievers is always gonna come off like this, especially when the evangelists have dealt with so many naysayers and the apathetic majority. And this is probably the crux of the entire issue. Students' first languages in school are generally imperative languages. They are taught in their youth a path away from functional programming. Which is funny to me, because my school was always promoting that its studies were there to help grow your intellect, not necessarily to fit you into some cog of the industry. But I don't recall much playtime given to functional languages (not as much as they deserved, at least).

My point is that it would be nice if FP was the first tool that was given to students that pour out of universities into developer teams. It would be nice if the easy button for a group was FP, not the other way around. Then it would be much easier to beat the other drums (like the testing drum).


I am sorry, but you sound like you know for a fact that functional programming is better, but you have trouble making others recognize that fact.

Imho, FP has many tangible weaknesses, just a few off the top of my head:

- Immutability is non-intuitive: If I were to ask someone to make an algorithm that lists all the occurrences of a word on a page, they wouldn't intuitively come up with an algorithm that takes a slice of text and an accumulator list of occurrences, and returns a new list extended with one more occurrence (a concrete sketch follows after this list).

- Immutability can cause more problems than it solves: If I were to create a graph of friendships between people, chances are that if I were to add a link between A and B, then not only would A and B need to be updated, but so would everyone who transitively knows A and B (which, since no man is an island, would probably mean everyone). This is highly complex, and probably just as bad as living with mutability.

- FP is not performant: FP code tends to be full of pointers and non-performant data structures like linked lists that have very little memory locality. It also makes optimizations such as updating things in a certain order to avoid recalculation impossible.

- FP has to deal with side effects: The real world has side effects, and your FP code is probably a bit of business logic that responds to an HTTP request and fires off some other request in turn. These things have unavoidable side effects.
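For concreteness, the kind of accumulator-passing algorithm I mean in the first point, sketched in Haskell (isInfixOf is a crude stand-in for real word matching, and the page numbering is invented):

  import Data.List (isInfixOf)

  occurrences :: String -> [String] -> [Int]
  occurrences word pages = go 1 pages []
    where
      go _ []     acc = reverse acc
      go n (p:ps) acc
        | word `isInfixOf` p = go (n + 1) ps (n : acc)
        | otherwise          = go (n + 1) ps acc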


FP isn't just limited to Haskell, yet your criticisms seem aimed only at that one language.

Immutability may not be intuitive, but neither are pointers, hash tables, structured loops, OOP, etc. In any case, maintaining immutable code is certainly more humane than trying to reason about 10th order side effects. Finally, the majority of functional languages are immutable by default with optional mutation when needed.

Your immutability argument is a red herring. It's the wrong algorithm. Bubble sort is much more intuitive than heap or quick sort, but you certainly wouldn't recommend it.

You are flatly wrong about performance. OCaml and StandardML are just as fast as languages like Go or Java. This is incredible when you realize these FP languages are maintained by a handful of people with basically no budget and are keeping pace with languages that have had tens or hundreds of millions of dollars pushed into them.

FP doesn’t mean no side effects (though it does tend to encourage limiting and separating them). Outside of Haskell and Miranda, basically all the other FP languages have side effects and even Haskell has an escape hatch available.


Haskell has to have an escape hatch in order to work, though fortunately it is hidden away.

Every program has effects. A program that does not have any effects (IO) would not be useful, as you can't get anything into or out of it. In FP we manage those effects, in order to help ensure correctness, with the additional benefit that good effect management makes the program easier to comprehend (nee reason about).

Contrast a procedural program with effects littered throughout the codebase, with a program wherein effects are pushed to the edges. In the latter, you and your team know exactly where all of the effects occur. Everything else is pure code: easier to test, easier to comprehend, easier to compose.

Category theory is not required for good effect management. It just so happens that monads like IO fit the problem space nicely; although the same could be achieved with a lazily evaluated definition of computation (i.e. a function).


On this friendship example, I think already FP is showing you something useful about the way you are designing your model.

A friendship is really an edge in a graph. If you manage friendships by recording a collection of friend PersonIds on each person, you are modelling a graph by recording the same edge in two different places, and then hoping the programmer is diligent enough to never make a mistake and update the edge in one place but not the other. If you model the friendship graph as a collection of id pairs, not encapsulated inside a specific person structure, this invalid state is no longer possible.
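A sketch in Haskell of what that looks like (PersonId and the helpers are illustrative names, not from anyone's codebase):

  import qualified Data.Set as Set

  type PersonId = Int

  -- Each friendship stored exactly once, as a normalized pair.
  type Friendships = Set.Set (PersonId, PersonId)

  edge :: PersonId -> PersonId -> (PersonId, PersonId)
  edge a b = (min a b, max a b)

  befriend :: PersonId -> PersonId -> Friendships -> Friendships
  befriend a b = Set.insert (edge a b)

  areFriends :: PersonId -> PersonId -> Friendships -> Bool
  areFriends a b = Set.member (edge a b)

Now "add a link between A and B" touches exactly one entry, and the half-updated invalid state can't be represented at all.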

I'm not sure what you are on about regarding making transitive updates - that idea is obviously not going to work on any moderately sized network. Imagine if Facebook tried to record all your transitive friendships on your account!

Sure immutability is non-intuitive, but it's really helpful! Like borrow checking in Rust, immutability helps eliminate an entire class of bugs. And immutable code is often much easier to reason about, you need to keep significantly less of the context in your head.


I think you haven’t read much of the literature and don’t have any real experience with it on a real project.

There is no intuition when it comes to programming. Everyone has to learn it. You’re appealing to familiarity of concepts you’ve already learned and assume everyone else thinks the same way.

There are highly efficient graph algorithms using pure immutable data structures. You don’t seem to be aware of how much sharing is enabled by immutability nor the algorithms for efficiently updating graphs in a pure functional way. Try reading this book [0].

Again there is a whole zoo of optimizations available exclusively to pure functional languages that you don’t seem to be aware of. Try reading Counting Immutable Beans. Some of the optimizations GHC can do are impressive. It’s not all bad performance.

Want to talk about cache misses? Check out an average C++ code base. It has a dedicated cache-miss operator.

Side effects are everywhere. And yes, in a purely referentially transparent language, it becomes a pain being forced to be explicit about every mutation, network call, and file handle operation when you're used to throwing caution to the wind and flying free with them. Improper handling of side effects includes common use-after-free errors, etc. After years of dealing with this mud all over software, at least for me, explicit handling of effects becomes exactly what you want in order to keep your sanity intact.

[0] https://books.google.ca/books/about/Purely_Functional_Data_S...


> There is no intuition when it comes to programming. Everyone has to learn it.

There is no intuition in anything aside from the very few things we are born with. Everything else is learned from experiences.

And to that end, chances are someone has learned to follow procedure-like steps, like a recipe, at a very early age - indeed, recipes and other "procedural" exercises are among the first things schoolbooks introduce.

Children learn things like "to do X, first do Y and then Z" from a much earlier age, often before even learning how to read and write (e.g. via simple papercrafting in kindergarden) than something like "X = Y*Z". Giving and taking commands (i.e. "imperative" programming) as a concept is something people meet in their lives much sooner and are more exposed to than abstract problems.


I think the jury is out on which programming paradigm is easier to teach to kindergarten aged children.

Let alone adults with no prior training with programming.

I don't believe it's as natural as you seem to think.


I think you're completely missing an entire world of FP languages which are not pure like Haskell: OCaml, Elm, etc. There are languages with side effects as an easily accessible escape hatch that still don't do things which make debugging a pain, like data encapsulation and inheritance.

Also, there are FP langs out there that have better data/heap locality despite pointer-indirected data structures, because they don't share memory, so pointers aren't tied to anything outside the process they are directly running in.

I completely disagree with immutability being non-intuitive. Try explaining to a junior/bootcamp JavaScript or Python developer why, when you pass an integer into a function, changes made in the called function aren't reflected when you jump back into the calling frame... but if you do the same with an object/dictionary...

For junior programmers immutable passing is absolutely the default assumption and so I think it's reasonable to claim it's the intuitive choice.

There's even crazymaking shit where, in Python, if you have a default array parameter, you can seriously fuck up your model of what's going on if you mutate that array anywhere.


>> I am sorry, but you sound like you know for a fact that functional programming is better

I can see the room for misunderstanding. My only argument is that FP is worthy of more playtime than it currently enjoys in undergraduate academia. I'd wager that the current split of imperative/FP playtime is 95/5.


Ah I see. I've been out of school for a while so I don't know how students are taught - but if you take a modern language such as Kotlin, you'll see it's a mishmash of concepts from various paradigms, including FP.


> - Immutability is non-intuitive: If I were to ask someone to make an algorithm that lists all the occurrences of a word on a page, they wouldn't intuitively come up with an algorithm that takes a slice of text and an accumulator list of occurrences, and returns a new list extended with one more occurrence.

What do you base this on? Did you run a study of people who had never programmed before and then ask them to invent a pseudo-programming language to solve this problem? Or are you talking about the "intuition" of people that have already learned to program, most likely in an imperative programming language, in which case your claims to "intuition" don't mean anything.

> - Immutability can cause more problems than it solves: If I were to create a graph of friendships between people, chances are that if I were to add a link between A and B, then not only would A and B need to be updated, but so would everyone who transitively knows A and B (which, since no man is an island, would probably mean everyone). This is highly complex, and probably just as bad as living with mutability.

Firstly, that's not true, it depends entirely on what sort of data structure is used. For instance, a trivial model is a simple list of graph modifications, ie. type Action = AddFriend(a,b) | RemoveFriend(a,b), type FriendGraph = List(Action). Any change to the graph thus requires allocating only a single node.
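The trivial model just mentioned, written out in Haskell (illustrative names):

  type PersonId = Int

  data Action = AddFriend PersonId PersonId
              | RemoveFriend PersonId PersonId

  type FriendGraph = [Action]

  -- Adding a link allocates one new list cell at the head; the
  -- entire existing history is shared, untouched.
  addFriend :: PersonId -> PersonId -> FriendGraph -> FriendGraph
  addFriend a b g = AddFriend a b : g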

Secondly, if you were to model this mutably, a node removal means you would lose information about the fact that two people were friends at one time.

> - FP is not performant: FP code tends to be full of pointers and non-performant data structures like linked lists that have very little memory locality. It also makes optimizations such as updating things in a certain order to avoid recalculation impossible.

This argument is based on assumptions about the functional programming idioms, the compiler, the machine it's running on and more. I could just as easily say that imperative programming doesn't scale because you can't easily parallelize it across cores, and since single-core performance is now a dead end, even if everything you just said were true it's basically irrelevant for the question of performance.

> - FP has to deal with side effects: The real world has side effects, and your FP code is probably a bit of business logic that responds to an HTTP request and fires off some other request in turn. These things have unavoidable side effects.

I'm not sure who denies this. The question is not whether useful software has side-effects, it's what sort of side-effects you actually need for any given program and how those side-effects should be modeled so you can properly reason about them and achieve the program properties you need.

If your programming language has pervasive side-effects then you have to deal with them everywhere, and the failure modes multiply. If your programming language does not support pervasive side-effects, then the side-effects are pushed to the edges of your program where they are more amenable to reasoning. Questions like "how did we get into this state?" are considerably simpler to backtrace as a result.


1. Immutability is non-intuitive. If I were to hand a regular person a book, and ask him how he would list all the page numbers where the word cheese appears, he would probably say something like this: 'I would read through the book, keeping a scrap of paper at hand. Whenever I saw the word cheese, I would write down the page number.' This is an imperative algorithm. I could give more examples, but I hope you get the point.

2. I just wrote the 'friends' example to make a point - common, complex programs often have huge graphs of interconnected objects, whether one would like it or not. Your solution is to build a journal of friendships and breakups - it's not a typical solution for object relations - seeing if 2 people are friends is a linear operation. Keeping the history around might not be necessary or useful.

3. Your FP code runs on a modern OoO CPU whether you like it or not - so it's safe to make that assumption. And multithreading might not be a silver bullet. FP does nothing to break long dependency chains. As for sharing values between CPU cores, considering the latencies involved, it might be cheaper to compute the value locally, but multithreaded optimization is a black art - it's not always obvious if sharing is better.

Another example is when I made a small toy Excel-like spreadsheet that supported formulas - while Excel itself is FP, I realized that the best way to update cell values is to topologically sort all dependent expressions and evaluate them one after the other - an imperative concept.

4. I was just making the point that it's easy to write a function in e.g. Java that is imperative on the inside, but is pure when called from the outside. Other languages will even allow the compiler to enforce this, while still being imperative. So both 'regular' and FP languages can avoid some side effects to some extent, but have to deal with others.


> 1. Immutability is non-intuitive. If I were to hand a regular person a book, and ask him how he would list all the page numbers where the word cheese appears, he would probably say something like this: 'I would read through the book, keeping a scrap of paper at hand. Whenever I saw the word cheese, I would write down the page number.' This is an imperative algorithm.

No, that's a purely functional algorithm. Note how he's just adding numbers to the end of a list and emphatically not mutating the existing items in any way. Tell me what you see as the functional real-world solution to this problem. Do you consider an accounting general ledger to also be imperative? Because it clearly isn't, it's an append-only log where all previous entries are immutable, which is exactly the same thing as the list noting the pages containing the word "cheese".

> 2. I just wrote the 'friends' example to make a point - common, complex programs often have huge graphs of interconnected objects, whether one would like it or not. Your solution is to build a journal of friendships and breakups - it's not a typical solution for object relations - seeing if 2 people are friends is a linear operation. Keeping the history around might not be necessary or useful.

The example doesn't make a point. In either imperative or functional programming, the operations you need and their time and space complexity will dictate the data structures and algorithms to use. Saying that programs have "huge graphs of interconnected objects" is not evidence of anything specific.

> 3. Your FP code runs on a modern OoO CPU whether you like it or not - so it's safe to make that assumption.

Make what assumption exactly? Microcontrollers are in-order. GPUs are in-order. You also made claims that FP programs are full of pointers and non-performant data structures. I have no idea what out of order execution has to do with this claim. Certainly early FP languages were pointer heavy, but early imperative programs were goto heavy, so I'm not sure what exactly you think this says about FP in general.

> And multithreading might not be a silver bullet. FP does nothing to break long dependency chains.

FP encourages and often forces immutability which does help significantly with parallelism and concurrency.

> Another example is when I made a small toy Excel-like spreadsheet that supported formulas - while Excel itself is FP, I realized that the best way to update cell values, is to topologically sort all dependent expressions, and evaluate them one after the other - an imperative concept.

Prove that that's the best way, and not merely the best way you know of, or the best way available to your programming language of choice.


> No, that's a purely functional algorithm. Note how he's just adding numbers to the end of a list and emphatically not mutating the existing items in any way.

They are clearly mutating the piece of paper to add new numbers to it. There is no linked list in sight. They are also flipping the pages of the book in order, another imperative paradigm.

The closest you could come to a pure FP algorithm expressed in physical terms would have been "Say there are still pages I haven't looked at, and say I have a stack of post-it notes on the current page; then, I would check if the word cheese appears on this page, and if it does, I would produce a post-it note with the page number on it, adding it over the stack of post-it notes; I would then flip one page; on the other hand, if I am looking at the last page of the book, I would add another post-it note to the stack if needed, and return the whole stack of post-it notes to you otherwise".

Imperative:

  foreach page in book
    if 'cheese' in page
      write(paper, page.number)
Functional:

  search (page:rest_of_book) post-it-stack =
    search rest_of_book
           (if 'cheese' `on` page
              then number(page):post-it-stack
              else post-it-stack)
  search [] post-it-stack = post-it-stack
Of course, we could write this easier with a map and filter, but I don't think there is any good way to express those in physical terms (not with a book - filters have nice physical interpretations in other domains though).
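In code (as opposed to physical terms) the map-and-filter version is straightforward, assuming the book is a list of (page number, page text) pairs:

  import Data.List (isInfixOf)

  cheesePages :: [(Int, String)] -> [Int]
  cheesePages book =
    map fst (filter (\(_, text) -> "cheese" `isInfixOf` text) book)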

Later edit:

A version of the imperative algorithm closer to what was described in prose:

  repeat:
    word = read_word(book)
    if word == 'cheese':
      write(paper, current_page_number(book))
    if no_more_words(book):
      return paper


> They are clearly mutating the piece of paper to add new numbers to it.

No, they are simply adding a new record for the word occurrence. If the paper runs out, they grab another sheet and continue. This is clearly an append-only ledger, just like that used in accounting. These are both immutable abstractions.

The fact that this is happening on a single sheet of paper is merely an optimization, it's not a property of the underlying algorithm they're employing. The post-it equivalent you describe is simply a less space efficient version of exactly the same algorithm. You're basically saying that tail call elimination makes FP no longer FP.

> There is no linked list in sight.

What do linked lists have to do with anything? You don't think that FP or immutable programming have to use lists do you?

> They are also flipping the pages of the book in order, another imperative paradigm.

Traversing an immutable sequence in order is now an imperative algorithm? Since when?

What's really happening here is that you've already assumed that physical reality and people's intuitions are imperative, regardless of the contortions required.

This example of counting words and the general ledger are perfect examples: it's absolutely crystal clear that all of the recorded entries are immutable and that this log of entries is append-only, which is a characteristic property of FP and immutable abstractions, and yet you have to contort your thinking into looking at the paper itself as some mutable state that's essential to the process in order to call this an imperative process.


> The fact that this is happening on a single sheet of paper is merely an optimization, it's not a property of the underlying algorithm they're employing.

The person is describing the abstraction, not any optimization. If you were to translate their words directly into code, you would have to write code that modifies the piece of paper, because this is what they described.

In contrast, when using a persistent data structure, the abstraction says that I create a new structure that is the old one + some change, but, as an optimization, the computer actually modifies it in place. The implementation is imperative, but the abstraction is functional.

> You're basically saying that tail call elimination makes FP no longer FP.

No, I'm saying that there is a difference between writing an algorithm using tail-call recursion, and writing it using iteration; even if the compiler produces the same code. The tail-call recursive version is FP. The iterative version is imperative. How they actually get executed is irrelevant to whether the algorithm as described is FP or imperative.

In contrast, you're basically claiming that any algorithm that could be abstracted as FP is in fact FP. So, `for (int i = 0; i < n; i++) { sum += arr[i]; }` is an FP algorithm, as it is merely the optimized form of `sum (x:xs) acc = sum xs (acc + x); sum [] acc = acc` after tail-call elimination.

> Traversing an immutable sequence in order is now an imperative algorithm? Since when?

Immutability and FP are orthogonal. Append-only ledgers are a data structure, not an algorithm; and it's algorithms that can be functional or imperative.

On the other hand, yes - traversing a data structure in order is an imperative idiom. In pure FP, the basic operation is recursive function application, not traversal. For loops and tail-call recursion obtain the same thing in different ways.

> This counting words and the general ledger are perfect examples: it's absolutely crystal clear that all of the recorded entries are immutable and that this log of entries is append-only, which is a characteristic property of FP and immutable abstractions, and yet you have to contort your thinking into looking at the paper itself as some mutable state in order to call this an imperative process.

By your definition, I understand this is an FP algorithm for producing a list of the first 100 natural numbers, since it's append-only:

  xs := []int{}
  for i := 0; i < 100; i++ {
    xs = append(xs, i)
  }
You can certainly define FP = append-only, but that is emphatically not what others mean by the term. Instead, most people take FP to mean "expressing the problem in terms of function applications (and abstractions based on them), where function == pure function in the mathematical sense".

> What's really happening here is that you've already assumed that physical reality and people's intuitions are imperative

I've actually shown what I think is actually an FP algorithm represented in physical terms, and how that translates 1:1 to FP code (and the alternative imperative algorithm). I don't think it's fair to accuse me of assuming that physical reality is imperative - at least not without showing your 1:1 mapping of the description to FP pseudo-code.

Edit to add:

Perhaps a fairer representation of the imperative algorithm should have been:

  repeat:
    word = read_word(book)
    if word == 'cheese':
      write(paper, current_page_number(book))
    if no_more_words(book):
      return paper
This is even closer to the prose version, and makes it clearer that it was describing an imperative traversal.


To any leads reading this,

>>> navigating all of the offerings, examining their trade-offs

>> the amount of time you're afforded for that exercise better fit neatly inside a very small window of implementation (in other words you're probably not going to do it, or at least do it justice)

Every project I have worked on that has done a gap analysis has executed faster and more smoothly than those that didn't. A gap analysis being a more rigorous approach to an analysis of tradeoffs, solution capabilities vs requirements/existing software, and sometimes scoring/ranking (often with three recommended options depending on the client's choices in tradeoffs).


>> the advocacy usually feels like someone trying to talk you into a religion

> advocacy among unbelievers is always gonna come off like this, especially when the evangelists have dealt with so many naysayers and the apathetic majority.

If an advocate sees unbelievers and naysayers all around, it is more a reflection of the advocate's tactics. No offense intended. Most developers will gladly hear an overview of how a new technology can make doing X easier, better, or faster. Even when there is a steep learning curve. They may not switch to it quickly or at all, but they are likely to form a positive impression of it.

But if an advocate starts with "your tools are bad", "stop doing Y", "you need to unlearn Z" he is usually digging a hole for his technology, not advancing its adoption. If an advocate cannot produce powerful, convincing examples showing the goodness of his technology (without bashing existing tech) he should stop advocating and start learning. My 2c.


For the most part, agreed. But the arguments for FP are old and, like you're mentioning, could be sold easily by the right orator (those teachers/evangelists exist, btw, in droves on YouTube and elsewhere). I'm only saying that, given what academia and collective professional experience have shown about FP, it would be nice if FP or FP patterns were the first options stressed to budding SW engineers/developers, and imperative architecture were the afterthought (not the other way around, as it probably is in our current state).

In other words, there would be little need to evangelize for FP if it was the base of understanding.

And while there are a lot who want to improve the field, I'd wager that most 9-5's are tired and just wanna finish a good day's work and rest.


But why should FP be the basis for understanding?

Let's say I run a CS department. And let's say that I've accepted that what my department is really doing is jobs training for software engineers. Well, what are their jobs going to be, imperative or functional? For most of them, for most of their careers, the jobs are going to be imperative. Why should we start with FP?

You could say "FP is better", but then we're back at the OP's point. FP is better for some things, and worse for others.

For you to say that FP should be the starting point, you need to show that FP is better for the majority of programming situations, or else to show that FP is the better way to learn programming for the majority of students. I have never seen anyone seriously attempt to show either one. (Some argue "it fits how we think better", but what they mean is that it better fits how they personally think, ignoring that different people think differently.)


It's about rope. Imperative languages generally give you a lot of flexibility (for X or Y) and, therefore, rope to hang yourself. I don't believe that starting students there, from a pedagogical perspective, is a good strategy. FP languages/paradigms, on the other hand, are all about restrictions (immutability, side effects, etc.), and thus less rope. Fewer places to hang yourself, so to speak.

Also, even though, as you've stated, SW engineers tend to work in an imperative environment (which I'm arguing is an artifact of their formative years), junior SW engineers should at least start with a bit of trepidation about using that rope (if only in their heads).

Plus, utilizing a functional style (and understanding the whys of functional style, where pragmatic) would improve many aspects of industry (e.g reducing the friction of adding tests - did I mention that I love tests??)


This rope only matters for production oriented systems. Most programmers are doing quotidian processing tasks. Manipulating CSVs, processing data to get statistics on, plotting points on maps, maybe writing a simple automation. Almost every software engineering class I read about when I was a graduate student teaching undergrad classes spent time discussing the pitfalls of the "rope of mutability" and explicitly discussed the idea of immutability to make this kind of programming safer. I agree with another poster that it's just much easier to teach general programming skills and thinking procedurally. I do think that programmers have to unlearn some of this when writing production-grade software, but most programmers will never write anything like that.


FP has one distinct advantage over many other paradigms: it has very sound and well-understood theoretical foundations, being essentially based on formal mathematical logics (e.g. System F and its derivatives). This has huge implications for correctness and reasoning.


I'll get made to take shortcuts to get the product out before I'll be allowed to be strict on correctness if it means pushing a deadline back even a single day. I love correctness, but it's just not often important to people who aren't me and for reasons completely outside of my control.


Correctness is important to everyone. Presumably everyone involved would rather software work than not work. Time commitments are what they are, and sometimes quick and dirty is what you have to do, but I don't actually think there is some hard tradeoff between fast and correct. The whole idea of having rigorous foundations is that you have a set of primitives which compose together in well-understood ways, so you can build correct programs quickly without necessarily re-inventing the wheel each time.


Another reason is that imperative languages have a lot of business inertia around them. It's expensive to rewrite existing code or switch to a new language, and most businesses can't justify this cost.

I love functional programming, but I doubt most companies that sell CRUD apps care about it.


Another reason is that imperative languages are all that's necessary for many (maybe even the majority) of business use cases.

Really a lot of it boils down to if this else that, and little more.


"all that's necessary" implies there's something fundamental about imperative languages that's intrinsically more basic. In fact, the opposite is true. If you're problem statement is "in response to event X, transform Y into Z, stick it in a database, then take A from the database, reshape it into B, and send it back to the user" then weak functional approaches are the natural solution. (languages like Elixir or idiomatic JavaScript).


We use Clojure at my work and it's basically functional Java with parens. No snottiness or evangelism needed. The fact that I don't have to write Java OOP boilerplate is a big enough advantage to me. Even if you just write it like Python or JS, it's still leagues better.


What does gatekeeping mean in this context? Genuine question.


Something like "oh, you're using map() and filter() in your C++ program? you're not doing actually doing FP unless [...]".

See https://news.ycombinator.com/item?id=33438320 for an example of this exact attitude.


Is that gatekeeping? It reads to me like they're saying it's not enough to just use a few functions to call it FP. Their point seems to be that it's more than just using some functions, which seems reasonable.


It's strange to me that this article focuses so much on nullability, which seems like a tangential issue. There's nothing stopping an imperative language from enforcing nullability checks. Indeed, with full strictness enabled, TypeScript will do just that, including requiring you to check every indexed access to an array.


Not only that, but directly under the heading "Nullifying problems with null references" they start to describe problems with global variables. The article is all over the place. There may be arguments for functional programming but I wouldn't trust this writer because their thinking is so sloppy.

Also: "But many functions have side effects that change the shared global state, giving rise to unexpected consequences. In hardware, that doesn’t happen because the laws of physics curtail what’s possible."

The laws of physics? That's complete waffle. What happens when one device trips a circuit breaker that disables all other devices? What happens when you open the door to let the cat out but the dog gets out as well?


This is a good point and I agree it buys you similar safety. The annoying part is it isn’t a monadic data structure. You usually get some syntactic sugar for a limited subset of mapping (optional chaining), but a lot of the time that’s insufficient and you get imperative code.


Speaking as someone who recently switched from F# to TypeScript: yes, "annoying" is just what it is.

Until TypeScript at least has pattern matching you'll be writing at least 50% more code, maybe more.

Chaining would be a godsend, but without auto-currying, it's unsightly. Computation expressions would be better, and would probably be easier to implement if, say, TypeScript included a "native" `Option`, `Result`, or `Async`.
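
For contrast, here is a minimal Haskell sketch of what do-notation (Haskell's analogue of F# computation expressions) buys you; the lookup tables are invented purely for illustration:

    import qualified Data.Map as M

    -- Hypothetical data: user id -> address, address -> zip code.
    addressOf :: M.Map Int String
    addressOf = M.fromList [(1, "123 Main St")]

    zipOf :: M.Map String String
    zipOf = M.fromList [("123 Main St", "02139")]

    -- Each step short-circuits to Nothing on failure; no explicit null checks.
    userZip :: Int -> Maybe String
    userZip uid = do
      addr <- M.lookup uid addressOf
      M.lookup addr zipOf

    main :: IO ()
    main = print (userZip 1)  -- Just "02139"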


And you will end up with monads anyway in FP, which are probably about as error-prone as missing null checks.

Many compilers warn about potential null problems as well.


Yes, enforced nullability checks is just a type system feature. Another example that has it is Crystal, which is an OOP language.


Functional programming proponents like the blog's author remind me of Linux users who swore by it as a user-friendly OS, thought of everyone else as idiots, and refused to admit its serious flaws as a general-purpose OS. I am not criticizing FP. It has great ideas and many of them have been actively borrowed into other languages. But it's just annoying to see bad analogies that get repeated again and again, like these:

> Now, imagine that every time you ran your microwave, your dishwasher’s settings changed from Normal Cycle to Pots and Pans. That, of course, doesn’t happen in the real world, but in software, this kind of thing goes on all the time.

> Let me share an example of how programming is sloppy compared with mathematics. We typically teach new programmers to forget what they learned in math class when they first encounter the statement x = x + 1. In math, this equation has zero solutions. But in most of today’s programming languages, x = x + 1 is not an equation. It is a statement that commands the computer to take the value of x, add one to it, and put it back into a variable called x.

Deja vu! I read exact same arguments 10 years ago. Maybe if FP did reduce the bugs, you'd have some stats and successful projects to back them up.

I worked at a company where FP was heavily used. It didn't magically reduce the number of issues we had to fix. Possibly it increased them, because of the number of things we had to build from scratch. The company is default dead[1], now. Maybe bugs are not a symptom of the paradigm, but of how strongly the systems and teams are architected to prevent them.

[1]: http://www.paulgraham.com/aord.html


Did you mean to comment on the analogies you quoted other than to say that they are often used? Is being useful a problem for an analogy?

> But in most of today’s programming languages, x = x + 1 is not an equation. It is a statement that commands the computer to take the value of x, add one to it, and put it back into a variable called x.

This is on page 1 of every basic programming book when it's explaining how "variable" differs between math class and programming class. I can't for the life of me see what upsets you about it.


They are stupid because they don't address the fact that most procedural languages themselves have features to prevent re-assignment and mutable state (like const). Also, variables reflect the state of the program, while microwave settings are inputs. The switching of settings doesn't make a whole lot of sense.

> This is on page 1 of every basic programming book when it's explaining how "variable" differs between math class and programming class. I can't for the life of me see what upsets you about it.

I never had a problem grasping the concept. I never equated "=" in programming to "=" in Math. It's just a symbol. Replacing "=" with "<-" would mean the same thing.


I believe functional programming is interesting, and fun. But it is not going to replace the world. My background has spanned from working on AAA games, to simulations for the DoE, to owning my own business developing embedded products, and graduate school (yes, I went back as a gray haired, and did that late). I fell in love with Lisp many years ago on a TI Explorer.

In functional languages, it is inherently difficult to develop applications that require state to change in non-deterministic ways. In fact, I challenge you to develop a first-person shooter in Haskell (Have fun).

There are many types of applications where functional languages are perfect, but there are more where they would be a disaster. To make broad sweeping claims, such as this article, just encourages unnecessary discourse, and shows the ignorance of the author, and his limited understanding of the domain of problems that will benefit from functional languages, and the larger domain of problems that will not.

As an employer, I don't hire people for their functional programming skills. If they have them, all the better, but we have over 3 million lines of code in C++, and close to a million in Java. We are not starting over, and new projects will leverage existing code.


> In fact, I challenge you to develop a first-person shooter in Haskell (Have fun).

https://hackage.haskell.org/package/frag

> There are many types of applications where functional languages are perfect, but there are more where they would be a disaster.

Can you give an example or two of an application where a functional language would be a disaster?

> To make broad sweeping claims, such as this article, just encourages unnecessary discourse, and shows the ignorance of the author, and his limited understanding

No, it's an opposing viewpoint to the popular "right tool for the job" and "take the best from functional, best from imperative, and smash them together".

See "The curse of the excluded middle by Erik Meijer".

> As an employer, I don't hire people for their functional programming skills.

I certainly choose my jobs with language and ecosystem in mind.


frag-1.1.2 failed during the building phase. The exception was: ExitFailure 1

Go fix that for us.

``Can you give an example or two of an application where a functional language would be a disaster?``

Sure: code that tests itself. Not happening. Which means most embedded applications where fault tolerance is a must. If you don't know what caused the error, well, let's hear your ignorant response. Explain how you trap those problems in a functional language, and do it as simply as you can in, say, C or Pascal.

``opposing viewpoint to the popular "right tool for the job"``

So you advocate a hammer for a screw. Got it. You're exactly the kind of person I won't hire. You're so bent on proving you're right, you ignore the right tool for the right job.


> frag-1.1.2 failed during the building phase. The exception was: ExitFailure 1

> Go fix that for us.

Isn't there an error message just above saying that you need to install the development version of the GL library? Fix it yourself; we can't!

Granted, that error message could be made easier to read.


> In functional languages, it is inherently difficult to develop applications that require state to change in non-deterministic ways. In fact, I challenge you to develop a first-person shooter in Haskell (Have fun).

Don't know about Haskell but it would be very fun to do it in Lisp.

Performance sold separately.


Common Lisp has a "pretty OK" story for calling C code whenever some speed is needed [0,1]. In my opinion, they suffer from some of the documentation/quick-start problems that Common Lisp has, but they're otherwise usable.

Some of Naughty Dog's late 90's/early 2000's games (Jak and Daxter, Jak II) were written in a lisp called GOAL, Game Oriented Assembly Lisp [2]

[0] https://github.com/rpav/cl-autowrap [1] https://github.com/cffi/cffi#cffic2ffi [2] https://en.wikipedia.org/wiki/Game_Oriented_Assembly_Lisp


Then try it.


At the last two places I worked, I gradually led my teams from traditional OOP Ruby or Python behaviors to at least partially functional (if you'll pardon the pun).

The immediate value I was able to demonstrate was in testing. Three basic (not too scary) principles can get you a long way:

1. push mutations and side effects as far toward the edges as possible (rather than embedded in every method/procedure)

2. strive for single responsibility functions

3. prefer simple built-in data structures (primarily hashes) over custom objects... at least as much as possible in the inner layers of the system

If these steps are taken, then tests become so much simpler. Most mocking and stubbing needs evaporate, and core logic can be well tested without having to touch the database, the api server, etc. Many of the factories and fixtures go away or at least become much simpler. You get to construct the minimal data structure necessary to feed to a test without caring about all the stuff that you normally would have to populate to satisfy your ORM rules (which should be tested in their own specific tests).
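
To make principle 1 concrete, a minimal Haskell sketch (the file name and numbers are invented; the same shape works in Ruby or Python): the calculation is a pure function you can test directly, and all I/O sits at the edge.

    -- Pure core: test it by calling it; no database, mocks, or fixtures needed.
    averageOrderValue :: [Double] -> Maybe Double
    averageOrderValue [] = Nothing
    averageOrderValue xs = Just (sum xs / fromIntegral (length xs))

    -- Impure edge: the only place that touches the outside world.
    main :: IO ()
    main = do
      contents <- readFile "orders.txt"                  -- side effect at the edge
      let totals = map read (lines contents) :: [Double] -- plain data in
      print (averageOrderValue totals)                   -- plain data out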

Once devs see this, they often warm up quickly to functional programming. Conversely, the quickest way to get an OOPist to double down and reject any FP is to build complex chains of collection operations which build and pass anonymous functions everywhere. Those things can be done where appropriate, but they don't provide as much early bang for the buck... and they likely prevent FP from getting a foot in the door to that team.

The only real downside is that naming things is hard, and good single responsibility practices result in a lot more functions that need good names.


> prefer simple built-in data structures (primarily hashes) over custom objects

I've found this is actually one of my biggest problems with functional code as currently written: people seem afraid to just declare a struct/record, in lots of cases where it's obviously the right thing.

If everything is a hash, you've just made all arguments optional and now you've invented a bad type system inside your good type system.

If everything is a tuple (more common in my experience, from reading Haskell), now you know (int, int) is actually a pair of ints, but you've thrown away variable names and nominal typing: is it a 2D vector, a pair of indices into an array, or something else entirely?

Defining custom types is the elegant solution to this: you have `struct range { start: int, end: int }` and now your functions can take a range and everything is great.


It's not about being afraid to define a struct/record (rigid type). It's about being afraid of being stuck with something that doesn't meet your needs later or elsewhere, if only by a little bit.

That Range type is great until you have cases where you want variations on a Range, such as a MeasuredRange (same start and end, but with a new field called step_cost). Original functions which take a Range can't cope with this new, completely different struct called MeasuredRange. So now we have to change them all to accept Range or MeasuredRange, or we need some kind of type hierarchy to relate them in appropriate ways.

The alternative is to accept the runtime risk of receiving a thing which doesn't have the start and end that you needed. Or of course if it were a very important piece of logic where failure was not an option, you just make your do_rangy_thing() require explicit start and end parameters. Then it's up to the caller to call do_rangy_thing(my_range[:start], my_range[:end]). Ultimately that just moves your failure risk up one level, though.

Likewise, you can use hash destructuring in Clojure like (do_rangy_thing [{:keys [start end]}] ... or pattern matching in Elixir like def do_rangy_thing(%{start: s, end: e}) ..., both of which will blow up at runtime if the hash passed in doesn't have the required keys.

Many people accept the runtime risk because it greatly simplifies code at the cost of runtime safety. The worst thing, in my view, is when languages try to bolt on some kind of type strictness later and end up solving the Range vs MeasuredRange problem by listing all possible accepted types or just throwing up their hands and saying Any.

I don't know Haskell, but from what I've heard about it I find it surprising that (int, int) is as common as you say. I thought they were very much about detailed specific types, with the theory "if it compiles, it works". My guess is that generic tuples are just an indication that some of the participants didn't want to deal with the complex type system and have a thousand different narrow case types defined.


re Haskell: I just looked at a few arbitrary files in arbitrary Haskell projects on Github, and I retract that part. I was either thinking of another language, or had seen a lot of bad Haskell code before that I cannot find now.

----

Isn't this just the nullability thing again that most people have decided was a bad idea, but with a different name? Instead of two ints, now your function takes two maybe-ints and crashes at runtime if they're not there.

I do think the thing you're asking for (interfaces but for fields instead of functions) is a missing feature in most languages. I understand why that is, but it's still annoying.

The most well-known language I can think of that explicitly supports it in an otherwise static type system is TypeScript, where a type can be "object containing non-nullable ints called start and end, but with no constraints on other fields that may be there"[0]. It's technically a hash map at runtime, but it's a compile-time type-checked hash map.

In other languages, you could bodge it with a bunch of Java-esque getX/setX functions. I don't blame people for not doing so.

[0] https://www.typescriptlang.org/docs/handbook/2/objects.html


> Instead of two ints, now your function takes two maybe-ints and crashes at runtime if they're not there.

The idea in Haskell (and many more, I'm sure), is that if you take a maybe-int, you can't get the int without unpacking it (or using a function that only operates on the int if its there), so then when you pass the unpacked int around, you are 100% sure it's there at all times. No worrying about a JS-style undefined, or some other function altering the value to become null again. It's a guarantee, which is a huge part of why all this FP stuff is so great.
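
A tiny sketch of that guarantee: once the pattern match succeeds, the Int is unconditionally there, and nothing downstream can make it null again.

    describe :: Maybe Int -> String
    describe Nothing  = "no value"
    describe (Just n) = "got " ++ show (n + 1)  -- n is a plain Int here, guaranteed

    main :: IO ()
    main = mapM_ (putStrLn . describe) [Just 41, Nothing]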


Functional or procedural or whatever, it doesn't matter much for me as long as the paradigm is not OOP.

I strongly believe that data should live separately from the actions performed on it. I also believe that inheritance is a bad thing, as there are other, better means to achieve polymorphism.

I do believe in a data-oriented style of programming where we waste as few CPU cycles as possible and introduce as few abstractions as possible.


I am working on a Java game where I've thrown all OOP knowledge out the window and use C-style pure data classes with public fields and no methods.

It's pretty refreshing to work this way compared to the design pattern madness you see in enterprise applications - but I guess it's not very safe if multiple people are working on this and some don't know what they are doing.

There has to be some middle ground, I think people have been going way overboard with OOP in the last two decades.


Me too. I use .NET but try to use it in the least-OOP way possible. Of course, for interviews I can recite all the things about OOP, countless patterns and their "advantages", SOLID, DDD done the OOP way, and most of Uncle Bob's "wise" teachings.


I dunno, even in C codebases, it's not uncommon to have data structures containing function pointers. Polymorphism isn't inherently bad and trying to achieve it using, say, enums and big switch statements isn't particularly maintainable.


As someone who uses Scala on a daily basis, I might be biased. However, functional programming is truly the future because of its mathematical guarantees. Pure functions give us referential transparency, i.e. we can substitute a piece in a large layer and still expect the entire thing to work, as long as it has no side effects. This has so much relevance to software maintainability. The downside is that, to take advantage of this mathematical guarantee, the code has to be pure through and through. A single side effect can spoil everything. Immutability is another aspect of pure functions, i.e. a pure function wouldn't mutate its inputs.


Maybe it ought to be but it definitely isn't if you look at success as industry adoption.

The author seems to have good intentions and covers all the talking points a new convert will discover on their own.

However I'm afraid an article like this will do more harm than good in the end. There are too many network effects in play that go against a new paradigm supplanting the mainstream as it is. And the benefits of functional programming pointed out in this article haven't been convincing over the last... many decades. Without large, industry success stories to back it up I'm afraid any amount of evangelism, however good the intention of the author, is going to fall before skeptical minds.

It doesn't help that of the few empirical studies done none have shown any impressive results that hold up these claims. Granted those studies are few and far between and inconclusive at best but that won't stop skeptics from using them as ammunition.

For me the real power of functional programming is that I can use mathematical reasoning on my programs and get results. It's just the way my brain works. I don't think it's superior or better than procedural, imperative programming. And heck there are some problem domains where I can't get away from thinking in a non-functional programming way.

I think the leap to structured programming was an event that is probably only going to happen once in our industry. Aside from advances in multi-core programming, which we've barely seen in the last couple of decades, I wouldn't hold out for functional programming to be the future of the mainstream. What does seem to be happening is that developments in pure functional programming are making their way to the entrenched, imperative, procedural programming languages of the world.

A good talk, Why Isn't Functional Programming the Norm?

https://www.youtube.com/watch?v=QyJZzq0v7Z4


A lot of the time, arguments for functional programming seem to describe some form of total programming and avoiding partial functions: enforcing null checking or exhaustive matching, avoiding panics, etc.

When I was going through Functional Programming classes in Haskell, the teacher tried to separate total programming and functional programming.

For instance, Rust programs rarely use function composition compared to Haskell. He didn't consider Rust a very good functional programming language for that very reason. But at the same time, Rust has good total programming tools like exhaustive checking, Option, Result, etc.

Does anyone else try to separate functional programming and total programming?


Totality can be split into two separate properties:

- All programs are defined for all inputs (exhaustive)

- All programs terminate/coterminate

The former is becoming more common (like your Rust example), but the latter isn't very widespread. For example, most would consider a function like this to be exhaustive, even though it loops forever when both Ints are non-zero:

  foo : (Int, Int) -> Int
  foo (0, y) = y
  foo (x, y) = foo (y, x)

Proving termination is hard, and often undesirable; e.g. we want servers and operating systems to keep running indefinitely. However, co-termination can be quite easy, e.g. if we define a Delay (co)datatype:

  data Delay t where
    Now   : t -> Delay t
    Later : Delay t -> Delay t

Wrapping recursive calls in 'Later' allows infinite processes, at the cost of some boilerplate (Delay is a monad, if you know what that is):

  foo : (Int, Int) -> Delay Int
  foo (0, y) = Now y
  foo (x, y) = Later (foo (y, x))


Arguments from academics and mathematicians will favor provability.

Pure functional code, where everything is function composition, has more hope of producing a valid mathematical proof for a block of code. CS/math people will favor this over the aspects that get grouped into total programming, which often don't help provability.


I'm reminded of the following quote by the co-author of SICP - Gerry Sussman...

"Remember a real engineer doesn't want just a religion about how to solve a problem, like object-oriented or functional or imperative or logic programming. This piece of the problem wants to be a functional program, this piece of the program wants to be imperative, this piece wants to be object-oriented, and guess what, this piece wants to be logic feed based and they all want to work together usefully. And not because of the way the problem is structured, whatever it is. I don't want to think that there's any correct answer to any of those questions. It would be awful bad writing a device driver in a functional language. It would be awfully bad writing anything like a symbolic manipulator in a thing with complicated syntax."


The problem is sometimes you lose a lot of functionality when you allow for that flexibility. For example, let's say you want your functional language to allow for side effects. Immediately you've lost locality: the explicit arguments of a function no longer determine its result. Or maybe you want to allow mutability: there goes thread safety. The fewer hard restraints a language can impose, the less lifting the language can do on your behalf. Give up on Lisp's S-expressions, and the macro system becomes vastly more complicated.

Likewise, a language has a limited amount of space for syntax before you end up with a mess. So languages adopt a paradigm, and optimize the language for that paradigm. Functional-style Java is bloated to hell, because Java is built for Object Oriented programming.

Frankly, I'm of the opposite mind of the quote: the middle ground is often worse than either extreme. I'll happily write Object Oriented C#, or Functional Haskell, over a tepid mess of C++.


A refreshingly pragmatic position for an academic. Where did you get that quote?


From a quick Google, likely from a talk on YouTube. Matching section timestamped - https://youtu.be/O3tVctB_VSU?t=2346


isn't it amazing in this day and age we can just google a 7 year old quote and basically pull up the actual video of it instantly and send it around timestamped?

some days technology sucks, but other days it's a wonder it works as well as it does


Thanks. Very impressive to hear this from the Scheme pope himself. This should make all dogmatists think.


> A refreshingly pragmatic position for an academic.

There are plenty of pragmatic academics. It just happens that academia is about the only place that the non-pragmatic people can find long-term career success, but that doesn't mean there aren't many pragmatic people in the same space.


> There are plenty of pragmatic academics

Sure, just as there are dogmatists in the industry, as the article shows.


Basically the antithesis of the "silver bullet". That is, there is no silver bullet.


A silver bullet may be able to elegantly kill the werewolf, but throw enough depleted uranium his way and the problem should go away soon enough.

I suppose that's an analogy for object oriented design or something.


I'd strongly recommend checking this paper out:

Out of the Tar Pit - http://curtclifton.net/papers/MoseleyMarks06a.pdf

I agree that functional programming is part of the future. I believe that the relational model is the other part. In this space, imperative programming exists primarily to bootstrap a given FRP domain.

We've built a functional-relational programming approach on top of SQLite using this paper as inspiration. Been using this kind of stuff in production for ~3 years now.

Remember: your user/application-defined functions in SQL do not need to be pure. You can expose your entire domain to SQL and build complete applications there, with the domain data & relational model serving as first-class citizens. With special SQL functions like "exec_sql()" and storing your business rule scripts in the very same database, you can build elegant systems that can be 100% encapsulated within a single .db file.


I personally think Rust should be included in this group. It isn't technically a functional language, but it has a "functional flair" with many of the same benefits as functional programming provides, and with some of the features (pattern matching, sum types, etc.). It takes a different approach to mutability, but the net benefit I think is about the same.


Functional programming, as a paradigm, is way better than object-oriented programming, and also proportionally more complicated than object-oriented programming. My observation, over the past 30 years, is that 90+% of programmers aren't even capable of doing object-oriented programming - so getting them to do functional programming is a pipe dream.


> getting them to do functional programming is a pipe dream

I don't know if that was intentional but I'm upvoting!


> 90+% of programmers aren't even capable of doing object-oriented programming

Similar to other takes I've seen in this thread. But isn't it flawed to talk about being "capable of object-oriented programming" when object-oriented programming is itself an ill-defined, flawed idea? (I'm talking C++/Java/C#/textbook OO, not Smalltalk.) I spend a majority of my time in C#, and I'm not really sure I'm doing object-oriented programming either. Learning curves are never linear, but if it were, the curve for OO in C# would look something like:

Level 0: God-classes, god-methods. Puts the entire program in the "Main" method.

Level 1: Most of the logic is in "Main" or other static methods, with some working, mutable data stored in classes. No inheritance.

Level 2: Logic and state are starting to get distributed between classes, but lumpily -- some classes are thousands of lines long and others are anemic. Inheritance is used, badly, as a way to avoid copy/pasting code. Short-sighted inheritance based on superficial similarities, like "Dog : Animal". No clear separation of responsibilities, but "private" is starting to make an appearance. If design patterns are here, they're used arbitrarily. Still lots of mutability around. This is "OO Programming" as taught in early textbooks. It's bad.

Level 3: Methods and classes are starting to ask for contracts/abstractions instead of implementations. Inheritance hierarchies are getting smaller, include abstract classes, and are starting to be organized by need and functionality, rather than by superficial similarity; things like "TextNode : Node". Classes are clearly articulating their public surface vs. private details, with logic behind which is which. Generics are used, but mostly just with the built-in libraries (e.g. IEnumerable<T>). Design patterns are used correctly. Mutability is still everywhere. If interfaces are used, it's in that superficial enterprisey way that makes people hate interfaces: "Foo : IFoo", "Bar : IBar", for no discernable reason. This is "OO Programming" as taught in higher-level textbooks.

Level 4: No more "Dog : Animal". If inheritance is used at all, it's 1 layer deep (not counting Object), and the top layer is abstract. Code de-duplication is done via composition, not inheritance. Fluent/LINQ methods like .Select() [map] and .Where() [filter] have mostly replaced explicit loops. A large percentage of the code is "library" code -- new data structures and services for downstream use. Generics are everywhere, and not just with standard-library classes. Interfaces are defined by the needs of their consumers, not by their implementations -- you may not even see an implementation of an interface in the same project it's defined (this is a code-fragrance; a good smell!). Liberal use of Func<> and Action<> has eliminated almost all of the explicit design patterns and superficial inheritance that used to exist. Mutable state is starting to be contained and managed, perhaps via reactive programming or by limiting the sharing of mutable objects. This doesn't look much like OO as taught in textbooks.

Level 5: Almost all code is library ("upstream") code, with a clear, acyclic dependency graph. Inheritance is virtually absent; an abstract class may show up occasionally, but only because it hasn't been replaced with something better yet. Most code is declarative using fluent/functional-style methods on immutable data structures, like .Select() and .Where(). Where Level 4 may have abandoned that style at the limit of the "out of the box" data structures, Level 5 just writes their own immutable fluent/functional data structures when they need to. This means heavy use of interfaces, Func, and generics, including co- and contra-variance. It also means adapting ideas from the functional world, such as Monoids and the "monadic style" (but not an explicit Monad type, both due to the lack of higher-kinded types and due to the fact that Monad is a red-herring abstraction that is not useful on its own). Most code looks like it's written in a mini domain-specific language, whose output is not a result, but a plan (i.e. lots of lazy evaluation, but with sensible domain-specific data structures, not with raw language elements like LISP). Data is largely organized via relational concepts (see: Out of the Tar Pit), regardless of the underlying storage layer. Identity and state are separated. Data and function blend seamlessly. Mutability is almost exclusively relegated to the internals of an algorithm, mostly in said data structures. Virtually no mutable state is shared unless it's intrinsically necessary. If it is necessary, it's tightly controlled via reactive programming or something similar. A few performance-critical loops look almost like C, with their own memory models, bit twiddling, and other optimizations, but these are completely internal, private details, well commented and thoroughly tested. This looks nothing like OO Programming as taught in textbooks. It looks a lot more like functional programming (with some procedural sprinkled in) than OO.

If there's a Level 6, I'm not there yet, nor have I seen it (or known what I was looking at if I did).

So when I see someone say "programmers aren't doing OO programming", I don't know what that means. Only Levels 2 and 3 above look much like "object-oriented programming". If nobody told you C# was supposedly an "object-oriented language", and all you saw was Level 5 code, would you know OO was supposed to be the overriding paradigm?

Are people avoiding OO programming because they can't do it, or because they evolved past it? To someone stuck at Level 3, Level 5 code might look unnecessary, overly complicated, whatever. It might look like code written by someone who doesn't know how to do OO.


> The biggest problem with this hybrid approach is that it still allows developers to ignore the functional aspects of the language. Had we left GOTO as an option 50 years ago, we might still be struggling with spaghetti code today.

This is demonstrably false: C has always had a goto, and its use is, by custom and in practice, greatly circumscribed.


Even though this article comes from a reputable source, it should be pointed out that the author is not a researcher in the area -- and the decision not to include various MLs, OCaml, Scala, or F# in the chart of functional languages seems controversial. So this article does not speak for the community. If you want to read more about using Functional Programming in Industry, I would recommend Yaron Minsky's https://queue.acm.org/detail.cfm?id=2038036 instead.

Why did the GOTO statement fall out of favor with programmers? If you look at Knuth's famous article weighing the importance of GOTO (https://pic.plover.com/knuth-GOTO.pdf) you can see many calculations where the GOTO statement can save you a tiny bit of runtime. Today, these matter far less than all of the other optimizations that your compiler can do (e.g. loop unrolling, inserting SIMD instructions etc). Similarly, in some domains the optimizations that functional compilers can do matter more than the memory savings mutation could bring.

Personally, I believe that within the next decades memory usage will matter more, but even then functional programming languages can do well if they can mutate values that only they reference (https://www.microsoft.com/en-us/research/uploads/prod/2020/1...). This does not break the benefits of immutability, as other program parts cannot observe this mutation.

I disagree with the article's premise that it is "hard to learn". It might be today, but it doesn't have to be. Monads are usually difficult for beginners, but algebraic effects are almost as powerful while being much simpler. They have slowly become mainstream (and might even make it into WASM!). It is an exciting time for functional languages, and many people are working to make them even better!


> the decision not to include various MLs, OCaml, Scala, or F# in the chart of functional languages seems controversial

I don't know why that would be controversial. There's a very clear distinction between (MLs, OCaml, Scala, F#) and (Haskell, Elm, PureScript, etc.).


Sure, they are in different language families. But a chart showing the "top dozen functional-programming languages" should include all languages that were given a separate workshop at the International Conference on Functional Programming (https://icfp22.sigplan.org/): Haskell, ML, OCaml, Scheme, Erlang, miniKanren (the last two are arguably specialised enough that they could be excluded to put more focus on general-purpose languages).


Doesn't include Clojure in GitHub repo count.

"Functional programming also requires that data be immutable" Not true


The D programming language is a good example which defines pure functions, not as strictly operating on immutable data, but simply as functions without side effects. So pure functions can still have local mutable variables, but cannot mutate any shared or global state. It goes further with "weakly pure" functions which can have mutable parameters/arguments, but still cannot mutate any global variables.
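
Haskell expresses the same idea with the ST monad; a minimal sketch: the function uses a local mutable reference internally, and runST guarantees the mutation can't escape, so callers see an ordinary pure function.

    import Control.Monad.ST
    import Data.STRef

    -- Pure from the outside, mutable accumulator on the inside.
    sumST :: [Int] -> Int
    sumST xs = runST $ do
      acc <- newSTRef 0
      mapM_ (\x -> modifySTRef' acc (+ x)) xs
      readSTRef acc

    main :: IO ()
    main = print (sumST [1 .. 100])  -- 5050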


Not to mention you can use just about any language provided you adhere to FP standards. It doesn't need to be forcibly "baked in".


Pure functional programming does require immutable data. There are lots of FP languages that aren't pure, though.


No, it doesn't. Pure FP is about pure functions, meaning only that they don't have side effects (essentially just that you could replace the function call with its return value without changing the semantics of the program). If a function mutates some global state then that would certainly make it impure, but a function which takes an input it exclusively owns, mutates it, and returns the mutated value is still observationally pure.


I like the philosophy of Rust (and some other languages) of "safe by default". Rust variables are immutable by default, but can be made mutable using "let mut". Rust is memory-safe by default, but can be made unsafe using the (drum roll...) "unsafe" keyword. As for null references, other languages still have them but enforce "strict null checking", such as Kotlin and TypeScript. They force you to verify a reference is not null before using it, but without the verbosity of an Optional or Maybe type.

Functional programming is great, but it's far from optimal in many situations. For example, implement a self-balancing binary search tree using a common imperative language with mutability. Then try implementing it again but using pure functional programming. Certainly very possible but also requires a lot more work when you're not allowed mutability.


A note regarding your binary tree example:

I'd argue that FP is actually great for working with tree-like data structures. For example, the common implementations for sets and maps are based on balanced trees. IME it's graph-like or array-based stuff where the paradigm struggles.


I don't disagree that manipulating deep arrays and graph data can be harder in an immutable functional language, at least with the default facilities. However, many FP languages have powerful and mature libraries that let you work with deep data structures very gracefully and correctly. Clojure has Specter, Haskell has lenses and stuff.

For me, the FP dream (and we aren't quite there yet) is that your program is an airtight declaration of the desired functionality, and then we get out of the way and let the compilers and profiling runtimes optimize away.

How cool would it be if the runtime stats could let the JIT either unroll and inline a function if it's called on small arrays from network buffers, or push it to the GPU if it's frequently called on huge in-RAM buffers? Not there yet, but as the tower of complex tooling needed to work in a compute-parallel world keeps growing, we are going to be relying more and more on the compiler/runtime to optimally use that power.


See my cousin comment; I just wanted to make the point that working with tree-like data structures in FP feels rather natural.


Tree data structures are pretty prevalent in compilers. Like that, perhaps?


> For me, the FP dream ( and we aren't quite there yet) is that your program is an airtight declaration of the desired functionality, and then we get out of the way and let the compilers and profiling runtimes optimize away.

I would guess we are at least 100 years away from this. Current imperative compilers miss obvious optimizations and FP typically needs many layers of optimization before it would be efficient. Any one layer failing to make progress would result in a very slow program.

Currently FP compilers generally don't even use successors for natural numbers because it would be slow. Typical programs use other data structures more complex than numbers that would need to be optimized into a mutable structure. If FP compilers can't do it for integers, they definitely can't for FP programs as a whole.


great for reading trees, but not for cases where nodes need to be added or removed. There's a "zipper tree" structure but it's kind of a pain to implement:

https://github.com/xdavidliu/fun-problems/blob/main/zipper-t...


Pay a log(n) slowdown and you have arrays in all their glory (and more!)

https://hackage.haskell.org/package/containers-0.6.5.1/docs/...


If we really need to we can just actually use arrays tho:

https://hackage.haskell.org/package/primitive-0.7.4.0/docs/D...

That's what I meant: You can mostly use tree-like log(n) data structures rather nicely but you'll have to isolate your mutating code into stuff like PrimMonad, see:

https://hackage.haskell.org/package/primitive-0.7.4.0/docs/D...

I like Haskell because it lets me write clean interfaces at a high level but fall back to byte-level stuff when needed.


It isn't just log(n), it's also L1 vs L3 access times and an extra level of pointer chasing.


Just to confirm, you are talking about L1 and L3 caches, right?


Yes, and the specific advantages of using contiguous in-memory representations for data which is accessed sequentially.


> but without the verbosity of an Optional or Maybe type

Without the safety of having that encoded in the type system.


No, Kotlin/TS do the same thing with different syntax.

See https://kotlinlang.org/docs/null-safety.html#safe-calls (TS is basically the same)


`T` and `T?` being different types is an encoding in the type system.

It is a different encoding, though.


Actually, implementing an AVL tree in a purely functional way is easy; I'd say it's easier than doing it in the mutable, in-place way. It will allocate more, but will have the optimal O(something) complexity.
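
For illustration, a plain (unbalanced) BST insert in Haskell; the AVL rebalancing is omitted for brevity, but the persistent shape is the same: insertion copies only the root-to-leaf path and shares every untouched subtree, so older versions of the tree stay valid.

    data Tree a = Leaf | Node (Tree a) a (Tree a)

    -- Persistent insert: allocates a new path, shares the rest.
    insert :: Ord a => a -> Tree a -> Tree a
    insert x Leaf = Node Leaf x Leaf
    insert x t@(Node l v r)
      | x < v     = Node (insert x l) v r
      | x > v     = Node l v (insert x r)
      | otherwise = t  -- already present: the original tree is reused as-is

    main :: IO ()
    main = print (toList (foldr insert Leaf [5, 3, 8, 1 :: Int]))
      where
        toList Leaf = []
        toList (Node l v r) = toList l ++ [v] ++ toList r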

Many other algorithms are much harder, though, especially those requiring fast array indexing (graph search, hash tables, ...)


You're allowed mutability, even in languages like Haskell.


A better example would be to implement an efficient HashMap in pure FP. This is simply impossible unless you fall back to some tree-based solution, which is less efficient.

Interested why this is downvoted... Hashmaps can have O(1) time complexity for lookups and inserts in the imperative world; in pure FP either lookup or insert can be no better than O(log(n))


> in pure FP either lookup or insert can be no better than O(log(n))

For single operations, yes. But what we really care about is usually the amortized performance over many operations, which can still be constant time.


No, this is not the case. It is actually for non-FP solutions that the amortized performance converges to constant time. But for pure FP, there is no solution that performs in constant time, not even amortized.


I like and agree with many of the advantages of FP, but I've never used it exclusively.

A number of years ago, we worked with a startup that was based around a new FP language, focused on image processing pipelines[0]. It’s actually quite cool. We came from a C++ background.

Learning the language was difficult, but our team was very capable, and very experienced. We did it.

But it was just too limited, and the advantages never appeared for us. We were doing it for an embedded implementation.

It was a really neat experience, but ended up as a failure. I am sorry for that, as I actually thought they had the right idea, and I think that management failure was as much to blame as technical hurdles. The language had many limitations, but we were still able to work with it. That’s what you get, with a highly capable team. The startup we worked with, had some real rockstars.

These days, I program in Swift, which has many FP features. I enjoy it.

Nonetheless, I think that many of these “new paradigms” are built around the premise that most programmers suck, and need to be forced to write good code, which never seems to work.

Companies seem to be desperate to hire crappy engineers, and get them to write good code, as opposed to hiring decent engineers, in the first place, who can write good code, regardless of the tools.

[0] https://halide-lang.org/


Good article, but the chart of FP languages is way off. There are only 8,752 public Haskell repositories in GitHub (not 126,990, as claimed). Numerous other popular FP languages aren't even listed (e.g. Scala: 13,224, F#: 1,960).


This article reads like something from 2010. Other than Go, new languages no longer have null references, and you don't need a purely functional language to avoid them. I kept waiting for more examples for why we need FP, but the article yada-yada-yada'd the rest.

I think FP came and showed everyone how to design expressive types, how to define flatMap on more than arrays, and that’s it. It turns out you don’t need Haskell, you can incorporate those features in imperative languages like Rust and Swift.


The "blue collar" approach to functional programming is a fruitful one.

Written about Scala, but applies to any language: https://www.lihaoyi.com/post/StrategicScalaStylePracticalTyp...

In doing this, program organization starts to change in interesting ways. Modeling error states as regular data cleans up a lot of complexity.
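
As a minimal sketch of modeling error states as regular data (the domain here is invented for illustration): failure modes become an ordinary sum type that the compiler forces callers to handle.

    data WithdrawError = InsufficientFunds | AccountFrozen
      deriving Show

    -- Failure is part of the return type, not an exception channel.
    withdraw :: Int -> Int -> Either WithdrawError Int
    withdraw balance amount
      | amount > balance = Left InsufficientFunds
      | otherwise        = Right (balance - amount)

    main :: IO ()
    main = case withdraw 100 250 of
      Left err   -> putStrLn ("rejected: " ++ show err)
      Right bal' -> putStrLn ("new balance: " ++ show bal')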


> I kept waiting for more examples for why we need FP

Dependent types allow a lot more type safety (e.g. a shader program type parametrized by a description of its uniform variables, getting rid of `INVALID_OPERATION` on a wrong uniform location/type).

> you can incorporate those features in imperative languages like Rust and Swift

Incorporating dependent types into imperative languages with unrestricted effects is hard (impossible?).


ML language != functional programming. There is more out there than Haskell.

The article doesn't even mention OCaml, the second most popular ML-derived language on Github. Makes me suspect the article is not very well researched.


While the title doesn't specify this, the article is about pure functional programming. I don't think OCaml fits this bill exactly.


It doesn't fit "pure", but it solves the same problems the article is talking about, as other functional programming languages do.

In fact the article is excluding most functional languages by saying "pure", and then goes on with "Of the top dozen functional-programming languages, Haskell is by far the most popular" under the graph, which is wrong because it's missing "pure", a feature that is _not_ required to solve the problems that are being talked about.

So I agree on the initial point "Makes me suspect the article is not very well researched."


Which part of Haskell is impure?


I find this an amusing take. I'd more fully accept "declarative should be the future of software." In that, at large, I'd rather have larger chunks of the software I am responsible for be declarations of intent, if at all possible. That said, the rubber still hits the road somewhere, such that the abstraction should be tailored to fit the domain, if you can.

As a fun challenge, look at the definition of pretty much any fractal. It is a mathematical construct that is almost certainly not equivalent to what most "functional programming" environments let you do. Indeed, most definitions I recall are imperative by nature, and yet they work remarkably well.

Really, anything from the book Turtle Geometry would have a challenging time in a lot of functional languages. Which is not to say that most functional languages are bad; just that they don't usually even try to abstract over the graphical. I hate that folks see how well they abstract over functions and assume that is all programming is.


> Really, anything from the book Turtle Geometry would have a challenging time in a lot of functional languages.

https://github.com/sergv/turtle-geometry

Is an implementation of the book Turtle Geometry in Scheme. A Lisp dialect.

> Which is not to say that most functional languages are bad; just that they don't usually even try to abstract over the graphical. I hate that folks see how well they abstract over functions and assume that is all programming is.

There is an entire section of SICP dedicated to graphical abstraction using functions and function composition.


Yeah, that isn't surprising, in that the book even mentions using LISP. However, this is a touch of a goalpost shift from what folks typically mean by modern functional programming. (Which, to be fair, was always far more nebulous than folks admitted.)

Specifically, though, this is my point. The turtle geometry abstraction is very imperative by nature. It is still declarative, at a high level. But it is not functional. Despite that, it can be implemented in one of the classic functional languages. (Though Lisp's origins are far more imperative than you would think.)


Not quite turtle, but Elm (a pure FP) has something prettier, a full 3d renderer https://package.elm-lang.org/packages/ianmackenzie/elm-3d-sc... Demo: https://ianmackenzie.github.io/elm-3d-scene/examples/1.0.0/m...


Does this have examples of a fractal that I just couldn't find? The outputs are certainly pretty, but I'm not entirely clear where this fits into what I was trying to say. :(

That is, my argument is ultimately that for some things, having a section of code that is more imperative is actually far shorter than trying to accomplish the same thing otherwise.

It is akin to changing coordinates. Some things are trivially conveyed with polar coordinates. Just as some are made much more difficult in that setting.


The first point sums it up:

    It’s hard to learn
Which is refreshing to see just stated up front: FP is for smart people who have some motivation to learn something hard, even when there's a whole world of alternatives that are not "hard" to learn. In this writer's case, it appears they own a company and they've mandated everything be written in Haskell or PureScript, which will select for employees that are willing/able to do that, etc.

As long as humans are employed to write the code directly, "hard to learn" is a non-starter for being the "future".


You should give elixir a try. It’s easy to learn, the tooling is really good, and the community is super friendly. I’ve found the mix of immutability and dynamic/structural typing to be great in practice.


Specifically, FP requires expansive working memory, which is a normally distributed trait across the population. Meanwhile, in OOP one can reason from the perspective of the object and the interfaces it's interacting with, easing the burden on programmers, but there are obviously drawbacks there as well.


> Specifically, FP requires expansive working memory,

What? FP increases local reasoning!

> in OOP one can reason from the perspective of the object

In FP one can reason from the perspective of the function?


Hybridization of object-oriented and functional approaches seems like a decent approach to these problems.

> "Nearly all modern programming languages have some form of null references, shared global state, and functions with side effects..."

Which is to say, code is organized into discrete classes, instantiated as objects, but those objects only use the functional paradigm with respect to their bound functions, i.e. no side effects, no shared global state. Some sort of input validation and screening can be used with each to sanitize values and avoid null references. Then you have a collection of discrete modular elements which can be reasoned about or debugged independently.

Such classes would be essentially 'stateless' but you could have other classes that stored mutable state and were queried by the functional types, much like the application-database model:

> "The trend has been to keep stateless application logic separate from state management (databases): not putting application logic in the database and not putting persistent state in the application. As people in the functional programming community like to joke, “We believe in the separation of Church and state”"

https://ebrary.net/65011/computer_science/separation_applica...


> Hybridization of object-oriented and functional approaches seems like a decent approach to theses problems.

It's too complex and loses a lot:

> Contemporary imperative languages could continue the ongoing trend, embrace closures, and try to limit mutation and other side effects. Unfortunately, just as "mostly secure" does not work, "mostly functional" does not work either. Instead, developers should seriously consider a completely fundamentalist option as well: embrace pure lazy functional programming with all effects explicitly surfaced in the type system using monads.

https://m-cacm.acm.org/magazines/2014/6/175179-the-curse-of-...


Someone should bolt object-oriented features on top a functional programming language to see what it would look like. That would be an interesting research project. They could call it O-something and do a pun with an animal name.


Don’t need to; look at F#. It's functional-first and OO is bolted on.


Now I'm stuck wondering if you missed the joke or are one upping me. Well done if it's the case.


Functional programming (FP) is great, no question. However, we who know about FP should not forget that there are other worthwhile paradigms out there. Just think of Prolog-like things or programming in relations (for example, miniKanren). The good thing, though, is that mostly-FP/FP languages can be used to make DSLs, which in turn enable such kinds of paradigms, so we are not limited to FP itself.


The goto statement exists in most modern languages. Even C# has a goto statement, and while I have only used it once or twice in the past 10 years, it still has a purpose: effectively breaking out of multiple loops if you are within more than one. That is, break will only break out of the innermost loop. It can save a lot of resources sometimes to break out of both. Normally you would not want to put yourself in such a position, but it happens and comes in handy.


It's not just about FP, it's about creating a language that will allow you to think more clearly about the problem. If we cannot graduate our primitives to the level of abstraction that's required for the problem then codebases will be fragile, projects will run over-budget and complexity will forever increase.


The self-righteous attitude of the article is very much offputting. I'd love to have something to wave around at my boss and co-workers to convince them of the usefulness of the FP paradigm (to at least encourage FP-like practices in our heaps and heaps of existing code) but this article is certainly not it.


Very odd that this article states that functional programming is the solution to the null reference problem. Yes as far as I know all functional languages have some kind of Optional or Maybe type as a solution, but there are non-functional languages with this solution as well.


> Yes as far as I know all functional languages have some kind of Optional or Maybe type as a solution, but there are non-functional languages with this solution as well.

Optional/Maybe values are tedious if we don't have functions/methods like map, flatMap, etc. which take functions as arguments. That requires first-class functions, which pushes things further in the direction of FP.
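
For instance, a small Haskell sketch: fmap (map) and >>= (flatMap) over Maybe are ordinary higher-order functions, which is exactly why the ergonomics depend on first-class functions.

    -- Hypothetical partial function: halving only succeeds on even numbers.
    half :: Int -> Maybe Int
    half n = if even n then Just (n `div` 2) else Nothing

    main :: IO ()
    main = do
      print (fmap (+ 1) (Just 4))       -- map:      Just 5
      print (Just 8 >>= half >>= half)  -- flatMap:  Just 2
      print (Just 6 >>= half >>= half)  -- fails:    Nothing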

I think of things on a spectrum, e.g. more-functional/less-functional, rather than having a hard cutoff of "this is FP" or "this isn't FP".


Perhaps, but those are still FP solutions that have been added to non-FP languages.


The article states: "More important, developers need to learn a new way of thinking. At first this will be a burden, because they are not used to it. But with time, this new way of thinking becomes second nature and ends up reducing cognitive overhead compared with the old ways of thinking. The result is a massive gain in efficiency."

In my experience from university CS education and later on in industry, quite a large group of students, engineers, and programmers never grasp the functional way of thinking. It doesn't just take longer; it doesn't happen. For this reason I'm skeptical that FP will ever be able to replace imperative (and object-oriented) programming.


Oh, Scalfani. He was quite invested in FP talks too.

It's funny, I'm calling the cover effect (forgot the idiom, but basically once the big media outlets put you up, your uptick is over) on simple FP. I think the mainstream absorbed most of the idioms (map filter reduce, decorators, composability, lazy streams, etc.) and there's nothing else in that bag to push.

That said, I do believe that FP as an abstract multi-stage modeling language is still gonna help in the future, because it raises provability, which is something that I personally miss every day in most mainstream languages. You're never too sure about anything and it's tiresome.


The properties mentioned in the article are not really tied to functional programming.

- Not allowing null references can be done with any paradigm. For example, with object-oriented programming there is no reason disallowing null references couldn't work.

- Immutability can also be done in every paradigm. The Java String, for example, is both object-oriented and immutable.

I think the paradigm is actually irrelevant. The real advantage is not gained by using a functional programming language; it is gained by using a language that prevents null references and makes it easy to write and use immutable data structures.
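
For what it's worth, a sketch of that in Go (illustrative names, no library): unexported fields plus copy-on-write "setters" make a type effectively immutable from outside its package:

  package main

  import "fmt"

  type Point struct {
      x, y int
  }

  func NewPoint(x, y int) Point { return Point{x: x, y: y} }

  func (p Point) X() int { return p.x }

  // WithX returns a modified copy; the value receiver means the
  // original is never touched.
  func (p Point) WithX(x int) Point {
      p.x = x
      return p
  }

  func main() {
      a := NewPoint(1, 2)
      b := a.WithX(10)
      fmt.Println(a.X(), b.X()) // 1 10
  }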


This doesn't give you 1) ease of use or 2) composability.


The article actually mentions ease of use as the major downside of functional programming ("Functional programming has a steep learning curve").


In a world where your two options are object-oriented and functional, functional is naturally superior. But just because the last twenty years have had all the oxygen sucked up by object-oriented does not mean there are no alternatives to it other than functional. Like all good IEEE articles, it's seven years out of date: Rust is one of the most maintainable languages around, simply by embracing good old-fashioned procedural programming. And the only practical example they give, null pointers, has been a solved problem outside FP for even longer.


Arguably the single most determining factor in using a language is the size of the ecosystem relative to the use case. If I'm writing a data science app I'll probably use Python because I can access a large number of well-tested libraries in the space, not because Python is a superior language itself. Same argument for Java (come on, who actually enjoys Java?) Whether it's OO or FP isn't going to be a major consideration. I enjoy Ruby over any other language, but don't use it much at work because ... ecosystem.


The article discounts the dynamic FP languages that most real-world delivered FP code is written in (Erlang/Elixir, Clojure, etc.). The claimed list of "the top dozen functional-programming languages" is also missing Scala, OCaml/ReasonML, etc.

I can understand wanting to focus on your preferred FP subdisciplines (statically typed, purely functional languages), but eliding any mention of this will confuse the readership, since IEEE Spectrum is targeted at a general engineering audience.


> Functional programming has a steep learning curve

Not really. Learning and using F# is not hard. Same for using JS in a functional manner.

> To reap the full benefits of pure functional programming languages, you can’t compromise. You need to use languages that were designed with these principles from the start.

Yes and no. F# was designed with functional principles, but you can compromise and you don't have to write 100% functional code.


Functional programming helps because of the lack of mechanisms in languages to manage state mutability. FP brings in the big ban, keeping things immutable, to make state management easier. With newer languages like Rust that can automatically track state ownership, state management and mutability are much easier. You don't need to enforce immutability all the time, though it still helps. The need for FP's practices is actually lessened.


Quoted: "OOP has, however, been wildly successful. It may be that this success is a consequence of a massive industry that supports and is supported by OOP."

Link: https://stackoverflow.blog/2020/09/02/if-everyone-hates-it-w...


JavaScript is functional if you write it functionally.

There's nothing stopping you from writing pure functional code in JavaScript or TypeScript.


I think this is something a lot of people overlook too quickly, too. It's pretty easy to get some of the benefits of FP without using Haskell/Scheme/ML/whatever in a lot of languages:

- Write pure functions

- Postpone side effects until necessary (as in, set up the side effect in a way that the side effect's inputs are testable)

- Return things when possible, in order to increase the expressiveness of your code

It's also important to not fight the language you're working in. If you're constantly breaking idioms and your teammates can't read your code, FP isn't providing any benefit.
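
A tiny Go sketch of the first two points (a hypothetical example, nothing canonical about it): keep the decision pure and push the side effect to the edge, so the logic is testable without mocks:

  package main

  import "fmt"

  // Pure: same inputs always give the same output; no I/O, no globals.
  func discount(total float64, loyal bool) float64 {
      if loyal && total > 100 {
          return total * 0.9
      }
      return total
  }

  func main() {
      // The side effect (printing) happens last, fed by plain values.
      fmt.Printf("charge: %.2f\n", discount(120, true))
  }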


> Contemporary imperative languages could continue the ongoing trend, embrace closures, and try to limit mutation and other side effects. Unfortunately, just as "mostly secure" does not work, "mostly functional" does not work either.

https://m-cacm.acm.org/magazines/2014/6/175179-the-curse-of-...


Having a program crash on null is a feature. There was a trend in OO, caused by you-know-who, of always initializing variables and returning stupid things like empty lists instead of null. Among the worst APIs to work with are the ones where you can never tell whether an empty list means something went south or is a correct state.


Functional programming: the future of computing since 1977. (Dating it from John Backus's Turing Award lecture.)


Is that really a fair date to use? The functional programming Backus is talking about bears little resemblance to the functional programming being discussed now (and in the article). I think an interesting discussion can be had about what defines functional programming (and I’m not really trying to argue with you), but Backus’s FP doesn’t fit the definition the article or other posters are using (or implying).


As long as I can avoid the mess of OOP and inheritance, I'm happy.

Good OOP is only found in a very small number of libraries.

I don't want "reusable OOP code", I want functions that return data. Side effects are impossible to keep track of.

Of course you can't use FP everywhere, but OOP should not be the default.


All this article has done is reinforce my desire to always consider the best paradigm for my use case.


The article picks on Javascript (of course) however you can write almost exclusively functional code in Javascript with the help of Rambda, fp-ts or the like. Yes, there is no "compiler" (outside of tsc) that will help you (yet) but stylistically, it's possible.


There is nothing magical about functional programming; it is the elimination of non-functional programming features that is important.

A language that can do either is exactly the wrong thing, from the perspective of TFA.


> There is nothing magical about functional programming

How come? I've read that F# complains at compile time when you haven't matched all possible values, or when you haven't handled the __maybe__ cases. JS doesn't even mind comparing strings with ints and incorrectly summing them together, without throwing a runtime exception, let alone a compiler complaint.

  "1" == 1
  true
  "1" + 1
  '11'


You can add error checking (e.g., compiler exceptions), but you cannot remove the parts of JS/other languages that cause that error checking to be important in the first place, not without fundamentally breaking the language/existing libraries.

TFA describes a paradigm where those features don't exist, and so the error checking is not important. That's a truly safe language. It's also hypothetical at this point, IMHO.


I view this as one of JavaScript’s myriad flaws. Adding “1” to 1 is nonsensical and the result of “11” is even more so.


These are peculiarities of type systems, not of functional or non-functional programming.


Well, we are talking about functional programming improving program correctness, and such behavior doesn't help. Better to choose another language that is a fit for that.


You can write exclusively functional code in pretty much any language can't you?


Go is probably one popular exception.


You still can if you wish to. First-class functions are there.


I wish that were true. Unfortunately, functional composition breaks down when functions have multiple return values, as is the case for everything that returns a `res, err` pair. So even with generics added, Go is still hostile to functional programming.

If Go got a Result<T> generic type similar to the one in Rust to replace (res, err), that would work. That seems highly unlikely, however. The node.js community went through a similar migration from (err, res) callbacks to promises and it was quite painful; I think it's highly unlikely this will happen in Go.
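
For the curious, a minimal sketch of what such a type could look like in Go (loosely modeled on Rust's Result; every name here is illustrative):

  type Result[T any] struct {
      val T
      err error
  }

  func Ok[T any](v T) Result[T]      { return Result[T]{val: v} }
  func Err[T any](e error) Result[T] { return Result[T]{err: e} }

  // Unwrap recovers the conventional (value, error) pair at the edges.
  func (r Result[T]) Unwrap() (T, error) { return r.val, r.err }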


(res, err) is Either monad, isn't it?


It's not a single value, so it's not anything.


That may've sounded overly harsh - but as a first step, try for example to use the output of a Go function that returns two values in another expression.

e.g. assuming `g` returns (res, err), try writing `f` in such a way that you can call `f(g(x))`.

This has implications for how higher-order functions like `map`, `filter`, etc. can be written in a way that makes them usable with any value (including those that represent an error).


func composeE[T any](fn1, fn2 func(T, error) (T, error)) func(T, error) (T, error)

func liftE[T any](fn func(T) T) func(T, error) (T, error)

?

I'm not a big fan of functional programming though.
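
Those signatures do type-check; here's a rough, untested sketch of bodies for them, with the error short-circuiting living in liftE so a composed pipeline skips work after a failure:

  func composeE[T any](fn1, fn2 func(T, error) (T, error)) func(T, error) (T, error) {
      return func(t T, err error) (T, error) {
          return fn2(fn1(t, err))
      }
  }

  func liftE[T any](fn func(T) T) func(T, error) (T, error) {
      return func(t T, err error) (T, error) {
          if err != nil {
              return t, err // propagate the error, skip fn
          }
          return fn(t), nil
      }
  }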


I think you might want the opposite of liftE, i.e. to be able to convert any function with two return values into a function with a single return value.

Then you can use those functions normally with standard FP tools such as `map` etc.

Yes, the node.js community did something like that with (err, res), and it still does. It's pretty awful being unable to use any of the standard-library or community-library functions (that can error) in a... functional way without wrapping them.
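
Something in this direction, say, building on the Result type sketched upthread (Wrap is a made-up name):

  // Wrap turns a conventional (value, error) function into one that
  // returns a single Result value, so it composes with map/filter.
  func Wrap[A, T any](fn func(A) (T, error)) func(A) Result[T] {
      return func(a A) Result[T] {
          v, err := fn(a)
          if err != nil {
              return Err[T](err)
          }
          return Ok(v)
      }
  }

e.g. Wrap(strconv.Atoi) yields a func(string) Result[int] that slots straight into a pipeline.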


It used to be impossible to do anything remotely like FP in Go before generics, but now I believe it's indeed quite possible.


But everything you can do with generics you can do manually by copying and pasting code?


Oh that's true. But you're a terrible masochist if you consider that an option.


Or let's look at persistent data structures, a staple of functional programming:

https://github.com/tobgu/peds

Notice how you'd need to generate the data structure for every type you'd like to use it with, which is not the case with the built-in mutable maps and slices. Generating them per type to keep them type-safe was technically possible, but it made the language quite hostile toward functional programming. With generics this is rectified, but the problem of non-composable multi-return-value functions remains.
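
For a feel of what peds-style structures buy you, here is roughly the simplest persistent structure possible in post-generics Go (a sketch, not from that repo): prepending shares the whole existing tail instead of copying it:

  package main

  import "fmt"

  // List is a persistent singly-linked list: Cons allocates one node
  // and shares the existing tail, which is never mutated.
  type List[T any] struct {
      head T
      tail *List[T]
  }

  func Cons[T any](v T, tail *List[T]) *List[T] {
      return &List[T]{head: v, tail: tail}
  }

  func (l *List[T]) String() string {
      if l == nil {
          return "nil"
      }
      return fmt.Sprintf("%v -> %s", l.head, l.tail.String())
  }

  func main() {
      xs := Cons(2, Cons[int](3, nil))
      ys := Cons(1, xs) // xs is shared, not copied
      fmt.Println(xs)   // 2 -> 3 -> nil
      fmt.Println(ys)   // 1 -> 2 -> 3 -> nil
  }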


Let's take `map`, for example. Before generics, you had to reimplement map for every type combination, each implementation written with a for loop. While I'm willing to overlook having one `map` implementation done in imperative code and everything else building on it, I'm not exactly comfortable calling "reimplementing map with imperative code for every type combination" functional programming.
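
And the post-generics version really is write-the-loop-once (a sketch; this isn't in the standard library):

  // Map is implemented once, imperatively, and reused for every
  // element type; callers never see the loop.
  func Map[T, U any](xs []T, f func(T) U) []U {
      out := make([]U, 0, len(xs))
      for _, x := range xs {
          out = append(out, f(x))
      }
      return out
  }

Usage: doubled := Map([]int{1, 2, 3}, func(x int) int { return x * 2 })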


Reminds me of

https://en.m.wikipedia.org/wiki/Greenspun%27s_tenth_rule

> Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.


Javascript is already functional enough without these libraries. Rambda et al make simple code unnecessarily complex or verbose, and complex code unreadable


You don't need any libraries to write functional JS. Modern React development is almost entirely purely functional.


Tangent: article has a useless 13.2MB gif at the top, with a "high" fetch priority.


One could almost say that functional programming will always be the future of software.


Linux on the desktop - written in a functional programming language. Executed on the Mill CPU. Powered by a fusion reactor.


Functional programming people are the vegans of coding, change my mind


I have found OOP better for thinking abstractly about complex systems.


Gonna be honest, this guy lost me at 'I decided to learn Haskell—and needed to do that on a business timeline. This was the most difficult learning experience of my 40-year career...'

like, really?


The programming world peaked with structured and modular styles. Everything else is random noise.

Keep your programs nicely structured and easy to follow. No code "architecture" nonsense, please.


At the end of the day, functional programming isn’t fun (to me). None of the qualities touched on in the article are unique to it, either. So… I’ll keep not doing it.


Can I state for the record that almost every C compiler still supports GOTO?

Rust supports the best traits of both Functional and Imperative languages.

Ergo: The future is Rust. Rust is the future.


I wish this all was based on some kind of science.


We should have fewer priests and more practical people. "Mine is the only true approach" is the shortest path to being told to fuck off.


Yes, in my production code the real work is implemented by functions. Class methods are just proxies to the external world.


> The first purely functional language to become popular, called Haskell, was created in 1990.

Nope: LISP - 1958.

Stopped reading there.


LISP is not purely functional


Functional programming fixes what ails you!

How do I know? A thought leader told me so!

Nothing against functional programming, which certainly has its uses and advantages, but this article is basically substance-free.

I guess it's really an ad for the author's company and book? Considering the sheer number of things conflated with functional programming here, I'm not sure it would be worth the time.


Would be nice to have a functional language to construct images for something like midjourney


Love the dubious github repo chart.


“Kindly let me help you or you will drown said the monkey putting the fish safely up a tree.”

― Alan W. Watts


Oh! From Hacker News trends, I thought the future of programming was going to be Rust...


Near future vs. farther future.


wrong week.


Just don't include nonsense jargon in languages and we are good to go. Elixir and Elm are dead-simple FP; you barely care about "FP", it's just modules and functions. No monads, typeclasses, or lenses (I still don't know what that last one is).


Software should be the future of functional programming.


Hammers ought to be used to solve every problem.


I doubt it. Mutation is incredibly powerful. The problem is that we don't understand it.


They don't claim that pure FP is the future. Immutable is the better default IMHO; make it explicitly mutable if you need it and know what you are doing.


This article would have benefited a lot by including a few examples of functional expressions and how they compare with other paradigms.


Naah. ;)


No.


No.


Show us the commercially successful, wonderful code you wrote using functional programming and you will be 1000x more convincing than anything you claim in an article or a religious face-to-face debate.

For example, there are hundreds of glowing articles written about Lisp, making all kinds of amazing claims about the superiority of the language and the superior intelligence of the people using it. Yet very little commercially successful software is written in Lisp, which makes those claims laughable and not convincing at all.

Less talking, more walking!


Me to Charles Scalfani (article's author): "Who's an edgy boy? Who's an edgy boy! Oh, you're such an edgy boy today. Good boy!" /s


FP is a thing that's come and gone. All the smart kids have moved on to Rust, where you get to play with types like a Haskell programmer but still have imperative constructs and no GC.


Rust is the industry's next incremental step toward functional programming. Eventually, more and more non-functional features will be removed from languages.



