Functional Programming – How and Why (onsclom.bearblog.dev)
177 points by onsclom on Jan 2, 2023 | 247 comments



Not all functional programming idioms work in all languages.

On my computer, the article's imperative range(6000) took 0.05 ms and the "functional" range(6000) took 400 ms. The whole [...cur, cur.length+1] thing turns a linear algorithm into a quadratic one. It wouldn't happen in all languages, but that's how JavaScript's arrays work. My advice is that if you really want to do stuff like this, choose appropriate data structures (i.e., learn about persistent data structures).
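To make the difference concrete, here is a minimal sketch of the two styles (not the article's exact code): the spread-based version copies the whole accumulator on every step, so building n elements does on the order of n^2/2 element copies.

    // Imperative: one array, amortized O(1) per push
    const rangeImperative = (n) => {
      const result = [];
      for (let i = 1; i <= n; i++) result.push(i);
      return result;
    };

    // "Functional": every step copies the accumulator (O(n) per step),
    // and every step adds a stack frame, so large n also blows the call stack
    const rangeFunctional = (n, cur = []) =>
      cur.length === n ? cur : rangeFunctional(n, [...cur, cur.length + 1]);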

Except in this case the imperative version is already totally fine. It is modifying an array that it created itself. You have all of the benefits of avoiding shared mutable state already.

Also, the difference in performance would become even worse, except that for range(7000) I already get "Maximum call stack size exceeded" on the recursive one. The imperative implementation handles 10000000 without breaking a sweat. My second advice is to not use recursion to process arrays in languages (like JavaScript) that have a very limited call stack size.


And suddenly you care a lot about the "how", not just the "what". Voiding the whole "but it's declarative!" argument.

At least in imperative code the "how" is explicit. In functional code it's implicit and you need intimate knowledge about compiler and/or runtime to know what's going to happen.


I kind of agree that it's possible to slightly overstate the declarativeness argument. Functional-style JavaScript is not that declarative. Not in the way SQL is.

But also, writing imperative code doesn't guarantee explicit performance characteristics. Whether you mutate references or not, you still need to know which operations are fast and which are slow.

In JavaScript, concatenation, [...array1, ...array2], is non-mutating and slow. Adding an element to the end, array.push(x), is mutating and fast. But adding an element to the beginning, array.unshift(x), is mutating and slow. So even if you're sticking to mutation, you still need to know that push is fast and unshift is slow.
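A rough sketch of the costs being described (typical engine behavior, not a guarantee from the spec):

    const merged = [...array1, ...array2]; // non-mutating, copies both arrays: O(n + m)
    array.push(x);                         // mutating, appends in place: amortized O(1)
    array.unshift(x);                      // mutating, shifts every existing element: typically O(n)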

And yeah, sorry, "in JavaScript" is not quite right. I meant in my browser. This is not part of the spec, and it's not mentioned in the documentation. Is it the same in other browsers? Who knows. To me, this is just as much "you need intimate knowledge about compiler and/or runtime to know what's going to happen".


> array.unshift(x), is mutating and slow

This year it's slow. I wouldn't count on that being true in five years. I mean, it might, it might not. There's a risk in optimizing too much for current conditions. You can easily end up over-fitting for the current set of coincidences, and end up with less readable code that's actually slower in the long run.

But by all means, measure and improve until it's fast enough.


You never know what the compiler is going to produce until you look at what it actually produced, whatever language you are using.

Unless there is clear evidence upfront that the project will be a piece of software where local performance is highly critical, it makes sense to favor code readability and maintainability over optimality.

Of course, you can have different levels of code quality whatever paradigm is retained. Most languages out there will allow you to mix different paradigms anyway.

Given this fact, we would surely be better served with an article like "When to favor expression in each paradigm available in your favorite language".


> Unless there is clear evidence upfront that the project will be a piece of software where local performance is highly critical, it makes sense to favor code readability and maintainability over optimality.

Sure, but let's also not confuse "optimal" with "reasonable". One of the major challenges of modern programs is how slow they are at every level. Very often, this bad performance can be attributed to a style of programming that tries to completely ignore that the program will run on a real machine with real hardware. A little bit of mechanical sympathy (e.g., operations that make good use of the CPU caches or don't confuse the branch predictor) can yield a program that is 10x faster than a naive implementation, with little to no loss of readability or maintainability. (In fact, as noted by Nelson Elhage [1], faster programs enable simpler architectures, which helps make them more readable and maintainable.)

In FP languages, programmers face an extra difficulty: the distance between the code they write and the machine code that will be executed is greater than in languages like Rust or Go. They will need to be knowledgeable about the language, its libraries, and its compiler to avoid the pitfalls that could make their programs slower than would be reasonable (e.g., avoiding unnecessary thunking in Haskell).

[1] https://blog.nelhage.com/post/reflections-on-performance/#pe...


> the distance between the code they write and the machine code that will be executed is greater than in languages like Rust or Go.

First of all, one of those languages is not like the other (Go is closer to JS than to Rust) - second, we really can’t reasonably guess at the produced assembly; even C compilers do some insane transformations, leaving the code nothing like the original, let alone more expressive languages.


I completely agree. That functional style actually favors readability and maintainability is quite a strong claim which I read often, but it usually lacks evidence.

In my experience, software engineers "think" imperatively. First do this, then do that. That's what we do in everyday life (open a random cooking book..) and that's also what the CPU does, modulo some out-of-order and pipelining tricks. A declarative style adds some extra cognitive load upfront. With training you may become oblivious to that, but at the end of the day, the machine does one thing after the other, and the software engineer wants to make it do that. So, either you express that more "directly" in an imperative style, or try to come up with a declarative style which may or may not be more elegant, but whether this ends up more readable or maintainable is on the functional proponents to prove.


Maybe we have different mental models and that’s what drives this conflict? I certainly wouldn’t say that first do this then do that is my primary mental model, in small blocks yes, but once you get past even 1 file that breaks down. Once you introduce threads or the network these assumptions have to go out the window anyways.

It’s funny you mention recipes, because i’ve always been frustrated by traditional recipe descriptions that muddle concurrency and make it difficult to conceptualize the whole process. E.g. the table structure here is superior to step by step http://www.cookingforengineers.com/recipe/158/Dark-Chocolate...


The network is the prime example for forcing serialization of events.

Tbf, I agree with the recipe criticism. Would be neat with a dependency graph instead of a step-by-step list of things to do when baking a cake. Would have saved me a lot of headache in the past. (The table in your link expresses a tree, which is probably sufficient for most purposes.)


> In my experience, software engineers "think" imperatively

I hear this often. In the past the claim used to be that they "think" object-oriented. This is a thinly veiled argumentum ad naturam.

> ... on the functional proponents to prove

Prove your own claims before you demand proofs from other people. And by prove I mean really rigorous thinking, not just superficially seeking confirmation for the things you already believe either way.


Not necessarily. If the current "default" is imperative, then the burden of proof is on the functional advocates, because they're the ones advocating for change.


Burden of proof is stupid if the only argument for the other side is being the status quo.


Burden of proof is very relevant if neither side gave an argument.

People are doing A. Someone says "Do B instead!". "Why should we do B?" "Well, why should you do A?" At the end of that extremely unproductive exchange, what are people going to do, A or B? They're going to keep doing A, because they were already doing that, and nobody gave them any actual reason to change.

So "burden of proof" isn't meant in the sense of this being a formal debate, with rules. It means that, when ahf8Aithaex7Nai said "Prove your own claims before you demand proofs from other people", that ahf8Aithaex7Nai is wrong. OOP is the current default in terms of the bulk of professional programming; if FP advocates want that to change, it's on the FP advocates to provide reasons, not on the OOP advocates to prove the correctness of the status quo.


Right, the status quo works. Computers are serving us. The only question of course is whether rigorous adoption of FP would make them work even better.

The proponents of status quo only need to prove that the existing approach works and is useful. And I think that is proven already by the very existence of status quo. It wouldn't be there if it wasn't somehow useful.


A: This thing! It's true!

B: Prove it!

A: No, you prove first! With really rigorous thinking, please!


Bad faith


I mean, OOP is still a very great model for plenty of programs.


In my experience, people think and explain their ideas in natural language and sometimes pictures. So that is the best way to program computers.


Imperative is good (perhaps even better) on a local, small scope.

It is terrible on a system level with concurrent execution; there you really need all the safeguards.


As weird as it sounds, even when performance is not a deal breaker for individual users, on high-traffic sites the sum of all energy wasted on unnecessary/unoptimized computations can become so large that it becomes an actual ecological concern. It's a completely new thing that we should keep an eye on.


Is it really when we literally waste countries’ energy usage on bitcoin and such? Computers are surprisingly low in energy consumption on the grand scale of things, especially compared to their usefulness.


Really? One single application of computers --- Bitcoin --- wastes whole countries' worth of energy, and yet it doesn't make sense to optimize software for ecological reasons?


No, but there are plenty much higher targets before going after computers, especially compared to the insane value they produce. Like, is that single server running all the logistics of a country’s train network “inefficiently” really significant compared to.. the inefficiency of even a single train?


What about those billions of devices running software in gadgets, phones, TVs, routers, dishwashers, etc ?


Crypto uses vast energy as a feature, a side effect of maintaining a competitive environment as part of proof of work (other staking mechanisms are different). You could not optimize this to use less energy.


I wrote a blog post once, https://www.fbrs.io/ramda/, where I compared a functional solution and a vanilla solution. I tried to cover both performance and some vague notion of readability.

In the end the vanilla solution won in both areas. And I say that as an FP and Haskell fan boy.


Ramda is what you get when people try to zealously make Haskell out of Javascript. The same happens in many other languages where people try to force Haskell idioms into them.

Edit. For some reason people equate "functional" with Haskell, even though Javascript (and most modern languages) is perfectly capable of expressing functional idioms without the descent into madness.

Your vanilla solution is already a functional solution. Literally no need for `transduce pipe map over lens`


> [...] is perfectly capable of expressing functional idioms without the descent into madness.

Except of course deep recursion, which needs to be rewritten into a not so readable externalized stack or some kind of loop replacement. There are difficulties involved with rewriting some recursions as loops and some loops as recursion.

Of course, if JS engines adhered to the standard, they would all have TCO by now and the problem would not exist.
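As a hypothetical illustration of the kind of rewrite meant here (names made up): a tail-recursive sum that overflows the stack on large arrays in current engines, and its loop equivalent.

    // Tail-recursive: would be safe with TCO, but overflows the call stack today
    const sumRec = (xs, i = 0, acc = 0) =>
      i === xs.length ? acc : sumRec(xs, i + 1, acc + xs[i]);

    // Loop rewrite: same result, constant stack depth
    const sumLoop = (xs) => {
      let acc = 0;
      for (const x of xs) acc += x;
      return acc;
    };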


All you have to do is use a relocatable stack/heap hybrid like Go does.


I would argue that if you want to make an honest comparison then use an implementation that at least makes an effort to optimize functional code. And JS does jack shit for FP code compared to Haskell.


All code I've seen that uses Ramda looks like the one in the example.

No idea what your comment about jack shit is about. Haskell is often just as bad as Ramda.

Edit. By the way. All the unusable and unreadable jargon that Haskell uses can be much more easily explained with Javascript: https://github.com/hemanth/functional-programming-jargon Make of it what you will


What's wrong with Ramda?


Unusable, unreadable, unnecessarily complex abstractions in a language that doesn't need them. Also, extremely non-performant.

As you can see even in the blog post: straightforward, readable code using built-in functionality is turned into 8-12 imports, a bunch of auxiliary functions and multiple function calls that obscure the actual functionality. At a 35x speed penalty and a 2x memory penalty.


The library itself is fine. The problem I see is twisting JS/TS into a language it is not.

JS simply does not lend itself to currying, data-last signatures, piping, and pattern matching like an ML family language does. And as you can tell by the Ramda typings, neither does the TS type system. [0]

You will forever fight an uphill battle against every single tutorial and piece of documentation and colleague when going down this path. I don't think the effort is worth it.

0 - https://github.com/DefinitelyTyped/DefinitelyTyped/blob/dc9b...



As much as I like these concepts, I'd be hesitant about adding them to the language. At what point does a language support too many paradigms?

In contrast, take the recently stage-3'd Iterator Helpers[0]. These build on top of the language by using methods that already exist in other parts. It feels natural and is more of the same.

0 - https://github.com/tc39/proposal-iterator-helpers#implementa...
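For flavor, a small sketch of what the proposal enables (API per the stage-3 draft; availability in shipped engines may vary):

    function* naturals() {
      for (let i = 1; ; i++) yield i;
    }

    // Helpers compose lazily on any iterator, mirroring the existing Array methods
    const firstSquares = naturals()
      .map((x) => x * x)
      .take(5)
      .toArray(); // [1, 4, 9, 16, 25]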


You can cherry-pick examples that go both ways, and we've all worked with individuals who have consistent biases towards and against functional idioms. In the end you have to rely on your own judgment for each case.


Great points! I am still trying to figure out how to best use functional programming with JavaScript. I've found writing performance critical parts imperatively while being conscious about shared mutable state is a decent compromise.

> My second advice is to not use recursion to process arrays in languages (like JavaScript) that have a very limited call stack size.

Can also confirm this, here was my process on some of the harder days in Advent of Code:

1. I wonder if I can solve this in a pure functional style with JavaScript

2. Yay, it works on the example input

3. Oh, I hit the call stack limit on the real input

4. Time to rewrite the recursion as a loop lol


Shame, since ECMAScript 2015 was supposed to give us proper tail calls in JavaScript, but Blink chickened out and so did Gecko.


> I am still trying to figure out how to best use functional programming with JavaScript.

IME avoid recursion, use immutability in the large while avoiding it in local scope beyond a simple .map/.filter chain, and on the backend I allow myself fp-ts to have sane error handling (I assumed TypeScript there, but I'd just not ever use vanilla JS nowadays).

With that said, I'd really, really rather use a compile-to-JS lang that does the optimizations for you, especially for frontend use cases.


> trying to figure out how to best use functional programming with JavaScript

Use ES6 classes to guard every property read and write behind a method-call.

You can avoid mutable state by making methods return only primitive data, including JSON. Then nobody can modify such state from the outside. You control and can easily observe all state-changes.
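A minimal sketch of that idea (the Counter class and its methods are hypothetical):

    class Counter {
      #count = 0;                                  // private field, unreachable from outside
      increment() { this.#count += 1; }            // all writes go through methods
      toJSON() { return { count: this.#count }; }  // reads return plain data, not internal refs
    }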


5. refactor loop as loop function


The functional version is written in a style that uses tail recursion, which is not optimized in JS. It’s linear if arrays are not copied, which should be the case with TCO.

Both versions are quadratic if arrays are actual arrays and require a single chunk of memory. (push may need to copy the entire array)


And persistent data structures also tend to have a cost. For example, persistent maps are usually O(lg n) lookup and insert rather than the O(1) usually seen in mutable maps.

If I remember correctly this is a pretty fundamental limitation of persistent structures, so it's not like there is a better persistent map data structure out there waiting.


Lazy structures can usually get the log(n) back when you amortize over many operations.


Sure, but for some structures it amortizes to log(n) where mutable structures can amortize to O(1).


> On my computer, the article's imperative range(6000) took 0.05 ms and the "functional" range(6000) took 400 ms. The whole [...cur, cur.length+1] thing turns a linear algorithm into a quadratic one. It wouldn't happen in all languages, but that's how JavaScript's arrays work. My advice is that if you really want to do stuff like this, choose appropriate data structures (i.e., learn about persistent data structures).

That is a good point, and it should perhaps have been timed before being put up as an example. Arrays themselves are usually quite the mutable data structure as well, so I think your advice about learning persistent data structures is spot on. Sometimes one can simulate a persistent data structure by copying the whole structure, or parts of it, for some operations, and using those copies only when one needs a separate copy.


Pre-allocating the size of the array lowers that number even more.

Persistent data structure are really useful, and are (often) magnitudes more efficient than DIY-immutable data structures.

Imperative ones however, like you mentioned, are often (comparatively) near ideal by virtue of being imperative.


“Functional Programming—When and Where”


I mean, in an example where the problem and solution are well understood, sure, but like all things there are downsides and you need to consider appropriateness.

Firstly if you're conceiving a novel algorithm it can be tedious to debug/test if written in a functional style because it's hard to step through in the debugger.

Second, you need to be extremely familiar with the language and ensure that you know that you're not mixing and matching mutable and immutable system calls, which will be hell to debug, see 1.

I'm not anti-functional and sometimes it can yield really clean and easy to read code, but I have to recoil when I see praises sung without the criticism.


I’m a functional fan and I don’t disagree with your points.

Like anything, real life functional programming is not a panacea and most languages that append functional constructs as an afterthought often come with a price (debugging, performance) and just as you pointed out, mixing functional code with imperative code indiscriminately can yield wonderfully nuanced bugs.

With that, I find languages like OCaml and F# provide a great ergonomic experience for writing both functional and imperative style code. (There’s nothing wrong with imperative code confined to the scope of a function or a well defined object)

Even with great tools, I still think there’s quite a bit that can be done to improve the hidden costs incurred from a functional-first approach.


> (There’s nothing wrong with imperative code confined to the scope of a function or a well defined object)

This is often the case with hash tables. Sometimes one simply wants to store some info and the fact that this info has been discovered or processed or has occurred. If one makes the hash table inside the function where it is used, then the important aspects of no side effects and no mutation of the function's arguments are still preserved.

However, sometimes a problem can arise later, when wanting to parallelize something inside that function involving the hash table. So one needs to be a bit careful here as well.


> There’s nothing wrong with imperative code confined to the scope of a function

That is a fine point which I rarely see discussed in connection with FP. (Pure) FP is commonly understood (?) to mean that there can be no mutable data. You can "bind" a value to a variable but only once. Right?

But within a function you can have local variables. I don't see why it would be bad to assign multiple different values to the same local variable, which only exists during the execution of the function. The variable only exists in the "stack" not in the "heap" and is gone as soon as the function returns.

So if I run a while-loop and repeatedly assign a new value to the local variable "i" for instance, would that be unacceptable according to "FP"? Why? I can't see any ill effects from modifying a variable whose value can not "linger" past the execution of the function.

What am I missing? Why is it (supposed to be) bad to rebind any number of different values to a local variable?
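For concreteness, the kind of function being described (a hypothetical example):

    // Locals are rebound freely, but nothing observable escapes the function
    function sumTo(n) {
      let total = 0;
      for (let i = 1; i <= n; i++) {
        total = total + i; // rebinding `total` and `i` on every iteration
      }
      return total;        // callers only ever see the final value
    }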


I don’t know about good or bad, but I have a practical example of where rebinding variables necessitates rethinking the language a little. Erlang uses pattern matching for all variable assignments, where unbound variables are assigned and already-bound variables are used for pattern matching, returning an error if the pattern matching fails. In that particular context, allowing rebinding of a variable changes the language. Elixir behaves like Erlang but supports rebinding, so variables you wish to pattern match against (rather than rebind) need to be annotated with a “pin” character.

I prefer the Erlang way; I find it to fairly nicely solve a problem, with the only downside being that I can’t rebind variables, which encourages me to write smaller functions. And the language uses fewer tokens, which is nice if you ever want to parse it for some reason.

https://www.erlang.org/doc/reference_manual/patterns.html https://elixir-lang.org/getting-started/pattern-matching.htm...


Pure functional programming is really about tracking and controlling effects by making them first-class, not avoiding them altogether. This has numerous benefits for reasoning, correctness and optimisation. Haskell allows mutable local variables, 'ST' references which can be used to build functions that are observably pure on the outside. The Haskell type checker will guarantee that no mutable state escapes. This is true "state encapsulation". It is completely acceptable for pure FP programming.


So mutable state is not bad, as long as it is "encapsulated"?

I thought (from what I've read so far) that "no mutable state" is the definition of "pure FP".

So should the definition of pure FP be "No un-encapsulated mutable state"? Instead state must always be "encapsulated". Where can I read more on this? Thanks


It’s not that it’s good or bad - mutation can only happen in a monad (IO or ST or similar). The monad is the encapsulation and that encapsulation is enforced by the type checker.

It’s actually quite liberating not having to worry about what’s “right or wrong” and instead just do whatever you like that compiles.

I suggest getting started with Haskell through http://www.learnyouahaskell.com/ - it was my favorite.


I'm looking at it not from the perspective of "Which language is the best?" but from the point of view of I'm working in a specific language now, what are the principles of "Pure FP" that I could benefit from when using that language.

How should I "encapsulate state" in JavaScript for instance?

If the principles of FP are great surely they must be applicable in any Turing-complete programming language. The (Haskell etc.) compiler is no doubt a great tool, great at type-checking. But I see using any specific compiler or type-checker as a tooling issue, not a foundational insight about FP.


Ah yes, indeed there are benefits that come from programming in Haskell that can be applied to JavaScript, but I think that without the compiler it’s very hard to guarantee success.


Somewhere down towards the transistors, the change has to happen. The registers of the machine are explicitly designed to be mutable. I don't know how functional languages are compiled and maybe somebody who does can chime in, but I would not be surprised if the IR already was imperative. So it may be easy to give some access to that in a language.

I'd say "pure" is if the calculation can be done with a function without side effects and it is a deliberate choice to restrict a language to (almost) only that. No side effects at all would mean no interaction with "the outside world" and no access to input data though.


Haskell compiles down to C-- (Cmm) first, which is basically an even less expressive version of C, so you are right there.

But if you go low enough, FP concepts pop up again, e.g. out-of-order execution is pretty functional, but to a degree so are vector instructions. And this is no surprise: Turing machines and lambda calculus (and recursive functions and anything Turing-complete) are equivalent. If you think about it, a Turing machine is just as side-effect free in itself; side effects are special to computers.


Deep down it must be, as the data is conveyored past fixed compute engines (which can be seen as pure functions), right? FPGA programming is pretty much functional as well. RAM is capturing the state there.


We can see FP as a tool for us to express a functional thought so that the computer can understand it and will translate it into something the computer itself can run.


Purity is a poor word. It refers to referential transparency: if I give you an input to your foo function, “A”, and I get an output, “a”, then I can always substitute foo “A” for ”a” in my program and it will not change the meaning.

It’s poor because it invites other meanings: some people think purity means immutability for example; but that’s only part of it.


That is not the definition of FP.

There is no definition of what FP is that is agreed upon by everyone, but the key is the word "functional" in FP.

This comes from a mathematical function, which is a mapping of input values to output values.

You could write a definition of a function f(x) to be such that the value of f(x) is the result of the execution of some imperative-looking code (including loops that mutate variables).

Mathematicians would be satisfied with such a definition and would confirm that f(x) is a function just as valid as any other.

And so functional programmers should also accept such a function. In fact, Haskell has specific syntactical support ("do" notation) to allow you to write functions using imperative-looking code including mutations (the ST monad, or other State monads).


A "function" is a "function" if it always returns the same result for the same arguments, right?

Now I'm trying to wrap my head around how can I call a function which takes no arguments but which returns a value the user entered on the keyboard? How can I write a function which returns the current mouse-position on the screen, while still abiding by the functional principle of same result for same arguments always?

Can it be done? If not can we still call it "functional"?


Well, in a non-side effecting world these could still be a function of the state of the computer. mousePos(initialState) could for example be used to get the mousePos in a given state (functional reactive programming may also be interesting for you)

Another approach would be to divide the mutating, side-effecting world from the pure one by introducing a wrapper around the former. Let’s call that wrapper IO and create a few helper functions that can sequence these IO wrappers one after the other. The important part is that such a wrapper can provide a value, but that can only ever be read/seen by the next element in a sequence. So any mutating program can be described as such a sequence and be executed at the end, while the program you wrote is perfectly pure. This is basically what a monad is, and a Haskell function for reading the mouse position would look like mousePos() -> IO(Pos(int, int)) (in some hybrid type notation)
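A toy sketch of that wrapper idea in JavaScript terms (the IO helper and its chain method are made up for illustration; real Haskell IO is built into the language and checked by the type system):

    // An IO value is just a description of an effect; nothing runs until .run()
    const IO = (effect) => ({
      run: effect,
      chain: (f) => IO(() => f(effect()).run()), // sequence: feed the result to the next step
    });

    const mousePos = IO(() => ({ x: 10, y: 20 }));           // pretend effectful read
    const logPos   = (p) => IO(() => console.log(p.x, p.y)); // pretend effectful write

    const program = mousePos.chain(logPos); // still pure: only a description
    program.run();                          // effects happen only here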


> A "function" is a "function" if it always returns the same result for the same arguments, right?

Yes. This matches the definition of a mathematical function that I gave above.

> how can I call a function which takes no arguments but which returns a value the user entered on the keyboard?

You can't. If your language allows you to write such a function then I would argue you are no longer doing FP (at least in this part of the code).

I believe that this is why many people like languages like Ocaml, F#, Scala. These are hybrid functional-OOP. You can use FP for the parts of the program that it's suitable for, and OOP/mutable-imperative for the other parts("the best of both worlds").

The purist FP programmers might claim that reading input from a keyboard is "uninteresting" and that programming is really about logic, algorithms, and computation, and therefore FP is enough for them (...in their ivory tower).

I personally am a fan of Haskell, but I don't like calling it FP, because it does allow you to write "code" that reads input from the keyboard. They still claim to be pure FP by using a loophole and saying that "getLine" is not a "function" but rather an "IO action". But to me it doesn't matter what you call it, you are still effectively allowed to write code that does I/O and mutates external stuff, doesn't matter if you call it a "function" or something else.

Of course the big advantage that Haskellers claim is that the language enforces the separation of effectful code from "pure" functions. In the other hybrid FP languages it is possible (and happens often) that you have a deep call chain of "pure" functions and then you realize you need some small side-effect somewhere, so you just add it. Haskell does not allow this at the language level, and forces you to refactor your code. In my opinion this does lead to cleaner architecture and more maintainable code in the long term.

But to me Haskell is really interesting for a different reason. I don't like to call it an FP language, because like I said, a lot of the code you write isn't really "functional". A better name would be: "mathematical oriented language". And by math I mean that the code is built using logical systems that have mathematical rules that can be reasoned about.

Going back to the I/O (reading input from the keyboard): what the Haskell people have done is realize that you can actually model this behavior of user input as a mathematical model. This unlocks a type of thinking that allows you as the developer to go beyond the thinking of telling the computer: "do this, then do this, then do this, if this happens then do this, ...".

An example: the mouse position on the screen. There are Haskell FRP (functional reactive programming) libraries that model the mouse position as a "signal function" (a well-defined term they invented) that is a 2D coordinate changing over time. You can then combine this signal function with others to create behavior such as an image that follows the mouse. You can build complete GUI applications using this approach, and the code looks nothing like imperative programming, and more like an electric circuit of different components connected together.

And that is just one example. The Haskell community has also come up with mathematical models for streams (think unix pipes) that allow you to write streaming code that is much more powerful and cleaner than what you see in other languages.

So my summary: Yes, functional programming is limited, but if we go one level deeper, we could say that the essence of FP is really mathematical thinking. And if we embrace that and run with it then we can open up entirely new possibilities.


Great explanation, thanks

> you realize you need some small side-effect somewhere, so you just add it. Haskell does not allow this at the language level, and forces you to refactor your code. In my opinion this does lead to cleaner architecture and more maintainable code in the long term.

I can see it leads to more maintainable code, but there is a cost associated with that. With non-Haskell I can just make a small change to a single "function" to add the side effect and be done with it. With Haskell I need to redesign my whole function pipeline somehow.

That is a good example. Is there some example code you know of that would show how in fact you take a function pipeline of say 5 Haskell functions and then transform it so that one of the functions "simulates" some side-effect?


A common case is when you need to add logging to a function.

There are 2 ways:

1. You cheat and use unsafePerformIO. This is discouraged and I won't go into the details.

2. You convert your inner function from being a "pure" function to being a monadic function over some abstract MonadLogger m. Then you can still call your function in a "pure" context by using a Writer monad for your MonadLogger, but you can also call it from the IO monad and get "better" logging (with real-time output and timing info).

Note that you still do need to convert your entire call chain from being pure to use "MonadLogger".

But I would argue that even in non-Haskell languages you should also do this (meaning you should thread a "Logger" object through all functions). Why? Let's say you don't, and now you want to do logging from some inner function. How do you configure the logging? To which file do you save it? What verbosity level? You will need to use a global/Singleton Logger object. But what if you want to call your function from two different places in your code and use a different verbosity setting for each? You can't. So I argue you are always just better off doing the refactor in all languages. The fact that Haskell forces this means that developers don't have the choice and are forced to do the right thing.


> I thought (from what I've read so far) that "no mutable state" is the definition of "pure FP".

It's much more nuanced than that, certainly no arbitrary and unchecked mutable state is. There are ways though of achieving mutable state whilst maintaining referential transparency.


Yes, but the point is that you can fully isolate any part of the program from having side effects. I.e., the caller is responsible for executing its callee's side effects.


Right therefore it would seem that it is ok to mutate local variables inside a function anyway you want, because that can not have side-effects outside of the function. Right?


There is no one definition of "pure FP". Everyone has their own.


This is why it's common for languages to be multi-paradigm. For instance, in OCaml, it's perfectly fine to use while loops and mutable variables, and more generally mutable data structures (typically records with mutable fields, or arrays).

It's mostly a matter of preference. I think experienced programmers will favour the pure approach unless there's a good reason to do otherwise, and know well how to solve common problems without relying on mutable structures (typically, it's very rare that you need a while loop in OCaml). But sometimes, it makes more sense and is less verbose to use mutable DS.


> This is why it's common for languages to be multi-paradigm

I can agree with that. Yet I somehow perceive an opinion that it is not FP unless it is Pure FP (?). Like even a small amount of impurity can poison the whole well.


The problem is that few functional languages can reach that level of purity (and for good reason IMO).

It would necessitate that every side effect, every boundary and every interface be pure, on top of the internals also being pure.

Haskell comes close, but we're skirting the real heavy hitters like Idris/Agda here.

Erlang hits the sweet spot for me: processes (async, stateful modules) communicate by message, and their internal functions are 'pure'.


> Erlang […] whose internal functions are 'pure' .

Why is that a good thing? You can easily reason with mutable code on a small, local scope, yet the real hard part will be the interaction of all those messages. Sure, erlang has plenty of safeguards for those as well, but I don’t see what would be lost if local mutability would be allowed.


I think it's also true that on a small local scope it is easy to write a function that takes the old state as argument and returns the new state as result. I think that in some sense makes it easier to reason about the 'evolution" of the actor.

I'm not sure I can explain why I think it would be simpler that way, but perhaps it is that conceptually you can then "package" the logic of how the actor-state changes over time into a single (local) function.

You want to understand and describe how the actor-state evolves over time? Read and understand the logic of this single function, whose result depends only on its arguments. Note that such a function does not have any internal state itself; it calculates the new state for the actor, always the same way, based on the arguments the system gives it.

I think that's the key to understandability here: the programmer does not need to know and keep track of the state, where it is stored, and how to modify it. The system/runtime gives the current state to them as an argument. It is not hidden in some variables somewhere, who knows where. It comes as an argument to a function local to the actor. It is part of the definition of that actor.

This is how Erlang works, right? Is there a specific name they use for such a state-evolution-function?
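In JavaScript terms, the shape being described is roughly a reducer-style function (a hypothetical sketch, not Erlang syntax):

    // All of the actor's state evolution lives in one pure function:
    // (currentState, message) -> newState
    const counterActor = (state, message) => {
      switch (message.type) {
        case "increment": return { ...state, count: state.count + 1 };
        case "reset":     return { ...state, count: 0 };
        default:          return state;
      }
    };

    // The runtime owns the state and threads it through:
    // state = counterActor(state, nextMessage);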


Joe motivated this by the 'let it crash' philosophy.

Processes are supposed to crash when something that isn't supposed to happen happens. Inside OTP, this presumes that the reason for the crash was an invalid message, as processes should be black boxes that only communicate via messages (Alan Kay's definition of OOP).

Statefulness inside functions works against this goal. AFAIK, Smalltalk had similar idioms of returning new copies of data to be sent to objects.

In all other languages, I agree with you. I can read and understand 100 lines of someones else's Go code faster than 20 lines of my own Clojure. I write Clojure for a living.


I think some people who discovered functional programming after "Pure" OOP, where everything is an object, fell in love with it and got a bit too high on their own supply. But i don't think it's the majority.

Also, functional programming is harder than pure imperative or imperative+object imho, which helps gatekeeping, and more impressive (again, only my opinion). There are also a lot of good ideas that (came from/were invented by/were introduced with) FP and are integrated in older languages. This is a perfect setup for cult-like groups to appear, even if the person they revere isn't as "into it" as the group is generally. Imho it's mostly young devs who are/were pretty good for their age group (and thus have a bit of an ego[0]), a bit noisy, and will grow up and be more welcoming in the future.

[0] which is understandable and reasonable, this is absolutely not a shot, i was one of you (lisp/CLOS fanatic instead of FP fanatic, but still) until i worked with people 100 times better than i. The "grow up" is a small shot though, but don't take this seriously :)


Harder? In what way and for whom?

I am suspicious of claims that procedural, imperative programming is “easier” than functional programming: in terms of how quickly one can learn it, how accurately one can predict what a program does by reading it, and how accurately one can make a modification. Assuming two developers both of equal experience in their preferred paradigms and languages I expect the functional programmer will score higher in these cases.

The cost of functional programming is that it does take some training to write programs in this style that many people are not familiar with. And I think it’s that lack of familiarity that people are referring to when they say that it is more “difficult.”

Personally I can’t disagree that it is harder to learn if you are already familiar with procedural programming. It took me quite a few months of dedicated work to get productive with Haskell. It was painful, as I recall, because I had to throw out all of the accumulated experience I had with a handful of other languages.

Now that I am productive with it though I don’t find it any more “difficult” than programming in C or JavaScript or Python. In fact I find it easier: I can rely on the type system and reason about my programs and how they compose instead of stepping through their execution. Functional programming demands that I debug my thinking rather than my program most of the time. The former takes less time in my experience and becomes less frequent the more experienced you become.


Functional programming is routinely taught to complete beginners. Languages like Scheme or (core) ML are very simple and you can do a lot with surprisingly few constructs.

However, Haskell is in a different league in my opinion. If you want to do anything meaningful, you need monads which add some layers of complexity.

I consider myself a very experienced OCaml programmer, and I've been working casually for more than a year with Haskell, but I'm still much slower in Haskell. It still happens to me that I need one hour to figure out how to use some monad transformers, or specific API, to do something that would take me 10 minutes in another language. Well, probably I still need more practice, but the learning curve is certainly much higher for Haskell than Python.


Sure the learning curve demands more but the payoff is worth it. Monads, despite all the tutorials out there, are not that difficult a concept... once you learn it. It's frustrating to teach because of this. Like most things in mathematics it takes some effort to learn and pays off in spades. There are a ton of abstractions you get with learning monads and the rules of how they compose are consistent. It goes a long, long way. I'm better for having gotten through it.

Most abstractions in other languages though? Ad-hoc mish-mash of half-baked ideas with no laws and poor specifications. Each one you learn is absolutely useless when moving to the next program.

Monads are the same everywhere you go. Just like algebra.


> That is a fine point which I rarely see discussed in connection with FP. (Pure) FP is commonly understood (?) to mean that there can be no mutable data.

Martin Odersky was discussing this recently. How they found examples of overly-complicated "functional programming" in their codebase (erroneous, and hard to understand) when an imperative function would be way easier, performant and correct.

The takeaway? The imperative version was still pure. Mutability was limited to the scope of the function.


It might be "overly complicated" in Scala, but it works very well in Haskell and allows effects to be controlled, checked and reasoned about. In fact, monads were originally proposed to give formal semantics to imperative programming. Scala is really an object-oriented language, first and foremost, so please do not judge functional programming based on how it looks in Scala.


The point of my comment was not to diss FP, but to explain that imperative code with some local mutability is not necessarily in direct opposition to pure FP.

I have never tried Haskell, so I cannot really hold any truly fair opinion there, but I find Odersky's "values" or approach to programming language design much more in line with my own. Scala 3 seems like a very well designed language (pragmatic, expressive, conceptually solid).


> imperative code with some local mutability

Haskell allows this, you just have to actually demonstrate that the mutation is local:

    import Control.Monad.ST (runST)
    import Data.Foldable (traverse_)
    import Data.STRef (modifySTRef, newSTRef, readSTRef)

    statefulSum :: [Int] -> Int
    statefulSum xs = runST $ do
        n <- newSTRef 0                           -- mutable reference, local to runST
        traverse_ (\x -> modifySTRef n (+ x)) xs  -- mutate it in place for each element
        readSTRef n


And you can do it with a state monad in Scala too.

But Odersky's point is that (at least in Scala) it is even simpler (and thus should be preferred) to do it with raw (but still local!) mutation.

https://www.youtube.com/watch?v=QRcD9Zc7eq4&t=943s


`ST` isn't `State` (though the interface is very similar), it's a restricted variant of `IO`. An `STRef` is an actual mutable reference, backed by actual memory.


Oh, I see. Nice!

I'm afraid we don't have anything like ST in any Scala library that I know of :(


> I find Odersky's "values" or approach to programming language design much more in line with my own.

Odersky's "values" primarily appear to be whatever will appease the masses, which is why Scala started with curly braces and OOP. So much of his intellectual capital has been spent on trying to combine objects, subtyping and FP, that I can't help but feel it could have been better spent elsewhere. Folks are moving on from OOP. Scala is in danger of building what people once wanted, but don't want any more. IMHO there are better ways of bringing first-class modules to FP (e.g. see the 1ML paper).


OOP is absolutely a must-have for plenty of code bases though. How would you represent for example some other system’s entity that holds state, let’s say the connection between your video card and your monitor as known by the linux subsystem?


OOP is absolutely not essential; that's why new languages like Rust are not OO. OOP is not necessary for mutable state.


SEE: https://doc.rust-lang.org/book/ch17-01-what-is-oo.html#:~:te....

"Using this definition, Rust is object-oriented: structs and enums have data, and impl blocks provide methods on structs and enums. Even though structs and enums with methods aren’t called objects, they provide the same functionality, according to the Gang of Four’s definition of objects."

So the maintainers of the Rust language documentation at https://doc.rust-lang.org/book/title-page.html seem to disagree with your statement.


Haskell also has traits (type classes) and "methods" on them, I don't think anyone would seriously claim it's object oriented. The Rust website is clearly just a piece of marketing. Most languages that are not "OO first" will likely provide alternative mechanisms to encode OO if you want it.


Well, one might argue that “traditional” OOP is the best way to write GUI frontends, and the lack of it shows for Rust.


The best UIs, such as visual studio code, are now written in HTML.


Calling vsc the best design is quite something.

HTML has plenty to learn from even Windows Forms. Layouting only recently became okayish on the web.


You think the likes of GTK and Qt give a better UI than VS Code? I'll tell you why there are no mature bindings in Rust for these toolkits and it has nothing to do with OOP. It's because there's no demand for it.


Qt? Definitely. But I have also seen very great gtk apps. Electron apps always feel “non-native” and laggish. Honestly, try out the telegram qt app, it is lightning fast compared to any electron app.


> Odersky's "values" primarily appear to be whatever will appease the masses, which is why Scala started with curly braces and OOP.

I would agree that Odersky wants Scala to be popular and wants to accommodate even programming novices. For example, Python (and to a lesser extent Haskell and F#) proved significant whitespace to be popular, so Scala 3 now has it. In the days of XML's popularity, on Wadler's suggestion, XML literals were added to Scala. Today, AFAIK, XML-like literals are popular in many frontend/browser frameworks/languages (although XML literals have been dropped from Scala 3).

But that doesn't mean that Scala design is unprincipled. On the contrary, there are 3 pillars that make Scala what it is, all working together in a unified and coherent manner:

* Modularity (objects, interfaces, etc, sometimes called OOP)

* Functional programming (ADTs, pattern matching, preferring immutability, higher order functions, ...)

* Meta-programming (Scala 3 features powerful Macro system, AFAIK inspired from MetaOCaml)

> Folks are moving on from OOP. Scala is in danger of building what people once wanted, but don't want any more.

More and more languages are becoming more and more like Scala:

* Rich type system instead of unityped

* type inference vs having to manually annotate everything

* higher order functions instead of not having them

* preferring immutability instead of mutation everywhere

* ADTs, pattern matching instead of "OOP style" modelling

* module system (sometimes also called OOP) instead of non-modularity

* strict instead of lazy

* ...

It's interesting to see such convergence. New languages are more like Scala now than before. And already existing languages, even though they started out more different, are getting closer to Scala by adding more features. That is to say, Scala is not magical or prophetic, not even original (most, if not all, individual features were done before in other languages), but it's just ahead of the others in combining these features.

https://news.ycombinator.com/item?id=26101435

https://old.reddit.com/r/scala/comments/lerd3t/from_first_pr...

> 1ML paper

I will definitely check it out. Thanks for the suggestion. A quick search suggests that 1ML can't model objects and classes (in the OOP lingo), as per slide 6 in https://www.slideshare.net/Odersky/from-dot-to-dotty https://skillsmatter.com/skillscasts/8866-from-dot-to-dotty

In the meantime, I will continue enjoying Scala that allows me to do Pure FP in a modular fashion while getting paid for it because of its huge (relatively speaking) job market :)


> A quick search suggests that 1ML can't model objects and classes (in the OOP lingo), as per slide 6.

But what does he mean by "Objects" and "Classes"? Everything good about objects that a pure functional programmer would want is captured by first-class modules. The internal mutable state and subtyping/inheritance relationships are much less useful. In fact, even OOP thought leaders are advising not to use them.

> In the meantime, I will continue enjoying Scala that allows me to do Pure FP in a modular fashion while getting paid for it because of its huge (relatively speaking) job market :)

That's a very good reason to use Scala! But please do take a good look at Haskell too, if only just for interest. Many leading Scala programmers ended up moving to Haskell, in order to get deeper into FP.


> But what does he mean by "Objects" and "Classes"?

To be honest, I'm not sure.

> Everything good about objects that a pure functional programmer would want is captured by first-class modules.

I just know that in the talk I linked above, Odersky mentions 1ML, that it converts modules into System F (an extended lambda calculus, but you probably already know that) and that he feels this attempt is force-fitting, where some (important?) things get lost in the conversion.

From 2:56 to 4:55

> The internal mutable state

That is very strongly discouraged in the Scala community.

> subtyping

Yep, that is embraced quite a bit.

> inheritance

Used to some extent, but still generally discouraged even though not as strongly as mutable state.

> ... relationships are much less useful

How can you say that subtyping is not useful? In Scala it is one of the key ingredients for modularity. You have signatures (aka interfaces or traits) and then you have multiple implementations (modules, objects, classes as partial abstractions) which you can swap for one another. That is possible because they form these subtyping relationships. A side note: Scala code which embraces subtyping has better type inference.

> But please do take a good look at Haskell too, if only just for interest.

I don't want to give the impression that I want to offend the Haskell community. Haskell is where I learned FP, and the language is a great success in many regards. Over Scala it has many advantages, like better syntax, better type inference, better control over memory layout, a better optimizer which turns even advanced idioms into efficient code, etc.

But it also has some downsides compared to Scala. It is lazy which makes debugging more complicated. It doesn't have any satisfactory modularity story (last time I was around there was Backpack, but it hasn't caught on AFAIK). And Scala has all the advantages that come with the JVM and Java interoperability: great debugging, profiling, introspection, telemetry, etc, great IDE, huge gallery of libraries, etc. I'm a software engineer and all these things are _very_ important to me. Compiling to efficient JavaScript with Scala.js is also nice.

> Many leading Scala programmers ended up moving to Haskell, in order to get deeper into FP.

Sure, if you want to write Haskell programs, you'll have much better time in Haskell than with Scala. There were people like that, and so they eventually left. But note that you can be deep in Pure FP in other languages, like Scala or PureScript, Haskell is not the only game in town, each comes with its pros and cons.

That being said, with Haskell, Pure FP is built right into the language. With Scala, while easily possible, Pure FP is built on top of the language with library(-ies) and that comes with some disadvantages.


> How can you say that subtyping is not useful?

Well, I said less useful. The applications you list can be solved without the nominal subtyping that Scala has invested in. Sometimes Scala feels as permissive as a dynamic language; for example, an erroneous integer in an array of floats still yields a valid array, inferred as an array of "Any".


> Scala is really an object-oriented language, first and foremost

You're technically correct, which is the best kind of correct. Scala is not based on the Lambda calculus. The DOT calculus (which is what Scala is theoretically built upon) is based on objects.

Yet, at the same time

> please do not judge functional programming based on how it looks in Scala

based on my research into the labour market, I would guess that Scala is the programming language where most of the Functional Programming takes place, both in the general sense and also in the Pure FP sense. It is more widely used than Haskell, F#, Clojure, Elixir, Erlang, OCaml, Racket, etc.

And it works rather nicely in my opinion. There are still areas that I'd like to see improved, of course, like the for-comprehension, but I'm already quite content.

> effects to be controlled, checked and reasoned about

Definitely! We have so called Functional Effect System libraries for that in Scala. Not just one, but two: Cats Effect and ZIO. Check them out. I would recommend anybody using Scala do use either of those.


Here's the recording of the talk

KEYNOTE Simply Scala Martin Odersky

https://www.youtube.com/watch?v=QRcD9Zc7eq4


The problem is that by rebinding the same variable you introduce temporal dependencies. The statement "i = i + 1" writes to the variable i and therefore must be executed before any subsequent reads of the variable i. If the statement is in a loop nest, the subsequent reads might even precede the assignment statement in the program order. This makes it harder for the compiler to rearrange expressions and in general runs counter to FP's principle of being as "loose" as possible. And not requiring a program order is more "loose" than requiring one. Sophisticated optimizing compilers can often figure out and remove redundant/accidental temporal dependencies, but not always.


Is that really much different from one function calling another, the called function must be executed before the caller? There would seem to be a "temporal dependency" then as well and the compiler can not re-arrange the execution order of the said functions.


Yes, it is. What is the value of str(5/i)? If i can't be redefined you can always replace i with its definition. If i can be redefined what string str(5/i) will evaluate to may be impossible to deduce a priori. Both the programmer and the compiler can replace i with its definition when it sees the expression "str(5/i)". That freedom is lost when the language allows rebinding.


Your first point, I'm not sure that is true. If you are using a functional style in a language that supports side effects, it's still pretty easy to debug. You could implement a tap function that console-logs the value, then returns the result of the previous function in the composition/chain.
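For example, a minimal tap helper along those lines (a sketch, not from any particular library):

    // Log a labelled value as it flows through a chain, without changing it
    const tap = (label) => (value) => {
      console.log(label, value);
      return value;
    };

    [1, 2, 3, 4]
      .map((x) => x * 2)
      .map(tap("after doubling")) // peek at intermediate values
      .filter((x) => x > 4);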

If you are using a lazily evaluated language like Haskell, instead of the stack trace you're used to in an imperative programming language, you'd get the reductions that led up to the reduction of the expression with the breakpoint on it.

The second point, however, I do agree with, especially in languages where mutability is the default.


> Second, you need to be extremely familiar with the language and ensure that you know that you're not mixing and matching mutable and immutable system calls, which will be hell to debug, see 1

How is this not the case for, say, Python?

E.g. you can get yourself into real trouble with a function like this in Python:

    def f(dict = {}):
        return dict


Indeed. It's actually easier in languages like Haskell because the type system won't let you mix and match.


I mean, if you then use the result of f in a mutating function, the default value will be mutated the next time you call it


Right, but the point is that you can't actually write the equivalent of that function in Haskell. You'd have to write something like this:

    defaultMap :: Map k v
    defaultMap = ...

    f :: Maybe (Map k v) -> IO (TVar (Map k v))
    f = \case
      Nothing -> newTVarIO defaultMap  -- a fresh mutable map each time
      Just m  -> newTVarIO m
which would make it hard to be unaware of whether the dictionary you get each time you pass `Nothing` is a shared one or a fresh one. And then the type signature returns an IO action, so you would know that "anything could happen here", whereas there's no such indication in Python.

Conversely, if your `f` function in Haskell had a type like `Maybe (Map k v) -> Map k v`, then you know that it can't possibly be creating a mutable map and inserting into it somewhere in the process without your knowledge. (Modulo `unsafePerformIO` and other evils like that.)


> ensure that you know that you're not mixing and matching mutable and immutable system calls

Languages that were designed from the ground up to be pure functional, like Haskell, keep you from making this mistake.


Can anyone elucidate why code written in a functional style is harder to debug?


The example given isn't really functional programming is it? Or am I missing something? In those examples you start with an object, call a method on the object returning a new object, then call a method on that and so on. I like this kind of chaining in OOP.

Languages like JavaScript and Rust (and Go?) have blurred the lines between purely functional and purely imperative. I think I prefer that compromise position myself.


Functional and object-oriented aren't two ends of a spectrum, they're orthogonal. Code can be both.

To program in a functional style means no more or less than to construct your program by composing referentially transparent functions (that is, functions that have no side effects, and whose output is deterministic based on the inputs -- in other words: the mathematical notion of a function). The solutions that are marked as functional in the post fulfill this criterion, so it's correct to call them functional solutions.

Whether your functions are written in method notation or not is not relevant to determining whether the program is written in a functional style.


OOP is not method notation. There's more to it (and arguably the notation part isn't required), and indeed you can't have both that and FP. OOP is the opposite of "referentially transparent".


Then it really depends what exactly you mean by OOP. Some people use the OOP features for modularity and structuring of the codebase, without any side effects/unmanaged mutation. Often they don't call it "OOP" then, but rather "a module system". But there isn't any theoretical distinction.

For example, I do _both_ Pure Functional Programming _and_ OOP (in the "module system" way) in Scala daily. And it works very nicely IMHO; I encourage everybody to give it a try.

Use traits as interfaces for logical pieces of your business logic. Implement them in (case) classes which receive interfaces to other pieces of logic in their class constructor. Use a so called "Functional Effect System" like Cats Effect or ZIO to avoid raw side effects. And that's it, enjoy and profit.


It's impossible to get people to agree on what OOP is exactly, but almost everyone agrees that hiding/isolating mutable state (inside objects) is an essential part of it. Just google "what are objects in programming".


Totally agree that these paradigms are badly defined. See my other comment where I explain it more https://news.ycombinator.com/item?id=34217547

My point here was that all "OOP" languages (that I know) allow you, with all their "OOP features", to work with objects which actually don't contain any mutable state.

I'll let you be the judge whether it is still "OOP", but I know from experience that it's very practical for Software Engineering.


This is a pet peeve of mine, but referential transparency really doesn’t mean what FP people use it for. They use it as a fancy name for the same thing as pureness. Referential transparency is a syntactic thing and most languages without macros are referentially transparent — see this answer: https://stackoverflow.com/questions/210835/what-is-referenti...


When I talk about referential transparency in the context of FP, it's honestly of no consequence to me what the history of the term in philosophy is (mind you, I still find it interesting -- I didn't know this and it was nice learning about it -- but it doesn't significantly change my understanding of the term in the context of FP).

Technical terms can be overloaded, I don't think this is either surprising or problematic. Astrophysicists classify carbon as a metal; they're not wrong, it's just a peculiarity of the field that's almost always clear from context.

In the context of FP, the term "referential transparency" means that variables can be replaced with their definitions, and vice versa, without changing the semantics of the program. The term coincides with pureness, but IMO it emphasizes something different. "Pureness" is a negative definition: it's the property of there not being any side effects. "Referential transparency" to me indicates what you can do with that purity: it allows you to reason algebraically about your program.

> most languages without macros are referentially transparent

I don't think so. In most languages, you cannot syntactically replace a variable with its definition and expect the semantics of your program to remain unchanged. For instance, in Python, given the following shared prefix:

    state = 0
    def f():
        global state
        state += 1
        return state

    x = f()
The programs

    x + x
and

    f() + f()
do not yield the same result, though they would if referential transparency held.


Method vs non-method calling syntax is a red herring; a method is just a function where you put the first argument on the left instead of inside the parentheses.

Hanging functions off of objects/types isn't really anti-functional either, as long as they don't mutate the subject

Functional programming is mainly about not changing state; you can do that with or without methods
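
A small JavaScript sketch of that (the `translate` example is invented): the free-function form and the method form are the same non-mutating operation; only where the "subject" sits differs.

    // Free-function form: the subject is the first argument.
    const translate = (point, dx, dy) => ({ x: point.x + dx, y: point.y + dy });

    // Method form: the subject is `this`, but nothing is mutated;
    // a fresh object is returned either way.
    const point = {
      x: 1,
      y: 2,
      translate(dx, dy) {
        return { x: this.x + dx, y: this.y + dy };
      },
    };

    translate({ x: 1, y: 2 }, 3, 0); // { x: 4, y: 2 }
    point.translate(3, 0);           // { x: 4, y: 2 }; `point` itself is unchanged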


Well, there's a certain anti-functional feel to it because your functions are tied to the object rather than being able to process any given object or objects of different types.

Difference between functional style and functional language?


No, those things are totally orthogonal and people just mix it up because OOP came with mutations.

Bundling data and functions has pros and cons, but it has nothing to do with functional programming.

Also, there isn't really a good definition of either "functional style" or "functional language". If anything, there is a good definition for pure functional programming, but that's another beast altogether and still is orthogonal to bundling data and functions.


> rather than being able to process any given object or objects of different types

Not necessarily: the functions can still be polymorphic. Rust does a particularly cool thing here, where any struct or trait method can be called in non-method syntax (or even passed as a first-class function!). I believe Julia also has type-based polymorphism while still being considered a functional-adjacent programming language. And then there are other languages where any function in scope can be called in method syntax, without being explicitly associated with the value's type (there's a name for this feature but I can't remember it)

> Difference between functional style and functional language?

I don't know the precise definitions well enough to distinguish them (though I'm not sure they have precise definitions)


> though I'm not sure they have precise definitions

Of course they don't. These "paradigms" like OOP or FP are just human made up terms _without any basis in theory_. Sometimes they can be practically useful when communicating with other humans, like we do now. But more and more people are starting to realize how meaningless/nonsensical these categorizations are (and always have been): FP vs OOP, compiled vs interpreted, static vs dynamic, etc. We might as well categorize languages based on the colour of their mascot.

See this for more https://old.reddit.com/r/ProgrammingLanguages/duplicates/6xt...

The closest thing to the definition of an FP language that I was able to come up with is "based on the Lambda calculus". Haskell is, Scheme is, F# is. But e.g. Scala isn't; its fundamental building block isn't functions but objects. But many people do consider Scala to be an FP language.

So let's all be aware of the limitations of these (pseudo-)concepts.


That's not really true. Yeah, the term "FP" has been watered down, but there was a proper definition: "FP is when every expression of a program is referentially transparent", and that's it. (https://en.wikipedia.org/wiki/Referential_transparency)

In other words, this is not even a property of a programming language. It is a property of the code itself. It's just that some languages make it easy or impossible to build a program in that style - or they even enforce it. Haskell however has escape hatches. And Scala is considered an FP language because, unlike Javascript/Lisp/Clojure/ML/Rust/..., it really allows and supports(!) making the whole program referentially transparent.


Java is referentially transparent. It just really doesn’t mean what you believe it does. It means that one can replace a reference to a thing with the thing itself.

It is a syntax level term, muddying execution model is just not correct usage, see https://stackoverflow.com/questions/210835/what-is-referenti...

What you are looking for is simply and plainly “pureness”.


> Java is referentially transparent

Referential transparency is not a property of programming languages but expressions (or more general, parts of computer programs).

> It means that one can replace a reference to a thing with the thing itself.

That's not the common definition at all. Read the link I posted. In any case, I'm talking about evaluatable expressions, you talk about "references" - however you define that.

Even in your own stackoverflow link, the accepted answer says this:

> the thing that an expression refers to


But expressions don’t mean the thing they evaluate to, this is an ad hoc meaning used in FP circles only. Is `fibonacci 1000` the integer value it evaluates to? No, it is an expression denoting the calculation of that value.

What’s missing from side-effect freedom or pureness? That is the property that allows replacing an expression with its value.


> But expressions don’t mean the thing they evaluate to, this is an ad hoc meaning used in FP circles only. Is `fibonacci 1000` the integer value it evaluates to? No, it is an expression denoting the calculation of that value.

Not sure what you are talking about now.

> What’s missing from side-effect freedom or pureness? That is the property that allows replacing an expression with its value.

I think you are just unfamiliar with the terminology, that's all.

See Wikpedia again:

> If a pure function is called with arguments that cause no side-effects, the result is constant with respect to that argument list (sometimes called referential transparency or idempotence), i.e., calling the pure function again with the same arguments returns the same result. (This can enable caching optimizations such as memoization.)

(https://en.wikipedia.org/wiki/Functional_programming#Pure_fu...)

There you go. All 3 terms in one sentence. Maybe, again, you disagree with Wikipedia and the definition and that's your good right, but I don't see any point in prolonging the discussion over that.


How do you define named functions in something that is referentially transparent?

Can a function-defining expression be transparently replaced with the value it computes?


Well, that depends a bit on what you mean by "function-defining expression".

Let's say `(function(input) { return input })` is a valid expression. If it then doesn't matter (in any circumstance) whether you do

    let myFunction = (function(input) { return input });
    myFunction(1);
    myFunction(2);
or

    (function(input) { return input })(1);
    (function(input) { return input })(2);
that's when you can call the function-defining expression referentially transparent. If, on the contrary, creating a function increases a counter, and you use that counter in your program for something so that the program behaves differently depending on how many functions have been created, then the function-creating expression would cease to be referentially transparent.

I think that's not a bad example actually, because I assume it feels more natural to people to treat an expression that creates a function in the way above and expect that it won't change how the program behaves. And this is really what functional programming is about - it is exactly this feeling and guarantee that FP enforces throughout the whole application, not just in some places.

If you are talking about creating a named function (i.e. a class member) that is not an expression, then there isn't really much point in talking about it since we are now talking about definitions, not expressions anymore. Unless you can change/create definitions programmatically - in that case the code that does it programmatically is the one that needs to be considered, not the definition itself.


What is the value of

  let myFunction = ...
which can replace that whole construct so that the program remains the same? Or is this not functional?


I think you are confused. FP doesn't require any kind of token/syntax to be "replaceable". It requires expressions to be "replaceable" (RT).

A language could certainly treat assignments as expressions (some do) and have them return something, such as the value that is assigned to the variable or always null/unit. But we are now talking about a totally different thing.


I'm just trying to follow the proposed definition of FP.

Turns out there are non-expressions present; yet somehow if just the expressions are referentially transparent, we have FP?

We have to say something about the kinds of non-expression thingys that are required/allowed and what their properties must be.

Assignments can be expressions, but they are not referentially transparent. If an assignment is replaced by its value, it's no longer performing the assignment. Hence, assignments are easily found to be incompatible with FP under the RT definition, which underscores that it is useful.


> Turns out there are non-expressions present; yet somehow if just the expressions are referentially transparent, we have FP?

Why "just"?

> We have to say something about the kinds of non-expression thingys that are required/allowed and what their properties must be.

Well, if there are syntactical constructs that will execute code at runtime, then we can essentially consider them expressions, even though sometimes they are restricted in how they can be used. For example, in Javascript and Java there is an if-syntax (call it an if-statement or whatever) that will execute code, but it doesn't return anything and you can't "assign" the result to a variable. Obviously this breaks referential transparency (or can't do anything meaningful). We can still see it as an expression for the purpose of the definition.

Other than that, as I said in my parallel post, if you have something like templates (which are not executing code at runtime), then they can do whatever you want; the only thing that matters is what expressions they will produce and whether those are RT or not, to fulfill the definition of FP.

> If an assignment is replaced by its value, it's no longer performing the assignment

And the funny thing is, it doesn't matter. Your code might stop compiling because you refer to a variable that isn't declared, yeah. And that's about it. But otherwise, in a fully RT program it won't change the behaviour of the program. EXCEPT if it is a reassignment. Which is why reassignments violate RT and are therefore not a thing in FP - there can be specific exceptions (i.e. shadowing something in a different scope) but those aren't really reassignments, they just sometimes use the same syntax.


To give a concrete example: Haskell supports something called do-notation. You cannot "replace" parts of it, because it is syntactic sugar that will turn into a number of expressions. But all of those expressions are then RT. In the same way you can have a C++ program that is FP (completely RT) but uses templates, which are obviously not "replaceable". I hope that makes it clearer.


I'm familiar with Rust but not with Julia. One pitfall I can see with calling this method-chaining style functional is that you might not be aware of when a mutation occurred. For example, I've made the mistake before of calling .sort() in JS on an array chain, and it mutates in place, which you don't really expect given the other methods. At least Rust makes this explicit.
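
For instance (a small sketch; `toSorted` is the newer non-mutating counterpart and may not exist in older runtimes):

    const scores = [3, 1, 2];

    // Looks like just another step in a chain, but .sort() mutates its receiver:
    const sorted = scores.sort((a, b) => a - b);
    console.log(scores); // [1, 2, 3] -- the original array was changed too

    // Non-mutating alternatives: copy first, or use .toSorted() where available.
    const safeCopy = [...scores].sort((a, b) => a - b);
    const safeNew  = scores.toSorted((a, b) => a - b);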


> you might not be aware of when a mutation occurred

Yeah, this is a risk with any language that mixes some functional features/style in with non-functional features. Rust does a pretty good job of letting you mostly delineate the two, though there are still escape-hatches

I've actually been toying with a language design that has both, but draws a hard distinction between them. We'll see how it works out :)

But most of the time, if you're not using a totally pure functional language, functional programming is more a practice/style/ethos/feature-set than a hard constraint


Julia has the convention that mutating functions have names that end with a bang, e.g.

  sort(x)
returns the sorted object, but

  sort!(x)
sorts x in place, mutating the original object.


iirc this is notation that they adopted from rust.


In Rust, ! signifies invoking a macro, so it couldn't be used like this in a function name even if you wanted to. Also, Rust lets you annotate immutable/mutating/owning for each individual argument type, so there wouldn't be much point:

  fn do_thing(a_owned: Foo, b_mutating: &mut Foo, c_immutable: &Foo) { /* ... */ }
There is a convention to separate the two kinds of functions (consume and return in some, mutate in others), but that's the only commonality


More likely Scheme. Scheme used the ! signifier to indicate that something was being mutated. For instance `vector-copy` returns a fresh copy, `vector-copy!` copies into another vector, changing it.


I’m fairly sure Ruby predates both of these languages by a lot and it does use the ! notation for similar things.


Whether Julia took it from Rust or Scheme, or Rust took it from Scheme, or Scheme took it from somewhere else, my point was that Julia did not invent this notation (this kinda sounds like an insult but it's actually not at all), and it's just a notation common to a lot of functional-ish languages that allow mutation.


We are in agreement that Julia is not the origin of ! for this, I was just pointing out that if you want a historical take (what influenced what), Julia most likely got it from Scheme (whether it originated in Scheme or not is immaterial). Rust doesn't use ! to signify "This is a mutating function", it uses it to signify "This is a macro". Whereas Julia and Scheme use it as an indicator for the same thing: functions that modify their arguments. And Julia is known to have been influenced by Scheme, which existed when Julia's development started while Rust (publicly) did not.


There's nothing anti-functional in bundling related functions into an object.

On the contrary, this helps with modularity, which is a good thing. You can then swap out the object for another object with the same interface, but where the functions (methods) have different implementations.
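
A rough JavaScript sketch of that (the codec objects and names are invented): the consuming code only depends on the shared interface, so either object can be swapped in.

    // Two interchangeable objects with the same interface and no mutable state inside.
    const jsonCodec = {
      encode: (value) => JSON.stringify(value),
      decode: (text) => JSON.parse(text),
    };

    const prettyJsonCodec = {
      encode: (value) => JSON.stringify(value, null, 2),
      decode: (text) => JSON.parse(text),
    };

    // The consumer is written against the interface, not a concrete implementation.
    const roundTrip = (codec, value) => codec.decode(codec.encode(value));

    roundTrip(jsonCodec, { a: 1 });       // { a: 1 }
    roundTrip(prettyJsonCodec, { a: 1 }); // { a: 1 }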


An often overlooked feature of OOP is the use of the pseudo-variable 'this' (or 'self' in Smalltalk).

That is about name-scope. You can refer to your own method simply by its name via 'this', and when you call it you know it can access the state inside that particular object.

Without 'this', an object in JavaScript and other OOP languages could not access its own properties, including its (possibly mutable) state.

Is there an equivalent to 'this' in purely FP languages?


> Languages like JavaScript and Rust (and Go?) have blurred the lines between purely functional and purely imperative. I think I prefer that compromise position myself.

The line-blurring ones are the ones taking the compromise position, right? So you're saying you prefer JavaScript, Rust and Go?


Yes, I personally find those languages to be the sweet spot between heavily object-structured languages like Java and C# (although I haven't used either for 15 years so I can't speak to how they're currently used) and FP languages. JavaScript a little less so; it's just very flexible, and TypeScript adds the typing layer. Rust especially seems to take just enough of the FP style to get some nice benefits while also being a traditional imperative language. Just my personal opinion of course, I'm not an advocate for any particular style.


Both C# and Java have incorporated many things from the FP paradigm.


Man, the downvotes lol. If you disagree just write a reply, happy to be corrected or read others' opinions


[flagged]


Please don't do this. It only makes things worse.

https://news.ycombinator.com/newsguidelines.html


I didn't even disagree with anything, just asked a question! I'm not anti functional or anti imperative, they're both fine.

My point was I don't think it's functional to chain method calls in JavaScript. Maybe it is, although I think you'd very easily run into an unintentional issue, i.e. call .sort() on the array which mutates it.


It's a hair-pulling experience trying to write functionally in JavaScript. All those imperative APIs, like sort as you mention. I will sometimes copy as in [...xs], but I always feel like I am swimming against the current. Plenty more to say, but this node has been flagged; it appears HN censors criticism. Saddening.


Being dead doesn't censor anything. After all, I'm here reading and replying 10 hours later. It just makes it invisible to people without "Show dead" enabled.

Your post most assuredly got down voted to death for calling "HN" "toxic." That's not constructive at all, and it inflames emotions which is the opposite of HN's goals of technical discussion.

I've taken many positions against the HN herd and yes many people do downvote on disagreement (which I agree they shouldn't), but given a bit of time it will typically even out. The new vouch system helps this even more.


>Being dead doesn't censor anything. After all, I'm here reading and replying 10 hours later. It just makes it invisible to people without "Show dead" enabled.

IMO, effectively, it does censor it; not de jure, but de facto. (Hope I used those terms right, IANAL.)

Because my guess is that not many HN users know about the "showdead" setting. After all, not everyone is into tinkering with systems, tweaking settings, etc., even among developers, who I guess form a sizeable percentage of users here. Some of them just want to use the software as needed and get on with their lives, even though they may be technical.


>In those examples you start with an object, call a method on the object returning a new object, then call a method on that and so on. I like this kind of chaining in OOP.

Then this post from my blog might be of interest:

Examples of method chaining in Python:

https://jugad2.blogspot.com/2016/02/examples-of-method-chain...


Not really, I already know what it is and use it all the time. I mean, I just wrote about it. Plus I don't program in python.

Honestly, I just opened your page and it's not a great example. Consider the method chaining in the linked article that operates on an array, it's much closer to real problems imo, and more succinct.


Which linked article?


If a “method” returns a value, is it really a method? Does it mutate “this”? What does it mean to be an object, if you’re not being changed at all?


I wrote that other comment just before dozing off for the night because I was surprised no one else had replied yet. But to expand on it:

In Eiffel (which is an object oriented language), methods on objects are separated into two kinds: commands and queries. This is also known as the "Command-Query Separation" principle. The idea is that commands are the ones causing changes to the systems and queries only inform you about the system. `a_set.add(item)` changes the set, while `a_set.has(item)` tells you about the set but makes no changes to it. It's very useful, and very common. I mean, I'd hope that `some_collection.Size()` doesn't actually change its size, that's not what it exists to do.

https://en.wikipedia.org/wiki/Command–query_separation


Thanks, I wasn’t familiar with the name of this principle!

Though I feel these definitions of OOP are closing in on the Actor model, so exact definitions are not really possible to give here.


Sometimes, depending on the nature of the implementation, Size() might do the work to compute the size, cache the result, and clear a dirty flag on the cache.

Not sure if that matters.


It does not need to mutate this but it can:

1. Read the fields of 'this'

2. Write the fields of 'this'

3. Call other methods of 'this' (including "private" methods in many OOP languages)

That is very characteristic of OOP languages. Does anybody know if FP languages have something like 'this', and if not, what do they use instead?


Yes, it is a method. It would be a “query” if it causes no changes but returns a value.


I’m fanatical about functional programming, and I would never choose this example to demonstrate why I am. The imperative code doesn’t demonstrate any kind of problem (see Rich Hickey’s explanation of transient types), and the functional code demonstrates how but not why. I’d certainly use a modified version of this example where the inputs might be mutated by other parts of a program. But without that, this is just philosophical navel gazing pretending to explain how to solve mysterious problems.


>Avoid bugs related to mutable state.

And generate a lot of overhead both in execution time and memory. Which is unacceptable in games where every ms counts.

>Games tend to define update() functions which mutate some state each tick. We can avoid mutation by taking game state as an argument and returning new state. Now our game is much easier to test too! We simply call update with some game state and assert that the result is what we expect. Imagine how hard it would be test the imperative update function!

As a game dev I kinda cringe at the idea of copying the WHOLE world state just to update it (and if I understand correctly we copy the state EVERY TIME we want to update, i.e. from every actor instance?).


Immutable data structures prevent you from having to copy the entire state. In fact the first "copy" is essentially a no-op. Compare it to a "copy on write" filesystem, where the changes you make are the only thing that needs to be copied/written.
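
Even with plain JavaScript objects, an update only has to rebuild the path it touches; everything else is shared by reference. A rough sketch of the idea (the state shape is made up), not how a real persistent collection is implemented:

    const state = {
      physics: { gravity: 9.8 },                 // large, unchanged subtree
      player:  { hp: 100, pos: { x: 0, y: 0 } },
    };

    // "Update" builds a new top-level object, but only the changed path is new;
    // the untouched subtrees are shared by reference, not copied.
    const next = {
      ...state,
      player: { ...state.player, hp: 90 },
    };

    console.log(next.physics === state.physics); // true  -- shared, not copied
    console.log(next.player === state.player);   // false -- only this path was rebuilt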

https://en.wikipedia.org/wiki/Persistent_data_structure


On the other hand, it is cringe-worthy when people use mutable state and then are unable to properly parallelize the workload onto multiple cores. I mean, if I understand correctly, we have many cores sitting around and only 1 is working? Sometimes also a few, before we hit the bottleneck of mutual exclusion.


On the other other hand, if you've got a game state, and you want multiple cores to do the update, and you're not using mutation, then when core 1 makes a change (from state 0 to state 1, say), then when core 2 makes a change, it has to start from state 1, not from state 0, so that core 1's change doesn't get lost. So then you have the problem of updating all the cores (and all the threads) with state 1 before any of them make any further changes.

The way I would do this imperatively with N cores is, I would have the game state be an array, and I would have each core update 1/Nth of the array. That might thrash the cache, but it wouldn't need any locking at all.

Functionally, it could probably be done better to have each core produce a new sub-state, and then have one thread combine the N sub-states into the new state, and update all the threads with the new combined state.


Not to disagree, but to be fair modern fp runtimes use data structures that wouldn't have to copy so many bytes as the whole state.

A nice showcase of this idea is in Juan Pedro Bolivar Puente's CppCon '17 talk Postmodern Immutable Data Structures: https://youtu.be/sPhpelUfu8Q


Maybe don’t comment on things you don’t know much about?

Lazy data structures can be very efficient (slower than mutable ones in single-threaded execution, but much faster for concurrent usage) and they definitely don't copy everything every time.


Let's remember that we are having this conversation in the context of this article.

I know something about lazy data structures from my IOI days, and I'm pretty sure I know about game development (although admittedly I forgot about this stuff while reading the post, as it is not mentioned in the article itself; other comments provided nice resources).

While they are extremely efficient, you still have an overhead. It is obviously minimal, but it can matter with ±10000 updates (game logic, actors only, with some overhead by itself) per frame - not per second. O(log n) is not dismissible (in some hardcore cases I would say that even O(log log n) can have a slight impact, and if you can have a steady 60fps rather than 59fps you want that).

Game development also doesn't really align with functional programming (as described in the article). I mean, the main thing to care about is speed, since it allows us to do more fancy stuff. Then again, I'm not an engine developer, so who knows, maybe it's actually better and some big engines do use it at their core (if anyone knows, I'm interested). But in most cases it's a lot harder to do (since games do SO much different stuff that it's really hard to separate parallelizable from sequential tasks anyway, and it simply doesn't make sense in most productions).


I've done about a hundred coding interviews in the past couple of years, and one thing I've noticed is that unless you are really, really good at functional programming, don't do functional programming in an interview. Many people try it, because they think it is more impressive or proper or whatever, and then it doesn't work correctly, and they flounder because debugging within a functional idiom is different, if not more difficult. Then they switch to imperative, and things get better. I've seen maybe 2 or 3 people that can handle debugging when using the functional approach.

As a side note, those 2 or 3 were not any more or less capable, in the end, than the other candidates.

I don't quite know what to do with that datapoint, but it is interesting.


I did this, but practiced like crazy before the interview since I knew what we would be building during the interview.


I quite like declarative code. SQL, regex, and recently XSLT just seem to click in my head. It's like telling the computer what output you want. Like providing the output scaffold and leaving the computer to fill it in.

The more I read about functional programming the more it seems like that. Am I correct in my assumption? Is functional programming declarative? Is all functional programming declarative?

APL seems a little like this from my research.

The thing that confuses me is that under the hood isn't the computer doing those loops? It's just hidden from you?


No, it is not. Functional programming (at least before the term got watered down) is all about referential transparency, which has some major implications for how you read and write code, e.g. allowing you to apply equational reasoning and making effects explicit. Because it requires some more abstract concepts to be practically useful, you will also see the same people tending more in the direction of declarative programming - but strictly speaking the two things are independent of each other.

> The thing that confuses me is that under the hood isn't the computer doing those loops? It's just hidden from you?

No. Under the hood the computer is doing conditional jumps. And yeah, those are hidden from you. :)


> No. Under the hood the computer is doing conditional jumps. And yeah, those are hidden from you. :)

Of course my apologies. I was more thinking about in the language it's written in (C for some of it?). But yes at the assembly level.

It took me a while to get my head around it for J (array language written in C), which under the hood must be using a loop but you don't explicitly loop in array languages.


> was more thinking about in the language it's written in

Depends, but does it matter? It's usually recursion that is executed fast and with little overhead.


Declarative versus imperative is sometimes in the eye of the beholder.

Helping a C++ friend of mine with some Scala code the other day, there was a line like this:

    val requests = availableDrivers.filter(notPhonedHome).map(pleasePhoneHomeRequest)
When I explained filter and map to him, he thought of them as imperative. "Okay, filter loops over the array of available drivers and collects the ones that haven't phoned home in a new array. Then map loops over that array, creates a status notification for each one, and collects them in another array."

I thought of this line in a more declarative way: requests is a list of phone home requests for the drivers that haven't phoned home recently.

Both of us used a simplified mental model that didn't fully reflect what the computer is doing. My model didn't account for memory allocations. His model didn't account for potential optimizations. Neither of our models accounted for type-level trickery, possible side effects of pleasePhoneHomeRequest, or error handling (although his was closer to accounting for side effects and error handling.) Does the intermediate array actually get constructed? What happens if pleasePhoneHomeRequest throws an exception? Is requests a concrete collection type, or is it something akin to an iterator?

In practice, a language like SQL has similar concerns. Occasionally you may need to be aware that an awkwardly constructed query requires space that exceeds the size of the relevant data, or that a delete cascades in a way that is inherently slow.

tl;dr Functional programming idioms are designed to let you think more declaratively when you read and write code, but sometimes the declarative view isn't adequate for understanding the code.


I was trying to evaluate FP one time, trying to get it, but I can't envision using it in a pretty complex front-end where many things inter-depend on each other. With imperative programming I can easily step through the code and reason about the flow of execution; looking at FP, I can't help but think the codebase would become very hard to reason about and extend. Side effects are inevitable, so if you are going to have side effects in a monad, what's the point anyway; one way or another you have side effects. Unit tests may be easier, but it's still trivial to do mocks. Mutability? I'm using a custom map type for global state entities that helps me deal with it, and I'm also very mindful whenever I assign anything to an object property (in JS).

I respect FP, but I don't envision using it personally, and by the way, I don't get how OOP is getting bashed on HN so much. Like google FP + HackerNews and there are articles about FP like this one, google OOP + HN and you get "OOP is not just bad, it's super-duper-hyper-ultra atrocious".


My team builds frontends in Elm and backends in Haskell and we experience the opposite: FP made the code very easy to reason about and plugging things together is easy too.


> Side effects are inevitable so if you are going to have side effects in a monad, what's the point anyway, one way or another you have side effects.

Imagine a style of programming where everything is assumed to be asynchronous, all the way down to language primitives. Someone who only ever wrote AsyncScript would look at TypeScript and say "Requests are inevitable, you're always going to end up in async/await anyway, so what's the point?"

This is exactly the relationship between imperative and pure-functional programming: the point of having explicit effect monads (or some other tracking system) is that it makes it possible to write code that isn't in them.


I agree, I missed the point. Come to think of it, this is probably the biggest advantage of functional over OOP.


I did try to evaluate it again, and I think the biggest deal breaker that would prevent me from using it is the added complexity and boilerplate. That impression is definitely amplified by me not having practical experience with it; nevertheless, it's plenty more code to do the same thing, and extra effort to split the side effects from the pure operations, going out of your way to make the data processing pure, which adds boilerplate to the code and also some runtime overhead. Imperative feels like the shortest distance between 2 points.

I feel like I would be more likely to get burned by the above point (complexity and boilerplate) vs. a mutated state bug or a "gorilla is holding the banana" problem, especially considering that the former is an ongoing effort and the latter is things you only need to deal with at times, if ever.


I suggest trying to build an app using Angular and just observables to handle all state + interactions. In my experience, it's the best way to build a UI where "many things inter-depend on each other". It's on a whole other level compared to using React with `[name, setName] = useState("")`


> I don't get how OOP is getting bashed on HN so much.

I think a lot of the OOP bashing comes from soooooo much bad OOP code out there (I'm looking at you "enterprise JAVA")!

This is also the reason some people have a bad taste in their mouths about other things: PHP, JavaScript library hell, etc.


Functional combinators over generators (output) and iterables (input) work well for this type of problem, including TS type safety, e.g. [0]. We use this in production in many places and it has worked well for years for us.

[0] https://observablehq.com/@mirek/project-euler


I'm curious about something. Is the time a functional program takes to run considered a side effect? You might think that in metaprogramming to generate a program you just care about the program you receive - it is a pure function, right? But there's a difference between waiting around for a long time for the result versus getting it instantly.


Yes, the passage of time is a side effect. If your functional program is measuring the passage of time, it must accept time-elapsing arguments which are independent of its computation. If its own computation’s elapsed time is part of the computation, that’s inherently impure. That’s okay, but it’s a limit of functional programming purity. If you’re not measuring the function’s own passage of time but rather time passing as a concept, you can just pass arguments to the function which satisfy that interface and test how it’ll behave. That’s what’s awesome about FP: time is a parameter to your function like any other, and you can call it with any equivalent thing.
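
A small JavaScript sketch of that last point (the names are made up): hand the clock to the function as an argument, and it stays deterministic for whatever clock you pass.

    // `now` is any function returning milliseconds; the caller decides which clock to use.
    const isExpired = (session, now) => now() > session.expiresAt;

    // Production: the real clock
    isExpired({ expiresAt: 1700000000000 }, Date.now);

    // Tests: a fixed, fake clock -- same inputs, same answer, every run
    isExpired({ expiresAt: 1000 }, () => 999);  // false
    isExpired({ expiresAt: 1000 }, () => 1001); // true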


(I didn't mean a function that measures time passing, I just meant the time that a computation takes.) Regarding my question which you answered, this means there is no such thing as a pure function then, since any function has the "side effect" of making time pass that wouldn't have if you hadn't called it.


The time passes whether or not the function is called, calling a function does not cause time to pass.

Time also seems to be a pure function: current state of the universe -> next state of the universe, which is neat.


> Time also seems to be a pure function: current state of the universe -> next state of the universe, which is neat.

Certain interpretations of quantum mechanics would disagree that, if you put a given state of the universe into the "iterate the universe's state" function, you'll always get the same universe out the other end. The idea that quantum state transitions are truly random would foreclose on that.


I know the author was only trying to show some benefits of FP but choosing such a trivial example gives a bit of a one-sided view of FP. Try solving the harder AoC problems and you will probably need mutable state (at least I do). Also I am not sure if for example recursion is always the best option when performance becomes a factor.


YMMV but each year I use Haskell to solve AoC and I've never needed mutation or any particularly intensive `IO/State` Monad trickery to get any day completed regardless of how far into the challenge it goes.

Part of the charm of purely functional solutions for me (and why I'm hooked) is that it forces you to think about the representation of the data that you would need in order to have a "clean" solution without mutation. E.g. IntMap for index of arrays and folding with Set/Map operations to build state without accruing algorithmic complexity. I've found that I do a lot more "deep thinking" when forced into using (or creating) the right data-structures and this is something I enjoy a lot more than purely iterative debugging/hacking, and it's more satisfying (to me) at the end of it all.

My go-to for AoC is almost always Attoparsec + Containers with a recursive "solve" function, and the complexity seemingly always stays manageable. I'm no savant, but feel free to give examples of a tricky task to solve functionally and I can share a specific solution.


I suspected my novice-ness could be the reason. Still, I could probably solve them without mutable state, but I wasn't satisfied with the readability of my code. For example, day 12 and day 13 were a mess for me when trying for purely functional.

Would love to see your solution for one of these problems!


Day 12 was a little hacky because of my foolhardy reluctance to ever use the obvious Dijkstra's Algorithm solution and always rolling my own organic recursive "flood fill" on the fly instead.

    -- ((start, end), heightMap)
    type Input = ((Coordinate, Coordinate), BoundedHeightMap)

    -- (length remaining, next coordinate)
    type DirectionsMap = M.Map Coordinate (Int, Coordinate)
I built up a map of "where to go next and how far left" and flood-filled it starting from the end and calculating for neighbours of those that just changed value for each step until convergence, which ended up being super useful for Part 2 since I got it "for free":

https://github.com/cronin101/AdventOfCode/blob/master/2022/D...

Day 13 on the other hand is something I'm far less ashamed of and really lets Haskell shine, it basically didn't need any code at all. I just defined how to parse an equivalent data representation, how it was sorted via Eq/Ord typeclass (following the brief), and used `compare` to do the lifting.

    instance Eq PacketData where
      (L l) == (P p) = P [L l] == P p
      (P p) == (L l) = L l == P p
      P p == P p' = p == p'
      L l == L l' = l == l'

    instance Ord PacketData where
      compare (L l) (P p) = compare (P [L l]) (P p)
      compare (P p) (L l) = compare (P p) (P [L l])
      compare (P p) (P p') = compare p p'
      compare (L l) (L l') = compare l l'
https://github.com/cronin101/AdventOfCode/blob/master/2022/D...


Thanks for sharing! Especially day 13 is really compact and elegant. Do I understand correctly that you define parser combinators to parse the packages? (I'm not really familiar with Haskell nor parser combinators)

For me the parsing was the hardest part when trying without mutable state so this helps a lot.


Yup! Parser combinators are incredibly useful once you learn the basics well. They compose predictably and you can test the individual parts pretty succinctly (I usually use an inline eval'd comment in vscode to smoke test, as you can see) and allow you to build an arbitrarily complicated data structure at parse time.

I'll admit first few times were a bit of a headache learning the library (https://hackage.haskell.org/package/attoparsec-0.14.4/docs/D...) and idiomatic patterns, plus how to test/troubleshoot. But now Haskell is my go-to language for parsing and I can parse pretty much any AoC input into a suitable representation, with the forethought taking no longer than reading the brief, by composing parsers for the different parts of the input string into a larger parser.

Also, disclaimer, since I'm the only one reading my code I often make oneliner "pointfree" parsers using dense syntax/tricks (as a little extra mental puzzle) instead of vastly more readable "do notation", so don't let the special character soup put you off.


I solved the first 13 days mostly in functional ways [1]. The 2 non-functional things I used were arrays and hash tables. OK, and outputting things ... .

For arrays I wrote a functional map function, since my language did not have a functional one and I only used it for transforming input, while preserving the original input under a different binding.

For hash tables I could have used functional sets, which I only bothered with in later puzzles, because I wanted to parallelize things.

I did parallelize quite a few things in later puzzles, just because it is so easy to do when you have pure functions.

So at least 26 of 50 puzzles are perfectly possible in functional style.

[1]: https://notabug.org/ZelphirKaltstahl/advent-of-code-2022


>Imagine we want a range function where range(5) returns [1,2,3,4,5].

The imperative solution makes perfect sense and is fast, everyone can understand it including future me. I don't have to think about it at all and just understand it as I'm reading it.

In the real world out there this is something that carries a lot of value.


> everyone can understand it

The recurrence relation range(n) = range(n-1) : n is easily understood, and it already is the naive functional implementation.
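
Written out in JavaScript, the recurrence is nearly a direct transliteration (a sketch of the definition only, not a claim about performance):

    // range(n) = range(n-1) followed by n, with range(0) = []
    const range = (n) => (n === 0 ? [] : [...range(n - 1), n]);

    range(5); // [1, 2, 3, 4, 5]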

These kinds of recursive/recurrence definitions are part of K-12 curricula, and not just at the end of it.

Functional programming is not inherently harder to understand than imperative programming. Your school blackboard has been functional and stateless since grade 1. At some point somebody tacked state on to your mental model, and somehow I=I+1 now makes sense.

To me it looks like a purely educational problem, when FP is dismissed for not being easily understood, compared to the IP abstractions.


I get why functional programming, but why functional languages? The article uses JavaScript. I do c# professionally, which can do similar, and have only dabbled in f#. Seriously asking so I can understand better: what can f# do that c# can’t?


Higher order functions, currying, discriminated unions and using pipelines to start with. I guess all concepts are technically possible in C# but the language is not really designed for it.


It's not a question of what one can do that the other can't. They can both achieve the same thing (they both map to CIL after all). It's usually a question of ergonomics - how much work do you want to put in to achieve the same goal. Things like algebraic data types (discriminated unions in F# speak), currying, etc. all are just a matter of making the compiler do some things that you'd do by hand in a language like C#. While you can accomplish the same thing in C#, languages like F# do it for you so you get both ease (compiler does the work) and correctness (compiler not likely to mess up where you might by hand). I bias towards the F# world for the ease and correctness reasons, but there's nothing stopping a careful programmer from achieving the same thing in the C# world.


It's awkward to do Functional Programming when all the libraries (including the standard one) for your language are built around mutation.

F# (and other FP languages, like Scala) have standard libraries which come with (and are built around) persistent collections (immutable collections, but with smart structural sharing when doing copy-on-write). And all the other libraries and the whole community is designed for Functional Programming.

There are also other aspects: FP languages like F# or Scala tend to have shorter and prettier syntax, and many of the FP idioms are easier to express in these languages. Good examples would be ADTs (algebraic data types), or Computation Expressions in F# and their Scala analogue, the for-comprehension. But these are more subjective reasons than my former point.


I would add to the existing replies that recursion is probably an important part. The language needs to make it feasible to express things with recursion, by making it possible to use it for deeply recursive scenarios without hitting language limitations. There are not many things more disappointing than not being able to properly use recursion in a language.


I've been using heavily functional c# at work with LanguageExt for about a year. It has been quite painful, especially when it comes to debugging but also because things like pattern matching are really tedious (eg. imagine a class with two Eithers, now every Match needs to handle 4 possibilities when I may actually only care about 1).

Unless f# has massive advantages over c# when it comes to functional programming, I would say this paradigm is not worth it in 90% of my programming projects.


More important is what functional languages leave out or restrict. Imperative languages can add functional language features, but if functional purity is not enforced, the full benefit of functional programming is not realized.


Express programming concepts without 12 keywords


  type User = Admin | Basic
Sounds trivial but it changes everything.


Tiny examples like these really don't demonstrate the big benefits of FP. However, they do illustrate the altitude of approach to problem solving.

In the imperative solutions, the programmer is essentially thinking and acting like a problem solver AND computer. There's the human element of understanding the problem and devising a solution, and then there's the step-by-step implementation. This is fine, and it's quite good for learning how to think. But it doesn't take much experience to ascend above it...

The FP solutions illustrate a problem solver "telling" the computer what operations to perform on a dataset to get the result.

At first even the tiny FP example may be off-putting to someone from an imperative background. It may seem a bit obtuse. But really, it is just thinking about the major steps necessary to transform one set of data into another, and then knowing which tools/building blocks to use to achieve that goal. You intentionally stop caring about how to add elements of a list together and instead just choose the right functions to map/apply/(reduce) your data with.

Now what I have not yet wrapped my head around is the Clojure transducers. I get the concept, but haven't quite grokked it fully. They are the step beyond the data.do_this.then_do_this.etc.

But for me, FP approaches, even applied to traditionally imperative languages (even Python, with some struggling with inconsistent and unergonomic details) provide some key benefits.

1. Pure functions can be understood and then forgotten about; once you know what it does, your mind is free to never worry about unexpected things going on inside. This is counter to typical OO functions which mutate objects at will. The mental load is so much higher with the OO mutation functions.

2. Pure functions are vastly easier to test. Couple this with using simpler, decoupled data types (hashmaps, arrays, structs... things which generally do not require special setup and teardown due to instance creation activities) and you have a much easier time writing tests that cover 100% of lines/branches - in fewer lines of code than incomplete OO tests.
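
As a rough JavaScript illustration of point 2 (the function and the assertions are invented): there is no object graph to construct or tear down, just plain data in and plain data out.

    // Pure: the output depends only on the arguments; nothing outside is touched.
    const applyDiscount = (cart, percent) =>
      cart.map((item) => ({ ...item, price: item.price * (1 - percent / 100) }));

    // The "test" is a plain assertion over plain data -- no setup, no mocks.
    const input  = [{ name: "book", price: 10 }];
    const result = applyDiscount(input, 50);

    console.assert(result[0].price === 5);
    console.assert(input[0].price === 10); // the input was not mutated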

All that said, I am less enthusiastic about passing anonymous functions everywhere as is common in modern JavaScript. That increases the cognitive load for me simply because it's harder to remember what does what and where things are happening. So I try not to have more than 1-2 layers of functions passed to functions.

There is one FP-related challenge which doesn't really have a solution as far as I can tell: more small functions means more function names to think of. To keep them recognizable and memorable, the names must be pretty descriptive. This means you can end up with some_pretty_long_function_names. It's a small concern, however. And as a bonus, your code ends up reading a lot more like human language - meaning you can give a code tour to a less technical person and they may actually be able to follow.


I never would've thought religion will be a thing in programming.


Then you haven't been paying attention. It's always been like this.


A critique of FP per "mainstream": https://www.reddit.com/r/DilbertProgramming/comments/qg99f0/... (The title is sensationalist, the opinion more nuanced.)


So if I use/chain Object.methods in JS I'm basically writing functional programming?

Is that really what FP is? I had the impression that FP was something different.


Yeah just functional programming isn't that mind-bending IMO. F# and OCaml are pretty easy to pick up.

I think the confusing bit is that one of the most popular functional programming languages is Haskell... which is really hard. So certainly in my mind I had "functional programming = like Haskell = stupidly difficult" for ages. But Haskell is difficult for other reasons: really abstract type system, functions are pure, etc.

Oh, also, another thing is they all tend to really encourage you to use immutable singly linked lists and write recursive functions that match on (head :: tail)... but I don't think you have to do that, and IMO it's the wrong thing to do a lot of the time (it's semantically nice but computationally terrible).
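
Roughly what that pattern looks like when ported naively to JavaScript arrays (a sketch; on a real linked list taking the tail is O(1), but `slice` copies it, so this version does quadratic work and uses linear stack depth):

    // sum []            = 0
    // sum (head : tail) = head + sum tail
    const sum = (xs) => (xs.length === 0 ? 0 : xs[0] + sum(xs.slice(1)));

    sum([1, 2, 3, 4]); // 10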


Yep! If the methods create new objects, and leave the method arguments unchanged, you'll likely be writing code that's vaguely functional.


At its core, FP is programming using only functions. The rest follows.


Functional programming is like having container-technology right in your programming language.

I.e., you can fully contain anything that a called function does.

It is immensely powerful.


I understand this “looks cleaner” but we should really understand what the transpiler or compiler is doing behind the scenes. Functional code like this might introduce bottlenecks in performance.

In some languages the imperative for loop is the most efficient way to iterate; sometimes the compiler does optimize functional code into imperative code.


In this example, I would start with the FP solution because that's how I happen to think. Probably familiarity bias, but I find the cognitive load much lower.

If performance becomes a problem (it might, after all there are several more passes than strictly necessary), have tests in place and refactor to an optimised imperative version.


Biggest thing I got from functional programming: avoid state if you can.

Sometimes functional style isn't as clear (especially recursion for me). But things like map and filter are usually a lot easier for me to read than for loops.


Especially, avoid shared mutable state if at all possible. That's the biggest thing I got from FP, too. FP says that shared mutable state is evil, and FP is absolutely right about that.


Recursion in production code? Say it ain't so...


FP feels neat, but good luck debugging it.


Weird. Functional code is usually easier to debug in my experience because it's based more on composition so you can test components much more easily and isolate error causing sections. It's a lot easier than a 1k SLOC C function (or even 100 SLOC one) where you have to use a debugger or introduce logging (printf debugging) to try and isolate the error.


Yeah, this has generally been my experience with F#.

On top of all that, runtime errors are way less common because the compiler catches so much more.


FP hides the individual elements behind neat functions. If you are investigating to find a single outlier that broke your program, you need to examine the data points one by one.


> you need to examine the data points one by one.

What does this phrase mean?

And yes, FP promotes "neat functions", which isolate behavior and consequently can be evaluated individually and in combination with each other to find the error. It's not a hard problem, and much easier than the 1k SLOC (or worse, several 10k SLOC) C functions I've had to debug in my career.

I have repeatedly reduced that kind of garbage down to 10-20% of the original code by applying a functional style and decomposing the horrendously complicated logic into something sensible and, wait for it, debuggable. Why was it debuggable? Because unlike the original it was comprehensible.


FP assumes that every component works correctly for all input data. Even the example given in the OP assumes that the "+" operator will work correctly for all of the data entries. What if one of my inputs was an imaginary number? What if I want special treatment for specific entries?

Not all problems generalize.


> What if one of my inputs was an imaginary number?

Then the code would call that out and not compile and run until you handled that case (at least in F#; I believe OCaml and Haskell have strong typing as well for this reason, but it's been a while).

One of the huge upsides of most FP is letting the compiler do a ton of the heavy lifting because it can better check the "sanity" of what you've told it to do. If it's possible to pass an imaginary number to that function, it will say "hey, you could pass an imaginary number to this, and you don't handle it, here's some red squiggly underlining until you deal with that".

There's lots written about this. Making error states unrepresentable and domain-driven design are the two main topics I've come across that really hammer home how a strong type system and a functional mindset can create extremely bulletproof code.

It's trivial to debug (mostly because 90% of the time, if it compiles it runs) and it's trivial to adjust (because the compile errors light up everywhere you need to adjust and you just do that).


My original reply was just before heading to bed, but I have a real question for you: Do you think this is FP specific?

> Even the example given in the OP, it assumes that the "+" operator will work correctly for all of the data entries. What if one of my inputs was an imaginary number? What if I want special treatment for specific entries?

Do you really think only functional programming encounters the situation where the data may not all be valid? You think that this imperative style JS is going to work any better on heterogenous data:

  sum = 0
  for(const item of data)
    sum = sum + item
That's going to fail in the same way, and it's not functional, just imperative. So what is your argument about FP here?


> FP assumes

No, fools assume. FP assumes nothing, it's a non-thinking thing and consequently incapable of assuming. People assume, and we all know what assuming does. So don't.

Sensible people validate. Be sensible, don't be a fool.

> What if I want special treatment for specific entries?

Then do it? What's the hold-up?
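
A sketch of both in a functional style (made-up data): validate first, then give the special entries their special treatment:

  const entries = [1, "oops", 2.5, 40];

  const total = entries
    .filter((x) => typeof x === "number" && Number.isFinite(x)) // validate
    .map((x) => (x > 10 ? x * 0.9 : x))                         // special-case large entries
    .reduce((acc, x) => acc + x, 0);   // 39.5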


My debugger works regardless of the paradigm, maybe step up from vim and emacs?


The cout school of debugging has worked for me anywhere with output (and I don't understand the point of computing with no output?); you don't need a fancy environment for that.

Stepping through code is nice when it works, but it often falls apart when there's concurrency, since all you get to step through are timeouts.


You need to improve your debugging experience; that is why things like dynamic instrumentation and dynamic breakpoints exist.

There is no output in GPGPU for example, unless you consider updating a matrix set as output.


Thanks! It has been excellent luck. I always know what I’m debugging because the input and output are the only factors I have to consider.


liked it


> Why use functional programming

4. Make most code harder to read



