
> personally I find my brain works very much like a Turing Machine

Exactly this. Here's what baking a cake looks like in FP:

* A cake is a hot cake that has been cooled on a damp tea towel, where a hot cake is a prepared cake that has been baked in a preheated oven for 30 minutes.

* A preheated oven is an oven that has been heated to 175 degrees C.

* A prepared cake is batter that has been poured into prepared pans, where batter is mixture that has chopped walnuts stirred in, where mixture is butter, white sugar and brown sugar that has been creamed in a large bowl until light and fluffy.

Taken from here: https://probablydance.com/2016/02/27/functional-programming-...




I actually don't know of any functional programming languages that don't have syntactic and semantic support for writing step-by-step algorithms.


Could you elaborate on this a bit? Basically, calling one function from another is how a step-by-step algorithm would work in FP, no? And pattern match on what comes in, and return an immutable copy.

For example, you can put functions in a list and push a data structure through them, like a pipeline.
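
Here's a minimal Haskell sketch of that idea (the names are made up for illustration): a starting value folded through a list of functions acts like a pipeline.

    -- fold a value through a list of functions, left to right
    pipeline :: [a -> a] -> a -> a
    pipeline fs x = foldl (\acc f -> f acc) x fs

    main :: IO ()
    main = print (pipeline [(+ 1), (* 2), subtract 3] 10)  -- ((10 + 1) * 2) - 3 == 19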

edit: https://probablydance.com/2016/02/27/functional-programming-...


The control structure that takes the different functions/values and glues them together is what makes your code imperative or declarative. While there is a lot of overlap between declarative style and FP, they don't always coincide.

In haskell, for instance, the do notation lets you write imperative code:

    f article = do
        x <- getPrices article
        y <- book article
        finishDeal article x y
...and then the compiler desugars it to a more declarative form.
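
Spelled out, that desugaring is just nested binds; the do block above is equivalent to:

    f article =
        getPrices article >>= \x ->
        book article >>= \y ->
        finishDeal article x y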


In fairness, we could be in the List monad here, and this would effectively be a list comprehension rather than an imperative program. Even if we are in IO, `getPrices` and `book` may never execute --- even `finishDeal` may never execute! --- depending on non-local details that aren't shown here.

The code certainly "looks imperative" but it's still a declarative program --- the semantics are rather different from what a typical "imperative programmer" would expect.
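
To make that concrete, here's a toy example (unrelated to the code above): the same do shape, instantiated at the list type, is just a comprehension.

    pairs :: [(Int, Char)]
    pairs = do
        x <- [1, 2]
        y <- "ab"
        return (x, y)

    main :: IO ()
    main = print pairs
    -- [(1,'a'),(1,'b'),(2,'a'),(2,'b')]
    -- i.e. exactly [ (x, y) | x <- [1, 2], y <- "ab" ]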


You miscounted the number of negatives in the comment you replied to.


The same can be said about imperative languages supporting FP concepts: they have it, but it's just not the same.


Okay, so first of all this is an excellent joke. But it's not that great of an analogy.

This quote chooses one of many FP syntaxes. It's cherry picking. It uses "a = b where c = d." That's equivalent to "let c = d in a = b." Let will allow you to write things like:

    let
        cake_ingredients = [butter, white sugar, brown sugar]
        batter = cream(ingredients=cake_ingredients,
                       dish=large_bowl,
                       condition=LIGHT_AND_FLUFFY)
        prepped_pans = pans_full_of(batter)
        oven = preheat(your_oven, 175 C)
        cake = bake(prepped_pans, 30 minutes)
    in
        dessert_tonight = cooled(cake)
This isn't where FP and imperative are different.

What's really different is that the let statement doesn't define execution order. That's not so relevant to this part of the mental modeling though.

I think it's great that I can choose between "let ... in ..." or "... where ...". In real life, for a complex bit of language, I happen to often like putting the main point at the top (like a thesis statement), then progressively giving more details. Mix and match however's clear.
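
As a tiny illustration of that choice (a made-up definition, just to show the two shapes):

    -- details first
    area r = let p = pi * r in p * r

    -- main point first, details below
    area' r = p * r
      where p = pi * r

    main = print (area 2.0, area' 2.0)  -- same result either way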


Perhaps it's the analogy leaking, but in baking, order of operations matters, and some operations must be done in parallel (pre-heating, based on initial oven state) to produce a good end product.


Yes, and this is one of the areas where functional programming really shines. An imperative program is defined as a series of ordered steps and the compiler can't (in general) reorder steps to optimize use of resources because the steps could have arbitrary side-effects.[1] The FP version is essentially a dependency graph which constrains the order of operations without mandating a specific final order. The pre-heated oven is needed for baking but not for the batter, so these parts can automatically be evaluated in parallel just by enabling the multithreaded runtime.[2]

[1] Certain primitive operations can be reordered but that depends on the compiler having access to the entire program. A call to a shared library function is an effective optimization barrier for any access to non-local data due to potential side effects.

[2] For the purpose of this example I'm assuming the unused `oven` variable was meant to be passed in to the `bake` function.
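
To sketch that in Haskell (all names are hypothetical stand-ins, with two sums playing the role of expensive, independent steps): `par` sparks one step on another core while the main thread forces the other, and `bake` is the join point. The hint cannot change the result, only when things are evaluated.

    import Control.Parallel (par)

    preheatedOven :: Int
    preheatedOven = sum [1 .. 50000000]   -- "heat the oven"

    batter :: Int
    batter = sum [1 .. 60000000]          -- "mix the batter"

    bake :: Int -> Int -> Int             -- needs both inputs, so it joins them
    bake oven mixture = oven + mixture

    main :: IO ()
    main = print (batter `par` (preheatedOven `seq` bake preheatedOven batter))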


> the compiler can't (in general) reorder steps to optimize use of resources

I'm not sure what you mean by that, because compilers reorder instructions to improve performance all the time (and CPUs do it dynamically too).


Compilers and CPUs only reorder over tiny instruction windows. He's talking about re-orderings over enormous windows, in a way that requires whole program analysis.

But that doesn't really happen in reality. FP languages promised auto-parallelisation for decades and never delivered. Plus you can get it in imperative languages too - like with Java's parallel streams. But I never see a parallel stream in real use.


It's not completely automatic but it is fairly close. If you enable the threaded runtime then "sparks" will be evaluated in parallel. You do have to say which expressions you want evaluated as separate "sparks" with the `par` operator but that's it—the runtime manages the threads, and common sub-expressions shared by multiple sparks will typically be evaluated only once. There are no race conditions or other typical concurrency issues to worry about since the language guarantees the absence of side effects. (That is the biggest difference between this and Java's parallel streams: If the reduction operation isn't a pure function then the result is undefined, and there isn't anything at the language level in Java to enforce this requirement.)

EDIT: An example of effective parallelism in Haskell:

    import Control.Parallel (par)

    fib n
        | n < 2     = 1
        | n >= 15   = b `par` a `seq` a + b
        | otherwise = a + b
        where a = fib (n - 2); b = fib (n - 1)

    main = print $ map fib [0..39]
Note that the implementation of `fib` has been deliberately pessimized to simulate an expensive computation. The only difference from the non-parallel version is the use of `par` and `seq` to hint that the two operands should be evaluated in parallel when n >= 15. These hints cannot change the result, only the evaluation strategy. Compile and link with "-threaded -with-rtsopts=-N" and this will automatically take advantage of multiple cores. (1C=9.9s elapsed; 2C=5.4s; 3C=4s; 4C=3.5s)


Yeah, I know how it works, and the level of automation is the same in all modern languages - as you note, Java's equivalent of "par" is writing ".parallelStream()" instead of ".stream()", so no real difference beyond the language-enforced immutability.

But it doesn't actually matter. How often is parallelStream used in reality? Basically never. I would find the arguments of FP developers convincing if I was constantly encountering stories of people who really wanted to use parallelStream but kept encountering bugs where they made a thinko and accidentally mutated shared state until they gave up in frustration and just went back to the old ways. I'd find it convincing if I'd had that experience also. In practice, avoiding shared state over the kind of problems automated parallelism is used for is very easy and comes naturally. I've used parallel streams only rarely, and actually never in a real shipping program I think, but when I did I was fully aware of what mutable state might be shared and it wasn't an issue.

The real problem with this kind of parallelism is that it's too coarse grained and even writing par or parallelStream is too much mental overhead, because you often can't easily predict when it'll be a win vs a loss. For instance you might write a program expecting the list of inputs to usually be around 100 items: probably not worth parallelising, so you ignore it or try it and discover the program got slower. Then one day a user runs it on 100 million items. The parallelism could have helped there, but there's no mechanism to decide whether to use it or not automatically, so in practice it wasn't used.

Automatic vectorisation attacked this problem from a different angle and did make some good progress over time. But that just creates a different problem - you really need the performance but apparently small changes can perturb the optimisations for unclear reasons, so there's an invisible performance cliff. The Java guys pushed auto-vectorisation for years but have now given up on it (sorta) and are exposing explicit SIMD APIs.


I mean that an imperative program spells out a particular order of operations and the compiler is forced to reverse-engineer the dependencies based on its (usually incomplete) knowledge of each step's side effects. When the potential side effects are unknown, such as for calls to shared library functions, system calls, or access to shared global data, or any call to a function outside the current compilation unit in the absence of link-time optimization, then it must preserve the original order even if that order is less than optimal.

The kind of reordering you see in imperative programs tends to be on the small scale, affecting only nearby primitive operations within a single thread. You don't generally see imperative compilers automatically farming out large sections of the program onto separate threads to be evaluated in parallel. That is something that only really becomes practical when you can be sure that the evaluation of one part won't affect any other part, i.e. in a language with referential transparency.


> Here's what baking a cake looks like in FP:

> * A cake is a hot cake that [...]

The difference between a functional programmer and an imperative programmer is an imperative programmer looks at that and says “yeah, great takedown of FP”, while a functional programmer says, “what’s with the unbounded recursion?”

But, more seriously, it's long been established that real programming benefits from use of both imperative and declarative (the latter including—but not limited to—functional) idioms, which is why mainstream imperative OO languages have for more than a decade been importing functional features at a mad clip, and why functional languages have historically either been impure (e.g., Lisp and ML and many of their descendants) or included embedded syntactic sugar for expressing imperative sequences using more conventionally imperative idioms (e.g., Haskell do-notation).

The difference is that embedding functional idioms in imperative languages often requires warnings about what you can and cannot do safely to data without causing chaos, while imperative embeddings in functional code have no such problems.


And then you actually try to write it in a functional language, and end up with something like:

    cake = map (cool . bake 30 175) . splitIntoPans $ mix [butter, sugar, walnuts]


I think partial application and pipe operators make this so very intuitive though:

    [butter, sugar, walnuts] |> mix() |> splitIntoPans(pans = 3) |> bake(time = 30, temp = 175) |> cool(time = 5)


We can improve the syntax further

    [butter, sugar, walnuts]
    mix()
    splitIntoPans(pans = 3)
    bake(time = 30, temp = 175)
    cool(time = 5)
Hmm, wait a second.....


    [butter, sugar, walnuts]
    ^^^
     Somewhere wanted type CakeIngredients but missing record field "Flour"

If imperative-style programming came with type inference on the level of the OCaml compiler, sign me up. For now, though, I can spare a few cycles in exchange for correct programs.


Careful, somewhere along that line you might even come to the conclusion that Haskell is the world's most advanced imperative language, with the reprogrammable semicolons and whatnot.
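
The "reprogrammable semicolon" is (>>=): each monad decides what happens between two do "statements". A small self-contained sketch with Maybe, where a failed step short-circuits the rest:

    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)

    calc :: Maybe Int
    calc = do
        a <- safeDiv 10 2   -- Just 5
        b <- safeDiv a 0    -- Nothing: this "semicolon" aborts the rest
        return (a + b)      -- never reached

    main :: IO ()
    main = print calc       -- Nothing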



But this doesn't handle the state. It is not working imperative code.


If you want to bake a cake, FP like this could seem awkward.

But what if you want to run a bakery and split the work across multiple cooks? In that case it helps to have clearly defined ingredients.

I'm only trying to say that it all depends on the context. Obviously personal preference is a big factor too.


But now that you've written the cake-baking data type, with a small tweak you've got a bread-baking data type.


I'd find it more intuitive to do both as an imperative series of steps.

Some of my friends are in love with FP. I am not. I've done more FP than most, I can work with it, but my brain has never become in tune with it. I can bang out my intent as imperative code in real time, but with FP I have to stop and think to translate.

FP also means that I can't always easily tell the runtime complexity of what I'm writing and there's a complex black box between my code and the metal.

Maybe some of my friends' brains are superior and can think in FP, all the more power to them. But the empirical evidence is that most people are not capable of that, so FP will probably forever remain in the shadow of imperative programming.


Do you think of types and transformations between types when you write imperative code?

I mean, usually the problem in FP is that you simply can't type mutation (you'd have to use dependent types and so on). Okay, so use immutability, great, but then every "step" is just some franken-type-partial-whatever. And TypeScript has great support for these (first of all it infers a lot, but you can use nice type combinators to safeguard that you get what you wanted).

I don't like pure FP exactly because of this, because many times you have to use some very complicated constellation of concepts to be able to represent a specific data flow / computation / transformation / data structure. Whereas in TS / Scala you just have a nice escape hatch.


Haha, that sounds like the C++ inheritance joke.


True, but what if you never wanted bread?


I'd rather have a baking class that takes an argument for what I want to bake, either bread or cake, and spares me the details of how baking is done. I don't have to know that a preheated oven is one that is at 175 degrees, etc.


And when your oven has a problem with its heating element you'll have no idea why your cake didn't turn out well. We're supposed to be engineers, right? Learning how things work is good.


My comment was supposed to be a joke about the vernacular in which OO tends to get presented.


But then your cake might easily burn.


Baking a cake is like being a compiler and a processor for recipe instructions. Of course it seems awkward from the perspective of a human baker because before you can process/bake you have to "compile" the expression to procedural steps. The computer does that without complaint.

This may illustrate that humans aren't good compilers of functional code, or in particular that humans aren't good at parsing poorly formatted functional code (again, computer parsers don't care about formatting). But I don't think it indicates that functional code isn't good for reading and writing, even for the same humans.

I also don't think this recipe resembles FP. Where are the functions and their arguments? There is no visible hierarchy. It is unnecessarily obtuse in the first place.


Hadley Wickham uses the same cake-baking example to explain functional programming in R. A good presentation.

https://speakerdeck.com/hadley/the-joy-of-functional-program...


You should read the OOP version of "for want of a nail" proverb near the end of this post (http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom...).


> In any case the point is this: I had some straight imperative code that was doing the same thing several times. In order to make it generic I couldn’t just introduce a loop around the repeated code, but I had to completely change the control flow. There is too much puzzle solving here. In fact I didn’t solve this the first time I tried. In my first attempt I ended up with something far too complicated and then just left the code in the original form. Only after coming back to the problem a few days later did I come up with the simple solution above.

There are two kinds of people, I guess. To me, this description simply encapsulates the process of being a programmer. Boo hoo, you had to think a little bit and come back later to a hard problem in order to figure it out.

I'm sorry, but that's literally how every profession which requires engineering skills plays out. And like other professions, after you solve a problem once you don't have to solve the problem again. It's solved. The next template Gabriel writes in that flavor will not take nearly as long.

Seriously, all of these points he raises against FP are entirely contrived, and come across as the meaningless complaining of an uninspired programmer.


"It doesn't fit the way I think" != "I'm too stupid or lazy to figure it out".

And why should s/he do so? Between the language and the programmer, which one is the tool? Should not the tool fit the human, and not the other way around?

FP fits the way some people think. It doesn't fit the way others think. And that's fine. It's not a defect that some people think that way, and it's not a defect that some people don't.


I think the whole conversation is silly; FP is another tool in my toolbox. Yes, with some effort I can accomplish most jobs with a crowbar, but why would I do that?


To the second question, when you work in the industry you realize the answer is often the programmer.

Edit: There were a lot of questions in that comment.


I agree that it often works out that way... but it shouldn't.


Never seen that before, thanks! It's very funny.

I can't write Lisp to save my life, but I know roughly how you're supposed to do it.



