
Richard Gabriel’s famous essay “Worse is better” (https://www.jwz.org/doc/worse-is-better.html) is an interesting perspective on why Lisp lost to C. In a way, the same arguments (simplicity vs consistency vs correctness vs completeness) can be made for why functional programming lost to OOP.

But those philosophical perspectives aside, personally I find my brain works very much like a Turing Machine when dealing with complex problems. Apart from my code, even most of my todos are simple step-by-step instructions to achieve something. It's easy to understand why other non-math folks like me would prefer a Turing Machine over Lambda Calculus' way of writing instructions.

This could be why OOP/Imperative was often preferred over FP.




> personally I find my brain works very much like a Turing Machine

Exactly this. What baking a cake looks like in FP:

* A cake is a hot cake that has been cooled on a damp tea towel, where a hot cake is a prepared cake that has been baked in a preheated oven for 30 minutes.

* A preheated oven is an oven that has been heated to 175 degrees C.

* A prepared cake is batter that has been poured into prepared pans, where batter is mixture that has chopped walnuts stirred in. Where mixture is butter, white sugar and brown sugar that has been creamed in a large bowl until light and fluffy

Taken from here: https://probablydance.com/2016/02/27/functional-programming-...


I actually don't know of any functional programming languages that don't have syntactic and semantic support for writing step-by-step algorithms.


Could you elaborate on this a bit? Basically, calling one function from another is how a step-by-step algorithm would work in FP, no? And pattern match on what comes in, and return an immutable copy.

For example you can put functions in a list, and push a data structure through them, like a pipeline.
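
For instance, a minimal Haskell sketch of the idea (pipeline/runPipeline are made-up names for illustration):

    -- a list of functions, applied left to right like a pipeline
    pipeline :: [Int -> Int]
    pipeline = [(+ 1), (* 2), subtract 3]

    runPipeline :: [a -> a] -> a -> a
    runPipeline fs x = foldl (flip ($)) x fs

    main :: IO ()
    main = print (runPipeline pipeline 10)  -- ((10 + 1) * 2) - 3 == 19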

edit: https://probablydance.com/2016/02/27/functional-programming-...


The control structure that takes the different functions/values and glues them together is what makes your code imperative or descriptive. While there is a lot of overlap between the descriptive style and FP, it is not always the case.

In Haskell, for instance, the do notation lets you write imperative code:

    f article = do
      x <- getPrices article
      y <- book article
      finishDeal article x y
...and then the compiler desugars it to a more descriptive form.
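
Roughly, the desugared form chains the steps with (>>=) instead of line breaks (a sketch, ignoring the exact rules the compiler applies):

    f article =
      getPrices article >>= \x ->
      book article >>= \y ->
      finishDeal article x y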


In fairness, we could be in the List monad here, and this would effectively be a list comprehension rather than an imperative program. Even if we are in IO, `getPrices` and `book` may never execute --- even `finishDeal` may never execute! --- depending on non-local details that aren't shown here.
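
For example, the exact same do-notation shape in the List monad is a comprehension in disguise (a small sketch):

    pairs :: [(Int, Char)]
    pairs = do
      x <- [1, 2]
      y <- "ab"
      return (x, y)
    -- pairs == [(1,'a'),(1,'b'),(2,'a'),(2,'b')]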

The code certainly "looks imperative" but it's still a declarative program --- the semantics are rather different from what a typical "imperative programmer" would expect.


You miscounted the number of negatives in the comment you replied to.


The same can be said about imperative languages supporting FP concepts: they have it, but it's just not the same.


Okay, so first of all this is an excellent joke. But it's not that great of an analogy.

This quote chooses one of many FP syntaxes. It's cherry picking. It uses "a = b where c = d." That's equivalent to "let c = d in a = b." Let will allow you to write things like:

    let
        cake_ingredients = [butter, white sugar, brown sugar]
        batter = cream(ingredients=cake_ingredients,
                       dish=large_bowl,
                       condition=LIGHT_AND_FLUFFY)
        prepped_pans = pans_full_of(batter)
        oven = preheat(your_oven, 175 C)
        cake = bake(prepped_pans, 30 minutes)
    in
        dessert_tonight = cooled(cake)
This isn't where FP and imperative are different.

What's really different is that the let statement doesn't define execution order. That's not so relevant to this part of the mental modeling though.

I think it's great that I can choose between "let ... in ..." or "... where ...". In real life, for a complex bit of language, I happen to often like putting the main point at the top (like a thesis statement), then progressively giving more details. Mix and match however's clear.


Perhaps it's the analogy leaking, but in baking, order of operations matters, and some operations must be done in parallel (pre-heating, based on initial oven state) to produce a good end product.


Yes, and this is one of the areas where functional programming really shines. An imperative program is defined as a series of ordered steps and the compiler can't (in general) reorder steps to optimize use of resources because the steps could have arbitrary side-effects.[1] The FP version is essentially a dependency graph which constrains the order of operations without mandating a specific final order. The pre-heated oven is needed for baking but not for the batter, so these parts can automatically be evaluated in parallel just by enabling the multithreaded runtime.[2]

[1] Certain primitive operations can be reordered but that depends on the compiler having access to the entire program. A call to a shared library function is an effective optimization barrier for any access to non-local data due to potential side effects.

[2] For the purpose of this example I'm assuming the unused `oven` variable was meant to be passed in to the `bake` function.


> the compiler can't (in general) reorder steps to optimize use of resources

I'm not sure what you mean by that, because compilers reorder instructions to improve performance all the time (and CPUs do it dynamically too).


Compilers and CPUs only reorder over tiny instruction windows. He's talking about re-orderings over enormous windows, in a way that requires whole program analysis.

But that doesn't really happen in reality. FP languages promised auto-parallelisation for decades and never delivered. Plus you can get it in imperative languages too - like with Java's parallel streams. But I never see a parallel stream in real use.


It's not completely automatic but it is fairly close. If you enable the threaded runtime then "sparks" will be evaluated in parallel. You do have to say which expressions you want evaluated as separate "sparks" with the `par` operator but that's it—the runtime manages the threads, and common sub-expressions shared by multiple sparks will typically be evaluated only once. There are no race conditions or other typical concurrency issues to worry about since the language guarantees the absence of side effects. (That is the biggest difference between this and Java's parallel streams: if the reduction operation isn't a pure function then the result is undefined, and there isn't anything at the language level in Java to enforce this requirement.)

EDIT: An example of effective parallelism in Haskell:

    import Control.Parallel (par)

    fib n
       | n < 2   = 1
       | n >= 15 = b `par` a `seq` a + b
       | True    = a + b
       where a = fib (n-2); b = fib (n-1)

    main = print $ map fib [0..39]
Note that the implementation of `fib` has been deliberately pessimized to simulate an expensive computation. The only difference from the non-parallel version is the use of `par` and `seq` to hint that the two operands should be evaluated in parallel when n >= 15. These hints cannot change the result, only the evaluation strategy. Compile and link with "-threaded -with-rtsopts=-N" and this will automatically take advantage of multiple cores. (1C=9.9s elapsed; 2C=5.4s; 3C=4s; 4C=3.5s)


Yeah, I know how it works, and the level of automation is the same in all modern languages - as you note, Java's equivalent of "par" is writing ".parallelStream()" instead of ".stream()" so no real difference, beyond the language enforced immutability.

But it doesn't actually matter. How often is parallelStream used in reality? Basically never. I would find the arguments of FP developers convincing if I was constantly encountering stories of people who really wanted to use parallelStream but kept encountering bugs where they made a thinko and accidentally mutated shared state until they gave up in frustration and just went back to the old ways. I'd find it convincing if I'd had that experience also. In practice, avoiding shared state over the kind of problems automated parallelism is used for is very easy and comes naturally. I've used parallel streams only rarely, and actually never in a real shipping program I think, but when I did I was fully aware of what mutable state might be shared and it wasn't an issue.

The real problem with this kind of parallelism is that it's too coarse grained and even writing par or parallelStream is too much mental overhead, because you often can't easily predict when it'll be a win vs a loss. For instance you might write a program expecting the list of inputs to usually be around 100 items: probably not worth parallelising, so you ignore it or try it and discover the program got slower. Then one day a user runs it on 100 million items. The parallelism could have helped there, but there's no mechanism to decide whether to use it or not automatically, so in practice it wasn't used.

Automatic vectorisation attacked this problem from a different angle and did make some good progress over time. But that just creates a different problem - you really need the performance but apparently small changes can perturb the optimisations for unclear reasons, so there's an invisible performance cliff. The Java guys pushed auto-vectorisation for years but have now given up on it (sorta) and are exposing explicit SIMD APIs.


I mean that an imperative program spells out a particular order of operations and the compiler is forced to reverse-engineer the dependencies based on its (usually incomplete) knowledge of each step's side effects. When the potential side effects are unknown, such as for calls to shared library functions, system calls, or access to shared global data, or any call to a function outside the current compilation unit in the absence of link-time optimization, then it must preserve the original order even if that order is less than optimal.

The kind of reordering you see in imperative programs tends to be on the small scale, affecting only nearby primitive operations within a single thread. You don't generally see imperative compilers automatically farming out large sections of the program onto separate threads to be evaluated in parallel. That is something that only really becomes practical when you can be sure that the evaluation of one part won't affect any other part, i.e. in a language with referential transparency.


> What baking a cake looks like in FP:

> * A cake is a hot cake that [...]

The difference between a functional programmer and an imperative programmer is an imperative programmer looks at that and says “yeah, great takedown of FP”, while a functional programmer says, “what’s with the unbounded recursion?”

But, more seriously, it's long been established that real programming benefits from the use of both imperative and declarative (the latter including—but not limited to—functional) idioms, which is why mainstream imperative OO languages have for more than a decade been importing functional features at a mad clip, and why functional languages have historically either been impure (e.g., Lisp and ML and many of their descendants) or included embedded syntactic sugar for expressing imperative sequences using more conventionally imperative idioms (e.g., Haskell do-notation).

The difference is that embedding functional idioms in imperative languages often requires warnings about what you can and cannot do safely to data without causing chaos, while imperative embeddings in functional code have no such problems.


And then you actually try to write it in a functional language, and end up with something like:

    cake = map (cool . bake 30 175) . splitIntoPans $ mix [ butter, sugar, walnuts ]


I think partial application and pipe operators make this so very intuitive though:

    [butter, sugar, walnuts] |> mix() |> splitIntoPans(pans = 3) |> bake(time = 30, temp = 175) |> cool(time = 5)


We can improve the syntax further

    [butter, sugar, walnuts]
    mix()
    splitIntoPans(pans = 3)
    bake(time = 30, temp = 175)
    cool(time = 5)
Hmm, wait a second.....


    [butter, sugar, walnuts]
    ^^^
     Somewhere wanted type CakeIngredients but missing record field "Flour"

If imperative style programming came with type inference on the level of the OCaml compiler, sign me up. For now, though, I can spare a few cycles in exchange for correct programs.


Careful, somewhere along that line you might even come to the conclusion that Haskell is the world's most advanced imperative language, with the reprogrammable semicolons and whatnot.
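
A sketch of one such reprogrammed semicolon: in the Maybe monad, the sequencing between steps short-circuits on the first failure (safeDiv is a made-up example):

    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)

    calc :: Maybe Int
    calc = do
      a <- safeDiv 10 2   -- Just 5
      b <- safeDiv a 0    -- Nothing, so the "semicolon" aborts here
      return (a + b)      -- never reached

    main :: IO ()
    main = print calc     -- Nothing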



But this doesn't handle the state. It is not working imperative code.


If you want to bake a cake, FP like this could seem awkward.

But what if you want to run a bakery and split the work across multiple cooks? In that case it helps to have clearly defined ingredients.

I'm only trying to say that it all depends on the context. Obviously personal preference is a big factor too.


but now that you've written the cake-baking data type, with a small tweak you've got a bread-baking data type.


I'd find it more intuitive to do both as an imperative series of steps.

Some of my friends are in love with FP. I am not. I've done more FP than most, I can work with it, but my brain has never become in tune with it. I can bang out my intent as imperative code in real time, but with FP I have to stop and think to translate.

FP also means that I can't always easily tell the runtime complexity of what I'm writing and there's a complex black box between my code and the metal.

Maybe some of my friends' brains are superior and can think in FP; all the more power to them. But the empirical evidence is that most people are not capable of that, so FP will probably forever remain in the shadow of imperative programming.


Do you think of types and transformations between types when you write imperative code?

I mean usually the problem in FP is that you simply can't type mutation (you'd have to use dependent types and so on), okay, so use immutability, great, but then every "step" is just some franken-type-partial-whatever. And TypeScript has great support for these (first of all it infers a lot, but you can use nice type combinators to safeguard that you get what you wanted).

I don't like pure FP exactly because of this, because many times you have to use some very complicated constellation of concepts to be able to represent a specific data flow / computation / transformation / data structure. Whereas in TS / Scala you just have a nice escape hatch.


Haha, that sounds like the C++ inheritance joke.


True, but what if you never wanted bread?


I'd rather have a baking class that takes an argument for what I want to bake, either bread or cake, and spares me the details of how baking is done. I don't have to know that a preheated oven is one that is at 175 degrees, etc.


And when your oven has a problem with its heating element, you'll have no idea why your cake didn't turn out well. We're supposed to be engineers, right? Learning how things work is good.


My comment was supposed to be a joke about the vernacular in which OO tends to get presented.


But then your cake might easily burn.


Baking a cake is like being a compiler and a processor for recipe instructions. Of course it seems awkward from the perspective of a human baker because before you can process/bake you have to "compile" the expression to procedural steps. The computer does that without complaint.

This may illustrate that humans aren't good compilers of functional code, or in particular that humans aren't good at parsing poorly formatted functional code (again, computer parsers don't care about formatting). But I don't think it indicates that functional code isn't good for reading and writing, even for the same humans.

I also don't think this recipe resembles FP. Where are the functions and their arguments? There is no visible hierarchy. It is unnecessarily obtuse in the first place.


Hadley Wickham uses the same example of baking a cake to explain functional programming in R. A good presentation.

https://speakerdeck.com/hadley/the-joy-of-functional-program...


You should read the OOP version of "for want of a nail" proverb near the end of this post (http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom...).


> In any case the point is this: I had some straight imperative code that was doing the same thing several times. In order to make it generic I couldn’t just introduce a loop around the repeated code, but I had to completely change the control flow. There is too much puzzle solving here. In fact I didn’t solve this the first time I tried. In my first attempt I ended up with something far too complicated and then just left the code in the original form. Only after coming back to the problem a few days later did I come up with the simple solution above.

There are two kinds of people, I guess. To me, this description simply encapsulates the process of being a programmer. Boo hoo, you had to think a little bit and come back later to a hard problem in order to figure it out.

I'm sorry, but that's literally how every profession which requires engineering skills plays out. And like other professions, after you solve a problem once you don't have to solve the problem again. It's solved. The next template Gabriel writes in that flavor will not take nearly as long.

Seriously, all of these points he raises against FP are entirely contrived, and come across as the meaningless complaining of an uninspired programmer.


"It doesn't fit the way I think" != "I'm too stupid or lazy to figure it out".

And why should s/he do so? Between the language and the programmer, which one is the tool? Should not the tool fit the human, and not the other way around?

FP fits the way some people think. It doesn't fit the way others think. And that's fine. It's not a defect that some people think that way, and it's not a defect that some people don't.


I think the whole conversation is silly; FP is another tool in my toolbox. Yes, with some effort I can accomplish most jobs with a crowbar, but why would I do that?


To the second question, when you work in the industry you realize the answer is often the programmer.

Edit: There were a lot of questions in that comment.


I agree that it often works out that way... but it shouldn't.


Never seen that before, thanks! It's very funny.

I can't write Lisp to save my life, but I know roughly how you're supposed to do it.


Even in maths, I find a solution in terms of the problem easier to understand than one in terms of the previous step.

Even when the recursive form is a more natural representation, like arithmetic sequences: start at s, increase by d with each step:

  a(0) = s, a(n) = a(n-1)+d

  a(n) = s + n*d
The analytical form seems simpler, neater, more "right" and more efficient to me - even though, if you want the whole sequence, the recursive form is more efficient (given tail-call optimisation).

I suspect I'm just not smart enough.

fp can be much shorter, and the execution model isn't actually hidden, just unfamiliar (and unintuitive and unnatural - for me). Consider: all suffixes of a list. In jq:

  while( length>0; .[1:] )


>I suspect I'm just not smart enough

Nah, I have a PhD in math and I agree with you completely. Imperative is way better. And most mathematicians agree with me. You can see this by cracking open any actual math or logic journal and looking how they write pseudocode (yes, pseudocode: where things like performance don't matter one tiny little bit). You'll see they're almost entirely imperative. Sometimes they even use GOTO!


Agreed. I arrived at programming through math (B.S. in Mathematics) and have no love for FP. At the end of the day all software (except hobby projects) is mostly about maintaining it. FP adds unnecessary complexity, abstraction and obfuscations. None of those qualities help code maintenance.


Is this view of FP based on actual experience maintaining a non-trivial program written in an FP language? In my experience, FP doesn’t necessarily add a lot of unnecessary complexity. Sure, languages like Haskell are perhaps initially a bit more abstract when learning them, but once you know the basics, you can write pretty straightforward code in it. You can also do crazier things, but there is no need for that in most software.

Keeping a functional style, regardless of the language (although FP languages lend themselves better to this) can help in keeping code more decoupled, since you have to be explicit about side effects.

I think that both FP and imperative languages have places where they shine, and I freely switch between them depending on the project. Given how much some imperative languages have recently borrowed from FP languages, I think that this shows that functional programming has some significant merits.


>You can also do crazier things, but there is no need for that in most software.

For dev teams of sufficiently large size, a general principle is: whatever crazy things the language allows, someone is going to do and commit into the codebase.


Lately I’ve been thinking that a lot of code style debates center around an explicit versus implicit axis. Imperative is more explicit, and, in one sense, easier to see what’s going on since it lays everything out step by step. On the other hand, those same steps are a mix of essential steps (that deal with the problem being solved) and accidental steps (that deal with computer and code in order to get the job done.)

It seems to me that OOP, Functional, and Relational programming models try to abstract away the accidental steps, but like all abstractions there are limitations.

I suspect that once familiar with one of these models, imperative seems awfully tedious, however now the code is more obscure to those not well versed in the paradigm, thus we have a trade off between ease of use for many and optimal for some.


Implicit also means a tradeoff in estimating performance, an instance of an abstraction leaking.

I've been trying to think of a totally clean functional abstraction, i.e. that's functional under the hood, but there's no way to tell. Perhaps in a compiler?


Absolutely, explicit vs implicit is part of imperative vs. functional. And doing more with less information is elegant - and has a deeper significance in terms of Occam's Razor, that simplicity tends to be closer to the truth and therefore generalizes better. And, like pg's take, shorter code means less code to write, to read, to modify.

There can be leakage, when the given model is not perfectly accurate, and you need the true implementation details (this also happens for imperative code - it can be very helpful to have source of libraries) - in debugging, in performance, in working out how to do things.

But I feel a general issue is that it might not be a good fit for the human code processing system... Our interactions in the real world are more like imperative programming - not just familiarity, but how we evolved. This issue is similar to how quantum physics and relativity aren't a good match to the human physics system, which seems to be the mechanical/contact theory. To convert things to recursion is like working out an inductive proof - you can do it, but it is harder and more work than just getting it done in the first place.

A specific issue about this is that functional recursion is based on the previous step, whereas imperative code is usually based on the starting step. Like, build a new list at each recursion vs. indices into the input list. The latter is easier because it's always the same thing being indexed, instead of changing with each recursion.


This doesn't look to me like the difference between functional and imperative so much as the difference between recursion / iteration and map / list comprehension.
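
In Haskell terms, roughly the difference between these two toy versions:

    -- explicit recursion, step by step
    doubleAll :: [Int] -> [Int]
    doubleAll [] = []
    doubleAll (x : xs) = 2 * x : doubleAll xs

    -- the same thing as a map
    doubleAll' :: [Int] -> [Int]
    doubleAll' = map (2 *)

    main :: IO ()
    main = print (doubleAll [1, 2, 3], doubleAll' [1, 2, 3])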


You may need to exercise some charity here.

I've been trying to see why fp isn't intuitive for me.

I suspect it's like a second (human) language acquired as an adult: only those with a talent for language (maybe 5%?) can become fluent with practice.

Regarding my first example, I see recursion (or induction) as the essence of fp; and the recurrence form of arithmetic sequences is the simplest recursion I've seen used in mathematics.

The explicit form in that example is harder to justify as "imperative". But a commonality of imperative style is referring to the original input, rather than a previous step (see the first line of my above comment). This isn't the literal meaning of "imperative", but may be a key distinction between fp and ip style - the one that causes the intuitive/fluency issue for me.

To illustrate using my third (jq) example of suffixes, here's an "imperative" version, in py-like psuedocode:

  for i = 1 to length
    # a suffix
    for j = i to length
      print a[j]
    print \n
This is so much longer than the jq (though shorter if I'd used a[i:]), but it is how I understand the problem, at first and most easily.

It always refers to the starting input of the problem, not the previous step, and this might be why it's easier for me.

I'm interested in your comment - could you elaborate please? There's a few ways to relate your comment to different parts of mine, and I'm not sure which one was intended.


Well I agree with you that that kind of recurrence (which mathematicians love to use so much, as do some functional programmers who're overly influenced by math) is not very intuitive and frankly is a programming anti-pattern in my view.

But I disagree with you that recursion is the essence of fp. For your concrete example, a more functional version of doing that (in Python) would be something like:

  print("\n".join(a[i:] for i in range(len(a)))
No need to reuse f(i-1) when you can express f(i) directly.

Reusing the previous step (whether via recursion, carrying intermediate computations in the form of local variables in a loop, or through a fold) should only be done when absolutely necessary.


> [recurrence] is not very intuitive and frankly is a programming anti-pattern in my view. [...] Reusing the previous step ... should only be done when absolutely necessary

Thanks, that's my main concern (fp was just an example). Would you agree the reason it is bad is because there is more to track mentally in the execution model (i.e. the intermediate results)?

I think a complex execution model is problematic in general (it sounds obvious when I say it that way).

> which mathematicians love to use so much,

hmm... I was thinking "induction", and believed that fp is the same.

> But I disagree with you that recursion is the essence of fp

This is BTW now, but that statement surprises me. Can you elaborate? What is the essence of fp (has it one)?

Is your py version "more functional"? I'm so wedded to the idea that fp=recursion that that's the reason it doesn't seem functional to me. What makes it functional? Just that it's a nested expression (i.e. fn calls)?


Well I guess you could say the essence of FP is working recursively without (mostly) thinking about it, and not having to deal with the sort of control flow necessary for either loops or the kind of self-administered recursion you seem to be thinking of.

The .join() taking the iterator in their example is, if you look closer, very much a fold/reduce, repeatedly invoking a join of the thus-far assembled string, the next part, and \n. Recursion!

Also, rather than mutable i/j variables being incremented (albeit implicitly so in your example), it generates a list of all the numbers on which to run.
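
To make that fold explicit, here's a rough Haskell equivalent of the Python .join (joinWith is a made-up name):

    import Data.List (foldl')

    -- join as a fold: repeatedly glue the accumulated string,
    -- the separator, and the next piece
    joinWith :: String -> [String] -> String
    joinWith _ [] = ""
    joinWith sep (x : xs) = foldl' (\acc s -> acc ++ sep ++ s) x xs

    main :: IO ()
    main = putStrLn (joinWith "\n" ["abc", "bc", "c"])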


In an old textbook regarding the Entscheidungsproblem that I haven't been able to find again (I was browsing in another university's library), I read that Church wrote to Turing saying he thought the Turing Machine was a more convincing/natural/intuitive representation of how mathematicians thought about algorithms than his own lambda calculus.

Maybe he was just being modest, or like John McCarthy, just didn't see or believe its potential.

Note that this was before computers or programming, and that there's no formal proof that a Turing machine can encode any computation - so its convincingness was important.


This is correct. Everyone I've met that insisted that functional programming is superior to imperative has been a big time math/CS nerd, the kind that goes to grad school and was confused when the iPad launched because hey it does nothing that a laptop doesn't already do!

My experience doing functional programming is that it hurt my brain; it just doesn't map as cleanly to how I think of things happening compared to imperative programming. It was just really painful to code in, and most of my classmates had the same opinion.


It’s mostly a matter of practice. I think that many people’s experience of functional programming is a (potentially poorly-taught) university course, during which there is not really enough time to really become comfortable with functional programming. Maybe it’s true that the learning curve is a bit (a lot?) steeper, though. But once you are comfortable with it, it’s not significantly more difficult than writing code in Java or Python. I also think that it’s worth learning even just for the sake of becoming a better programmer. It teaches you to think in a different way, which also translates to programming in imperative languages.


Beware: JWZ doesn't like people visiting his website from HN.


The fact that he took the time to do that shows who the real man-child is.


The fact that it's the only site I've seen that demonstrates the ability to read HTTP referral headers from hacker news shows who the real hacker is...


Not sure if serious, but looking at referral headers is commonplace and trivial


Please. The "real hackers" are proxying their requests and sending custom headers to begin with.


That, or he hates the HN hug of death.


Nah, just having a problem with the hug of death would be an explanation for redirecting to a polite static message saying "sorry, my site can't handle the load when HN links to it". What he has done instead is excessively dickish.


What has he done? Everyone’s commenting he doesn’t like HN but when I clicked the link everything looks fine. Serious question.


It redirects to an image with a hairy testicle and gives a low opinion of HN readers: https://web.archive.org/web/20191014203443/https://www.jwz.o...


You must be using Brave or a browser plugin which doesn't send referral headers. If you use a normal browser, it displays a testicle in an egg cup with a silly phrase complaining about the demographic of HN users.


I open everything for which I don't need to be logged in, in an incognito window, and this page worked fine.


An incognito window doesn't quite count as "if you use a normal browser". Unless your not using incognito is the unusual case for you, which it isn't for most users.

Given a choice between changing my browsing behaviour to see his content or just blocking it so it (the testicle redirect or the other content) will never bother my vision again, I go for the latter option.


I’m using iOS safari with AdGuard. It’s probably AdGuard.


Ah yes, I didn't remember at first why that domain was added to my hosts blacklist.


Personally, my thinking changes from Turing Machine to more math-like with each year I do functional programming.


Lisp lost in a much more profound way recently, and it's very rare to see anyone mention it, especially on the Lisp side of the conversation.

Over the last 10 years or so, we have come to the painful conclusion that mixing data and code is a very, very bad idea. It can't be done safely. We are putting functionality into processors and operating systems to expressly disallow this behavior.

If that conclusion is true, then Lisp is broken by design. There is no fixing it. It likes conflating the two, which means it can't be trusted.


> Apart from my code, even most of my todos are simple step-by-step instructions to achieve something.

> [...]

> This could be why OOP/Imperative was often preferred over FP.

Though this doesn't really explain why OOP is preferred over imperative (since the former doesn't really correspond to a set of step-by-step instructions).


The last non-OOP imperative language with any kind of market share is C. So everything that's terrible about C: unsafe manual memory management, portability issues, poor tooling, no generics or overloading, horrible ergonomics, second-class functions, globals everywhere, etc., is forever associated with imperative programming. OOP was born at the same time those problems were being fixed, so OOP languages were a big improvement in ways that had nothing to do with OOP. Now that all the top languages are multi-paradigm, only a purist would avoid OOP, and they'd have a tough time interacting with libraries and frameworks. So every codebase is a little wishy-washy and OOP wins by default. Imperative has no advocates left in industry or academia, so most people don't even think of it as a defensible design decision.


One language that was not on the presenter's list is SQL: very popular, but neither OO nor functional.

One thing a lot of programmers do is abstract SQL into OO style, even though SQL describes a relation that can be computed to a result, in some ways similar to a function. It seems that most prefer to look at it as having state, even though it doesn't.

Sure, the tables where the data is stored have state, but the sum of the tables is a relationship in time, and depending on how you look at it you get different results. It is very hard to map relationships to OO correctly.

It is probably easier for most people to think about the world as a set of things rather than a relation in time. Many of our natural languages are organized around things.


The link you shared now leads to this when clicked from HN: https://web.archive.org/web/20191014203443/https://www.jwz.o...

When copied and pasted into a new tab it leads to the article.


The link is NSFW.


OOP is nothing like a Turing Machine.



