Functional Programming for the Object-Oriented Programmer by Brian Marick (leanpub.com)
80 points by saturnflyer on June 24, 2012 | 28 comments



I've been looking for a more direct comparison of OO vs. FP with side-by-side examples. I think there's a lot of room for specific but concise examples demonstrating the two paradigms. I tried to do a little bit of this in a recent post:

http://news.ycombinator.com/item?id=4154484

Would be interested to hear of any other resources that do something similar -- that show specific OO code and a corresponding functional take on the same code. I find that a lot of these discussions avoid getting down to specifics in code.


The problem is that often there is no direct analogy. And even when there is one, the two often behave differently as far as modularity and extensibility are concerned.

OK let me be more specific (keep in mind that I am a little bit Haskell-minded right now so some of my examples apply to generic FP but some are more for Haskell):

As far as encapsulating state goes, you have objects on one side and closures on the other. Pretty straightforward, once you see it in practice.
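
For example, here's a minimal sketch (makeCounter is just a name I'm inventing for illustration): a "counter object" in Haskell can be nothing more than a closure over a mutable reference, where the IORef is the private state and the returned action is the only way to touch it:

    import Data.IORef

    -- the IORef is the encapsulated state; callers only ever
    -- see the IO Int action that the closure returns
    makeCounter :: IO (IO Int)
    makeCounter = do
      ref <- newIORef 0
      return (modifyIORef ref (+ 1) >> readIORef ref)

    -- usage: tick <- makeCounter; tick; tick  -- returns 2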

As far as control flow goes, the two are pretty much opposites. FP languages tend to use pattern matching (think switch statements, or the visitor pattern) on transparent types to do branching, while OO languages settled on message passing over opaque types. The two approaches are duals: pattern matching makes it easy to add new functions but harder to add new "cases", and it forces types to be transparent; OO dispatch, on the other hand, makes it easy to add new cases (classes) but harder to add new methods, and objects stay opaque and encapsulated. (For more on this, search for the "expression problem".)
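
To make that concrete, here is a tiny sketch (Shape is an invented example, not anything standard): the type is transparent, so adding a new function like perimeter is trivial, but adding a new constructor means editing every existing function:

    -- a transparent sum type: easy to add functions,
    -- hard to add cases (every function must be updated)
    data Shape = Circle Double | Rect Double Double

    area :: Shape -> Double
    area (Circle r) = pi * r * r
    area (Rect w h) = w * h

    -- a brand-new operation, added without touching Shape
    perimeter :: Shape -> Double
    perimeter (Circle r) = 2 * pi * r
    perimeter (Rect w h) = 2 * (w + h)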

As far as polymorphism goes, things tend to be pretty different. OO languages focus on subtyping, writing code that works seamlessly with a class and its subclasses. This is good for always being able to add new subclasses without breaking old code.

FP languages, on the other hand, tend to focus more on parametric polymorphism (generics, that is: writing code that doesn't care what types it's manipulating), and Haskell handles ad-hoc polymorphism with type classes. Type classes are kind of like OO interfaces, but they can be declared after the types that "implement" them already exist; what they do not let you do is turn previously monomorphic code into polymorphic code (so it's kind of the opposite trade-off from OO in this sense).
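
A rough sketch of that last point (Describable is an invented class, not something from the Prelude): the class and its instances can be added long after the types exist, but code that was already written at a monomorphic type stays monomorphic:

    -- a type class declared *after* the types it describes;
    -- instances can be added retroactively for existing types
    class Describable a where
      describe :: a -> String

    instance Describable Bool where
      describe True  = "yes"
      describe False = "no"

    instance Describable Int where
      describe n = "the number " ++ show n

    -- works for any type with a Describable instance
    announce :: Describable a => a -> String
    announce x = "behold: " ++ describe x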

----

Finally, if you are willing to go really academic on this, you can check out things like Cardelli's "A Theory of Objects". Computer scientists inevitably end up translating everything back into the functional calculi they are comfortable with, and this leads to formal FP-to-OO translations, if that is what you are looking for. Just to warn you, though: it turns out that some things, like support for inheritance and dynamic dispatch, seriously complicate the final models.


The threshold question: what is your definition of FP? Are you coming from Scheme, ML, Haskell, or maybe Scala or F#?

I could recommend a lot of things to read (though you've probably already read them) by language designers: Wadler, John Hughes, X. Leroy, Odersky, Bertrand Meyer, Rich Hickey. Also, the Channel 9 videos of Meijer, Bright, and Alexandrescu talking about D and Haskell were terrific.

And for books, you probably know about Pierce's TAPL, but Harper's PL book is rarely mentioned

http://www.cs.cmu.edu/~rwh/plbook/book.pdf

http://channel9.msdn.com/Blogs/Charles/Alexandrescu-Bright-M...

http://lambda.jimpryor.net/translating_between_OCaml_Scheme_...

and a debate about monads, since nobody ever debates about them

http://haskell.1045720.n5.nabble.com/Martin-Odersky-on-quot-...


I'm interested in how this turns out.

Note that:

   As of June 21, 2012, only the first chapter is complete.


I was going to purchase this but saw this fact and stopped myself. I get what the author is trying to do, but I'm not going to buy a book before it is completed. Especially with an unknown (to me) author.

It's like trying to sell me the Brooklyn bridge before it is even built.


This is actually advocated by the "lean" startup movement. I've read a couple of books that advocate putting "buy" buttons on content/applications that do not yet exist to gauge demand or test potential ideas. I get it, but it doesn't sit right with me.


How do you think the Brooklyn bridge got built?


I'm guessing the city built it with taxes. Probably without needing a public vote. I could be wrong though, I'm not from that area. Certainly not some guy on a virtual street corner asking for what amounts to donations.


To anyone who hasn't, I recommend downloading the sample PDF - the available first chapter is a decent size and a great read. I am looking forward to seeing the rest of it!

Here's an excerpt:

  I’ll start by teaching you just enough Clojure to be   
  dangerous. Then we’ll use that knowledge to embed an  
  object-oriented language within Clojure in a more-or-less  
  conventional way.
Now if that doesn't sound like an exciting start, I don't know what does.


Looks like a good overview of Clojure and functional programming. Anyone have a review?


I would be interested in a review as well. Coming from an OO background, I've been looking for books/articles that focus more on "here's how you'd do that using functional programming" instead of just exercises going over syntax.


i always thought that sicp (the book here http://mitpress.mit.edu/sicp/) was a pretty good intro to fp for oo people. in particular, chapter 3 http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-19.html... shows the similarity between closures and objects, which (apart from just generally becoming familiar with the ideas in fp) is the main "aha" in bridging between the two.

having said that, in my experience (perhaps i am just dumb, or old) fp takes a while to get used to. i first looked at haskell some 16 years ago (i can still remember reading the intro while staying in a b+b in edinburgh), before using sml and ocaml, but the first time it really felt natural to me was with clojure this year. i don't think that's much to do with clojure - it just took multiple attempts before things really stuck. perhaps the biggest help was python's slow drift (despite gvr) towards fp idioms.


If anything, functional programming gives you too many ways to achieve the same thing.

How many ways can a reasonable programmer write the factorial function imperatively (without using the closed-form formula or any dynamic programming)? A maximum of two: a for loop or recursion.

But just look at how many ways you could write factorial in a functional programming language. http://www.willamette.edu/~fruehr/haskell/evolution.html

It is truly unfortunate when your ability to program functionally is directly proportional to how much of the standard library you have memorised. This makes functional programming much harder to learn.

edit: I don't really care if you downvote. I come here to discuss programming.


This list is a joke, based on a similar list for imperative programmers writing a "hello world" program. There are certainly a number of ways to write something in either paradigm.

The number of ways you can write something is a horrible metric to evaluate languages on. Any sufficiently clever programmer can come up with a number of bad solutions. The useful imperative techniques are more obvious to you because you have more experience with that framework. Experienced functional programmers would just as quickly narrow their list of useful solutions.

In order to be productive in a functional style, you don't need to know much of the language's standard library. You simply write small functions that can be applied in a number of areas. You'll probably end up re-inventing the wheel though, as most functional languages come with most of the essential functional constructs included in the standard library. If anything, functional programming is more beginner friendly, because you don't end up dealing with mutable state and you can easily work on the level of abstraction that you're most comfortable with (you're just composing ever more advanced functions from simpler ones).
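
For example, a throwaway Haskell sketch (normalize is just a name I'm making up): three tiny, general functions composed into a more specific one, with no mutable state anywhere:

    import Data.Char (toLower)

    -- strip commas, lowercase, split into words;
    -- each piece is reusable on its own
    normalize :: String -> [String]
    normalize = words . map toLower . filter (/= ',')

    -- normalize "Hello, World" == ["hello","world"]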


> How many ways can a reasonable programmer write the factorial function imperatively

This page is a terrible example. Most of these are not what a "reasonable programmer" would do.

An unreasonable programmer would be able to screw up in any paradigm. You'd probably have even more ways to screw up like this in an OO language: creating classes for recursive functions, trampolines, a pattern-matching class, implementing a language interpreter inside your factorial implementation, etc.


> You'd probably have even more ways to screw up like this with an OO language

My example was about imperative languages. Your assertion about how many ways one could screw up in OO is irrelevant, and besides, you'd really need a good understanding of OO to even try implementing those.

I didn't mean to imply any of those methods were bad. I have revised my post so people can't miss the point.

That is

> It is unfortunate when your ability to program functionally is directly proportional to how much of the standard library you have memorised. This makes functional programming much harder to learn.

Now, addressing your concerns.

> This page is a terrible example. Most of these are not what a "reasonable programmer" would do.

All of the examples except for a few particularly egregious ones seem reasonable to me.

For example

  -- one, need n+k patterns but so what
  fac  0    =  1
  fac (n+1) = (n+1) * fac n

  -- two
  fac 0 = 1
  fac n = n * fac (n-1)

  -- three
  fac n = foldl (*) 1 [1..n]

  -- four
  facs = scanl (*) 1 [1..]
  fac n = facs !! n

  -- five
  fac = foldr (*) 1 . enumFromTo 1

  -- six
  fac n = product [1..n]
Six fairly distinctive ways, and I'm sure there are more ways to do the same thing that are just as elegant, if not more so.


After the first 2, the rest are based on common and very useful abstractions. You could translate those to an OO language quite easily too, though with a lot more code.

For example, what foldl does, from my understanding, is recursively apply a function to a starting value and the first element of a list, then do the same thing again, where the result of the previous call becomes the new starting value and the rest of the list becomes the list we're working on, until the list is empty. Not sure if I've explained this well, but I've not been into this long.

If you see the second function, that's what it's doing, except it's working just with numbers and not a list. Since foldl works on lists, you need an enumerator, which in this case is just 1 through to n.

The sixth example is great here. Product could easily be an abstraction on something quite similar to three, which itself could very well be an abstraction quite similar to two (using lists, functions to deal with lists and a given function).

Please correct me if I'm wrong at any point here. I'm looking to improve these skills quite a lot so it'd be very, very welcome.


You got the core, I will correct your edges.

The [1..n] notation is just sugar for enumFromTo 1 n; [1..3], for example, is the list 1:2:3:[]

Product is not actually an abstraction as it's restricted to work on number types.

And no, foldl is not an abstraction of (2), because foldl is tail recursive while (2) is not. foldr is not tail recursive, but combined with Haskell's laziness it can make short work of infinite lists. You are right, though, that any tail-recursive function on a list can be rewritten using foldl. What is particular to Haskell vs a strict language like, say, F# is that foldl can still blow the stack due to Haskell's laziness; you want foldl', which forces the accumulator at each step.

Fold is also a fundamental operation on any algebraic data structure, such as trees, lists, and natural numbers. You can write filter, map, filtermap, etc. in terms of it, for example. There is so much to say about fold - they are like the cupboard which leads to Narnia... but I will stop now so as not to turn this into an infodump.
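
One quick sketch anyway, since the map/filter claim is easy to show (primed names so they don't clash with the Prelude; foldl' lives in Data.List):

    import Data.List (foldl')

    -- map and filter written in terms of foldr
    map' :: (a -> b) -> [a] -> [b]
    map' f = foldr (\x acc -> f x : acc) []

    filter' :: (a -> Bool) -> [a] -> [a]
    filter' p = foldr (\x acc -> if p x then x : acc else acc) []

    -- the strict left fold mentioned above: it forces the
    -- accumulator at each step, so no thunks pile up
    fac' :: Integer -> Integer
    fac' n = foldl' (*) 1 [1 .. n]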


Oh wow, thanks! I should have also noted I have no experience with the language in the examples so please ignore any ignorance there (probably should have read up on it).

Interesting! All well noted and I'll look into this more.

I'm vaguely aware of fold's power and can definitely see how filter, map and the likes would be implemented in terms of it but I would be really interested in the infodump!

I'm just wondering, ignoring laziness and typing (or any other language specifics), whether foldl could be written similarly to (2)? Obviously it wouldn't be very good. I've written an example in Racket of what I have in mind:

  (define (foldl fn initial lst)
    (if (null? lst) 
        initial
        (fn (car lst) (foldl fn initial (cdr lst)))))

  (define (fac n)
    (if (= n 0)
        1
        (* n (fac (- n 1)))))

  (= (fac 3)
     (foldl * 1 '(1 2 3)))
Thanks again for the info, really appreciated. :)


Actually, what you have written is not foldl, it's foldr. Why is it going right to left? Let's expand your example:

    (foldl * 1 '(1 2 3))
    (* 1 (foldl * 1 '(2 3)))
    (* 1 (* 2 (foldl * 1 '(3))))
    (* 1 (* 2 (* 3 (foldl * 1 '()))))
    (* 1 (* 2 (* 3 1)))
    ...
    6
Basically, the calls to * start nesting into each other, so the one that actually gets evaluated first is the rightmost one.

What would actually be foldl is this:

    (define (foldl fn accum lst)
      (if (null? lst)
          accum
          (foldl fn (fn accum (car lst)) (cdr lst))))
Now if you expand this (each tail call simply replaces the previous one):

    (foldl * 1 '(1 2 3))
    (foldl * 1 '(2 3))
    (foldl * 2 '(3))
    (foldl * 6 '())
    6
And even though you might think that such a tiny thing can't be a good implementation, it actually is super efficient. The tail recursion is automatically optimized for you (and you can assume that for any functional language - it's a crucial optimization for this style of code, after all). That's what the core of it will be in an actual implementation, though with added error handling, type checking and so on.

P.S. Unlike Haskell, in Racket you don't really need to fold the basic operators; they take variable arguments, so you can just do:

    (apply * (list 1 2 3))
in order to get it to run on a list.


Ivo's answered, so I'll just say the main trick to making a function tail recursive is to have it lug its state around: always look to see if you can rewrite your function so it passes the current value explicitly to the next iteration. Non-tail recursion is like winding a spring and then having it unwind, a jack-in-the-box. Tail recursion is like a snowball rolling down a hill. You should always try to make a recursive function tail recursive if you can.
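
A small sketch of what "lugging the state around" looks like, using factorial again (go is just a local helper name):

    -- the accumulator carries the work done so far, so each
    -- recursive call is a plain jump instead of winding a spring
    facAcc :: Integer -> Integer
    facAcc n = go n 1
      where
        go 0 acc = acc
        go k acc = go (k - 1) (k * acc)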

As for fold, it's a bit involved and has a lot of scary-sounding jargon on the way to understanding it. You can write a fold for a tree and then code depth-first search in a couple of lines. Folds obey some basic laws that allow for program derivation; I rarely use those, but they do inform my programming - it's kind of like how knowing Lisp informs your Python. There is also an opposite, or dual, to fold called unfold. You can write many algorithms efficiently using just fold and unfold and a couple of techniques to prune inefficiencies. Fold is the same kind of object relative to your initial tree as, say, a matrix is to a vector, or a derivative is to a polynomial.
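
Roughly what the tree example looks like, for the curious (a sketch; Tree and foldTree are names invented here, not Data.Tree):

    -- a fold for a simple binary tree; an in-order,
    -- depth-first listing then falls out in one line
    data Tree a = Leaf | Node (Tree a) a (Tree a)

    foldTree :: (b -> a -> b -> b) -> b -> Tree a -> b
    foldTree _ z Leaf         = z
    foldTree f z (Node l x r) = f (foldTree f z l) x (foldTree f z r)

    dfs :: Tree a -> [a]
    dfs = foldTree (\ls x rs -> ls ++ [x] ++ rs) []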


Also ignoring tail recursion for the purpose of the above piece of code, haha.


Yes, functional programming does give you more options, and that's the point. Each style of programming -- functional, object-oriented, relational, imperative, etc. -- has its strengths and weaknesses. It's up to the programmer to choose which one best fits the problem at hand.

In some languages, like Java, you're stuck in the OO style of programming because its syntax and features make anything else too cumbersome. Clojure actually works the other way around: though the most common style is functional, it's easy to work in terms of objects. It might not look the same as Java OO, but that's for the better.

Here's a good book about switching between programming styles within a single program: http://www.amazon.com/Concepts-Techniques-Models-Computer-Pr...


Well, to nitpick, the first, fourth and fifth examples are just variations on the other ones, and they're (ab)using Haskell features - respectively n+k patterns, laziness and point-free style.

Those features aren't available in every functional language, and they aren't really used this indiscriminately in most Haskell code bases, so yeah, I maintain that this page is a terrible example (albeit a very educational and fun one).

Here's a Python version, showing that even on an imperative language there's a lot of ways of doing the same things:

https://gist.github.com/289467


One would still need to know the majority of the functions in the Prelude to write a useful program, and even more to write an elegant program, since they are abstracted as concepts.

Whereas in an imperative language, learning the syntax is often enough to get the ball rolling.


> My example was about imperative languages. Your assertion about how many ways one could screw up in OO is irrelevant, and besides, you'd really need a good understanding of OO to even try implementing those.

Aren't OOP languages also imperative - at least most of them (Java, C++, .NET, etc.)?

I thought that OO was mostly a "structural" paradigm, where you organize your code as objects and methods that apply to those data structures, and you get inheritance and things like that.

Imperative programming, on the other hand, is about programming with statements - no matter whether you are using gotos, routines, objects, etc., you just instruct the computer how to move from state to state.


I'm not qualified to judge the merits of the language, but as a manager I'd want something where the idioms are commonly understood. Writing new code is easy in almost any language. The real test is whether a new hire can rewrite it 12-18 months later when a bug or new requirement emerges.


BTW, since we are talking about Haskell, an interesting thing I've noticed is that I don't actually need to memorize the whole standard library. Whenever I have a thought like "man, I wish there were a function that got me a Bar from this list of Foos", I can just pass the "[Foo] -> Bar" type to Hoogle[1] and it will magically find the names of all the functions in the standard libraries that have compatible or similar type signatures.

[1] http://www.haskell.org/hoogle/



