Ask HN: Relationship between OO and functional programming?
194 points by spdionis on May 31, 2016 | 98 comments
Since Alan Kay has shown interest (https://news.ycombinator.com/item?id=11806853) in saying something on the topic of OO and functional programming, I opened this thread.

I purposefully left out the word "versus", both because I do not believe there is such a strong dichotomy and to avoid flamewars.




Already lots of good comments and ideas below. My first attempt here on this topic turned out too wordy -- and this attempt is also (I tried to explain things and where the ideas came from). A summary would be better (but Pascal was right). I'll try to stash the references and background somewhere (perhaps below later in a comment).

I had several stages about "objects". The first was the collision 50 years ago in my first weeks of (ARPA) grad school of my background in math, molecular biology, systems and programming, etc., with Sketchpad, Simula, and the proposed ARPAnet. This led to an observation that almost certainly wasn't original -- it was almost a tautology -- that since you could divide up a computer into virtual computers intercommunicating ad infinitum you would (a) retain full power of expression, and (b) always be able to model anything that could be modeled, and (c) be able to scale cosmically beyond existing ways to divide up computers. I loved this. Time sharing "processes" were already manifestations of such virtual machines but they lacked pragmatic universality because of their overheads (so find ways to get rid of the overheads ...)

Though you could model anything -- including data structures -- that was (to me) not even close to the point (it got you back into the soup). The big deal was encapsulation and messaging to provide loose couplings that would work under extreme scaling (in manners reminiscent of Biology and Ecology).

A second stage was to mix in "the Lisp world" of Lisp itself, McCarthy's ideas about robots and temporal logics, the AI work going on within ARPA (especially at MIT), and especially Carl Hewitt's PLANNER language. One idea was that objects could be like servers and could be goal-oriented with PLANNER-type goals as the interface language.

A third stage was a series of Smalltalks at Parc that attempted to find a pragmatic balance between what was inevitably needed in the future and what could be done on the Alto at Parc (with 128K bytes of memory, half of which was used for the display!). This was done in partnership with Dan Ingalls and other talented folks in our group. The idealist in me gritted my teeth, but the practical results were good.

A fourth stage (at Parc) was to deeply revisit the temporal logic and "world-line" ideas (more on this below).

A fifth stage was to seriously think about scaling again, and to look at e.g. Gelernter's Linda "coordination language" as an approach to do loose coupling via description matching in a general publish and describe manner. I still like this idea, and would like to see it advanced to the point where objects can actually "negotiate meaning" with each other.

McCarthy's Temporal Logic: "Real Functions in Time"

There's lots of context from the past that will help understanding the points of view presented here. I will refer to this and that in passing, and then try to provide a list of some of the references (I think of this as "basic CS knowledge" but much of it will likely be encountered for the first time here).

Most of my ways of thinking about all this ultimately trace their paths back to John McCarthy in the late 50s. John was an excellent mathematician and logician. He wanted to be able to do consistent reasoning himself -- and he wanted his programs and robots to be able to do the same. Robots were a key, because he wanted a robot to be in Philadelphia at one time and in New York at another. In an ordinary logic this is a problem. But John fixed it by adding an extra parameter to all "facts" that represented the "time frame" when a fact was true. This created a simple temporal logic, with a visualization of "collections of facts" as stacked "layers" of world-lines.

This can easily be generalized to world-lines of "variables", "data", "objects" etc. From the individual point of view "values" are replaced by "histories" of values, and from the system point of view the whole system is represented by its stable state at each time the system is between computations. Simula later used a weaker, but useful version of this.

I should also mention Christopher Strachey -- a great fan of Lisp and McCarthy -- who realized that many kinds of programming could be unified and also be made safer by always using "old" values (from the previous frame) to make new values, which are installed in the new frame. He realized this by looking at how clean "tail recursion" was in Lisp, and then saw that it could be written much more understandably as a kind of loop involving what looked like assignment statements, but in which the right hand side took values from time t and the variables assigned into existed in time t+1 (and only one such assignment could be made). This unified functional programming and "imperative like" programming via simulating time as well as state.
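
A minimal Scala sketch of this discipline (the names and the tiny example are invented for illustration, not Strachey's notation): every new value is computed only from the previous frame's values, each "variable" gets at most one new value per step, and the imperative-looking loop is just that purely functional transition applied repeatedly, yielding a history of frames.

    // Sketch: the "assignments" read only old values (time t) and together
    // produce the whole new frame (time t+1); nothing is updated in place.
    object FrameDemo extends App {
      final case class Frame(t: Int, count: Int, total: Int)

      // purely functional transition: new values from old values only
      def step(old: Frame): Frame =
        Frame(t = old.t + 1, count = old.count + 1, total = old.total + old.count)

      // an imperative-looking "loop" is iterated application of step, and
      // keeping the intermediate frames gives a history of versions
      val history = Iterator.iterate(Frame(0, 0, 0))(step).take(5).toVector
      history.foreach(println)
    }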

And let me just mention the programming language Lucid, by Ashcroft and Wadge, which extended many of Strachey's ideas ...

It's also worth looking at "atomic transactions" on data bases as a very similar idea with "coarse grain". Nothing ever gets smashed, instead things are organized so that new versions are created in a non-destructive way without race conditions. There is a history of versions.

The key notion here is that "time is a good idea" -- we want it, and we want to deal with it in safe and reasonable ways -- and most if not all of those ways can be purely functional transitions between sequences of stable world-line states.

The just computed stable state is very useful. It will never be changed again -- so it represents a "version" of the system simulation -- and it can be safely used as a source of values for the functional transitions to the next stable state. It can also be used as a source for creating visualizations of the world at that instant. The history can be used for debugging, undos, roll-backs, etc.

In this model -- again partly from McCarthy, Strachey, Simula, etc., -- "time doesn't exist between stable states": the "clock" only advances when each new state is completed. The CPU itself doesn't act as a clock as far as programs are concerned.

This gives rise to a very simple way to do deterministic relationships that has an intrinsic and clean model of time.

For a variety of reasons -- none of them very good -- this way of being safe lost out in the 60s in favor of allowing race conditions in imperative programming and then trying to protect against them using terrible semaphores, etc., which can lead to lock-ups.

I've mentioned a little about my sequence of thoughts about objects. At some point, anyone interested in messaging between objects who knew about Lisp would have to be drawn to "apply" and to notice that a kind of object (a lambda "thing", which could be a closure) was bound to parameters (which kind of looked like a message). This got deeper if one was aware of how Lisp 1.5 had been implemented with the possibility of late bound parameter evaluation -- FEXPRs rather than EXPRs -- the unevaluated expressions could be passed as parameters and evaled later. This allowed the ungainly "special forms" (like the conditional) to be dispensed with; they could be written as a kind of vanilla lazy function.
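
A rough Scala sketch of the "special forms as vanilla lazy functions" idea, with by-name parameters standing in for the late-bound (FEXPR-style) arguments; the names are invented:

    // Sketch: a conditional written as an ordinary function. The by-name
    // parameters delay evaluation, so only the chosen branch runs --
    // roughly the effect Lisp 1.5 FEXPRs gave to "special forms".
    object LazyIfDemo extends App {
      def myIf[A](cond: Boolean)(ifTrue: => A)(ifFalse: => A): A =
        if (cond) ifTrue else ifFalse

      def loud(s: String): String = { println("evaluating " + s); s }

      // prints "evaluating then-branch" only; the else branch is never touched
      println(myIf(1 < 2)(loud("then-branch"))(loud("else-branch")))
    }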

By using the temporal modeling mentioned above, one could loosen the "gears" of "eval-apply" and get functional relationships between temporal layers via safe messaging.

So, because I've always liked the "simulation perspective" on computing, I think of "objects" and "functions" as being complementary ideas and not at odds at all. (I have many other motivations on the side, including always wondering what a good language for children should be epistemologically ... but that's another story.)


> But John fixed it by adding an extra parameter to all "facts" that represented the "time frame" when a fact was true...From the individual point of view "values" are replaced by "histories" of values, and from the system point of view the whole system is represented by its stable state at each time the system is between computations.

This reminds me of the way Datomic stores data.


> many kinds of programming could be unified and also be made safer by always using "old" values (from the previous frame) to make new values, which are installed in the new frame

John Carmack came across this idea at some point - in 'Functional programming in C++' [1], he writes:

"When you start thinking about running, say, all the characters in a game world in parallel, it starts sinking in that the object-oriented approach of updating objects has some deep difficulties in parallel environments. Maybe if all of the object just referenced a read only version of the world state, and we copied over the updated version at the end of the frame… Hey, wait a minute…"

I don't know if he got this explicitly from the Strachey lineage, or if it's the inevitable approach if you try to use immutability to tackle concurrency.

[1] http://www.gamasutra.com/view/news/169296/Indepth_Functional...


The way I have looked at this is that a big problem is the confusion of "objects" with "abstract data types" and the clinging to "data" and "assignment statements". As soon as encapsulation is really enforced the object can do what it needs to deal with the design parameters (including keeping histories, etc.).

Again: it's really worth looking at Dave Reed's MIT thesis ...


The best way to predict the future is to update^H^H^H^H^H^Hcreate it.


Many of the ideas you discuss here were formalized in the 70s by Amir Pnueli and others into the LTL and CTL temporal logics -- ideas that still form the core of much of software verification today -- and those heavily influenced both Lamport's temporal logic TLA and specification language TLA+, and the synchronous language Esterel, by Gérard Berry[1] (Wikipedia says that the synchronous language Lustre was influenced by Lucid) -- "time doesn't exist between stable states". Both TLA+ and Esterel have achieved a certain degree of success in the industry (to this day), although mostly in safety-critical realtime software and hardware design; the ideas are just starting to flow to "everyday" software. While they have a very explicit, clear notion of time modality and are "pure", they are not functional.

I think that as many ideas are crystallized, it is no longer helpful to characterize every pure and/or temporal-logic approach as functional; indeed, the two notions may often be at odds. (Pure) functional programming, at least as (sort of) defined today, is not directly about modeling time and certainly does not impose synchronicity (a clock), but is more about the idea of denotational semantics, where computations are imagined as functions, and function application imposes an equivalence relation (the function application is said to be equal to its result in this relation). This is very different from the philosophy of synchronous languages.

[1]: The modern language Céu is based on Esterel: http://ceu-lang.org/


Thanks for the additional references (Yes, I should have mentioned Amir's work also). A good basic point is that the central ideas of "reasoning with time" go back into the early 60s and -- unfortunately -- did not get into the mainstream thought patterns of everyday computing.

Terms often lose their meanings as they become religions and more rigid choices and styles. The two in question here are certainly "object-oriented" and "functional programming"!


> Terms often lose their meanings as they become religions and more rigid choices and styles. The two in question here are certainly "object-oriented" and "functional programming"!

Absolutely, especially when they're vague. But I still think that the synchronous or temporal-logic based languages (which I love!) are very much distinct from FP (at least in the Haskell "tradition"), although they share with each other -- and not with the classical imperative approach -- the explicitness of state transitions: the synchronous/TL model with its clocks/steps, and the FP model with its (hideous) monads. They also "feel" very different (Esterel/Céu and even TLA+ feel more imperative than functional). It is not surprising that the killer-app for the FP approach is compilers (and other code analysis tools), while the killer-app for synchronous/TL approach is reactive systems, two application classes that are at the opposite, extreme ends of Pnueli's (I think) classification of programs (transformational, interactive and reactive).


I don't identify the general notion of "functional programming" with any particular language, including Haskell. I stay with the "idea of a function" as a reliable mapping. Similarly, I don't identify the general notion of "object-oriented" (as I characterized it long ago) with any particular language. Both of these terms have now been "colonized" and their meanings changed, etc.


What are your current thoughts on objects and simulation in distributed environments? You have historically worked or advised on projects using approaches that never reached the mainstream. I'm thinking primarily of the nice properties of pseudotime and the synchronous p2p messaging in Croquet.

Most people building commercial distributed systems today accept a different set of tradeoffs, generally preferring fast performance and the avoidance of possible deadlocks even though it means giving up some of the nice semantics and consistency of the operations.

Do you still think it is possible to build a system with the organically-growable scale of the internet but with some of the nice, predictable pseudotime-like semantics?


Good questions. To keep my first comment as short as I could make it, I didn't mention the next wave of researchers who got interested in this perspective on computing, for example: Dave Reed ("the slash in TCP/IP") in his 1978 MIT thesis, Dave Jefferson and Henry Sowizral at RAND ("Time Warp"), Leslie Lamport, etc.

And the next round about 15 years ago -- for Croquet -- which included Dave Reed, David A Smith, Andreas Raab, myself, etc. There were three or four different versions of this mechanism, all of which worked, and the designs progressively got better.

There are several sets of issues and tradeoffs. Reed's initial ideas in the 70s were about what an operating system for the whole Internet should look like. One of the many interesting ideas he proposed was to deal with the slow latencies and large numbers of potential users by distributing cloned computations organized by pseudo-time, and using the slow Internet for just input and synch deltas. This is what we first implemented in Croquet in the early 2000s when a typical good Internet round trip for a ping was about 80-100 milliseconds. This is good enough to create a massively parallel game like World of Warcraft (or even a flight simulator game) without any servers at all by just using the distributed machines that the game players are using to play on.

Further versions did more, and more comprehensively.

(A side note is that Croquet without the real-time graphics and interaction, etc. automatically provides a distributed object-oriented data base, etc. Pseudo-time is the larger idea underneath protected distributed atomic transaction data-bases, etc.)

Your last question is a good one, and it has been discussed over the years. Can it be done? How much would need to be done, and where? Etc.

The object work we did at Parc was a part of the network thinking going on in the ARPA/Parc community. And there was an eventual intent to think of higher-level network entities. A quite good intermediate real implementation of "net level entities" was the LOCUS system by Gerry Popek and his group at UCLA after he spent a year at Parc thinking about what network systems should be like (this was really good, and the first several chapters of his book are well worth reading). It was a migratory load balancing idea over heterogeneous machine types, which was quite complementary to the pseudo-time ideas.

I would like to see a highly talented group take another pass at your last question. Resilience to scaling is quite difficult to predict ahead of time, and it's good to do real implementations. Still, the 9 orders of magnitude that the Internet needed was possible with the scheme that the ARPA/Parc community had, and Croquet worked perhaps better than it should have (especially with just a few people working on it).

In any case, I think it is a bit crazy to not use pseudo-time if you know about it, and it's a bit amateurish not to be aware of it. In the middle are religions whose identities are built on various "one true way"s (and this is the opposite of the way any science should be conducted).


Thank you very much for this answer. I am glad to hear that you still expect pseudo-time approaches to yield fruit. And I appreciate some pointers to more recent work in this direction.

I only recently read David Reed's thesis and did not know about this later work until you mentioned it. Almost all of the links I can find about Croquet no longer work. (OpenCroquet and OpenCobalt websites seem to have disappeared.) Can you recommend reading for someone who wanted to understand the design decisions made in the several versions of Croquet?

I have been experimenting with layering pseudo-time on top of asynchronous message-passing in Erlang. One useful trick which I have not yet seen published is that you can have two-dimensional pseudo-times, so that one running system can track multiple chains of causation. (You would not want this happening all the time, since the whole point is to get a consistent history everywhere. But it is very useful for developers.)

An example: Let's say I have an existing object and a clone of it where I have changed some of the methods' behavior, but I want to understand all the consequences of these differences across all the distributed objects they cooperate with. I can send them the same message with different y-coordinate pseudo-times. In a system that relied on wall clock time, the side-effects of these two message sends would merely step on each other non-deterministically. But with pseudo-time, their effects are causally distinct throughout the system and can be dynamically inspected, making it a lot easier to build a human understanding of the behavior of the distributed system.
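
A toy sketch of how such a two-dimensional pseudo-time might be represented (my own guess at a representation, in Scala; it is not the commenter's Erlang code):

    // Sketch: a pseudo-time with a second "branch" coordinate, so the
    // effects of two variants of the same message stay causally separate.
    object PseudoTimeDemo extends App {
      final case class PseudoTime(step: Long, branch: Int)
      implicit val ptOrdering: Ordering[PseudoTime] =
        Ordering.by(pt => (pt.step, pt.branch))

      final case class Message(payload: String, at: PseudoTime)

      // the same logical step, sent along two branches of causation
      val a = Message("original object's reply", PseudoTime(42, branch = 0))
      val b = Message("modified clone's reply", PseudoTime(42, branch = 1))

      // within a branch history is totally ordered; across branches the
      // two chains never step on each other and can be inspected separately
      println(List(b, a).sortBy(_.at))
      println(List(a, b).groupBy(_.at.branch))
    }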


Yes, I think you are characterizing what is called "possible worlds reasoning" in AI. A notable system was "ART" (Automated Reasoning Tool) by Inference Corp some years ago. More recently you might be interested in the "Worlds" work by Alex Warth et al. http://www.vpri.org/pdf/tr2011001_final_worlds.pdf. This is a highly efficient design and implementation of fine grain pseudo-time contexts for general programming.

But what you are doing is good and important. We like a quotation by Goethe: "We should all share in the excitement of discovery without vain attempts to claim priority". In other words, it doesn't matter when good ideas happen -- it is enough that they do happen.


Have you tried putting the links in Archive.org's Wayback Machine[0]? Might produce some results

[0] https://archive.org/


> The key notion here is that "time is a good idea" -- we want it, and we want to deal with it in safe and reasonable ways -- and most if not all of those ways can be purely functional transitions between sequences of stable world-line states.

...and it is a nice way to make games:

[1] 'How to Design Worlds: Imaginative Programming in DrScheme', 2008, Felleisen, Findler, Fisler, Flatt, Krishnamurthi -- http://world.cs.brown.edu

[2] https://www.realmofracket.com/games.html

[3] https://docs.racket-lang.org/teachpack/2htdpuniverse.html

[4] http://www.1k3c.com


Just a bit of a fun post on the subject, I quite like this koan about OO vs Closures [0]:

The venerable master Qc Na was walking with his student, Anton. Hoping to prompt the master into a discussion, Anton said "Master, I have heard that objects are a very good thing - is this true?" Qc Na looked pityingly at his student and replied, "Foolish pupil - objects are merely a poor man's closures."

Chastised, Anton took his leave from his master and returned to his cell, intent on studying closures. He carefully read the entire "Lambda: The Ultimate..." series of papers and its cousins, and implemented a small Scheme interpreter with a closure-based object system. He learned much, and looked forward to informing his master of his progress.

On his next walk with Qc Na, Anton attempted to impress his master by saying "Master, I have diligently studied the matter, and now understand that objects are truly a poor man's closures." Qc Na responded by hitting Anton with his stick, saying "When will you learn? Closures are a poor man's object." At that moment, Anton became enlightened.

I was surprised at having come to a similar understanding after implementing a very basic OO system in Scheme some years ago.

[0] - http://c2.com/cgi/wiki?ClosuresAndObjectsAreEquivalent
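
A small sketch of the equivalence the koan points at, in Scala rather than Scheme (the names are invented): an "object" built from a closure over its state, and a "closure" built from an object with a single method.

    // Objects from closures, closures from objects.
    object KoanDemo extends App {
      // An "object" as a closure: the state is captured by the returned
      // function, which dispatches on a message name.
      def makeCounter(): String => Int = {
        var count = 0 // state hidden inside the closure
        (message: String) => message match {
          case "increment" => { count += 1; count }
          case "value"     => count
          case other       => sys.error("does not understand: " + other)
        }
      }

      // A "closure" as an object: captured state plus a single method.
      class Adder(n: Int) { def apply(x: Int): Int = x + n }

      val counter = makeCounter()
      counter("increment")
      println(counter("value")) // 1

      val addTwo = new Adder(2)
      println(addTwo(3))        // 5 -- addTwo behaves like a function
    }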


I was recently working on some ML transformation code. The whole transformer/estimator thing that is pretty standard now in ML libraries. I got really frustrated when I realized that a "pipeline" made of "transformers" is just a composed function where each individual function in the composition is curried/closed over its own parameters.

I thought "why don't they just keep this simple and use functions instead of all these cumbersome classes!"

Then I started to write something using function composition to keep things simple and got annoyed because sometimes I wanted some helper functions on my functions and it was cumbersome to do that. And I thought "I have made a mistake. The true nature of these things is objects."

And then I didn't actually become enlightened. Because everything's an object which is also a closure and we're all made of stardust. But none of that means anyone is going to understand or use my library. Sometimes it's necessary to give things names people expect and keep the higher order abstractions to myself.


I was going to post this -- beat me to it!

Once you internalize the truth that Objects and Closures are two ways to express the same sort of things, your mind is free to craft wonderful solutions.


PHP implements closures as objects and I wrote a blogpost which shows the equivalence at http://drupal4hu.com/node/339.html .


C++ too - a lambda expression (with optional captures) is literally defined in terms of transforming the expression into an instance of a local unnamed class. And of course Java where the equivalence is supremely explicit - before Java 8 anonymous inner classes were the way to implement closures, and the Java 8 lambda expressions are merely syntactic sugar for creating an equivalent AIC instance.


Ah, this reminds me of all the different proposals for closure syntax in Java before it was implemented; the following links compare some of the proposals:

1. http://www.artima.com/weblogs/viewpost.jsp?thread=180638 (Inner Classes or Closures)

2. http://www.artima.com/weblogs/viewpost.jsp?thread=182412 (Clear, Consistent, and Concise Syntax (C3S) for Java)

3. https://www.artima.com/weblogs/viewpost.jsp?thread=202004 (Comparing Inner Class/Closure Proposals)

PS. I remember the Artima articles because I liked them at the time.


> Java 8 lambda expressions are merely syntactic sugar for creating an equivalent AIC instance

I think that's a decent way to explain java's lambda expressions to developers without fp experience, but it's not technically correct. Lambda expressions are compiled into static methods in the class file.

/pedantic


Hah, I never thought of it that way. In Lisp, you use lambdas to define classes. In C++, you use classes to define lambdas.


I gave a talk about this at Scala Days NY, and I'll give it again in Berlin in a few weeks if I remember to actually book transport and a hotel. I'll just try and summarise here, because I don't have the energy or time to write out my talk :-)

I think FP and OO are fundamentally different in how they approach programming: the way they break down problems, what they consider to be good code, and so on. OO views code as small units of encapsulated state communicating with one another. FP views code as a series of transformations. OO code is not good FP code, and vice versa.

On a lower level you can find many similarities. OO patterns are largely FP language features, from a time when knowledge of FP was not very widespread. At least five of the Gang of Four patterns are first class functions in disguise. Going the other way, a very typical pattern in functional programming is to build up a description of what should happen, and then have an "interpreter" actually execute that description. These interpreters often look like "coalgebraic" structures, which is an FP term for ... objects. Now isn't that interesting?
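
A compressed sketch of that describe-then-interpret pattern (a made-up toy calculator, not code from the talk): the "program" is built as plain data, and separate interpreters decide what it means.

    object InterpreterDemo extends App {
      // the description: no computation has happened yet
      sealed trait Calc
      final case class Lit(value: Int)           extends Calc
      final case class Add(lhs: Calc, rhs: Calc) extends Calc
      final case class Mul(lhs: Calc, rhs: Calc) extends Calc

      val program: Calc = Mul(Add(Lit(1), Lit(2)), Lit(10))

      // one interpreter: evaluate to an Int
      def eval(c: Calc): Int = c match {
        case Lit(v)    => v
        case Add(l, r) => eval(l) + eval(r)
        case Mul(l, r) => eval(l) * eval(r)
      }

      // another interpreter over the same description: pretty-print it
      def show(c: Calc): String = c match {
        case Lit(v)    => v.toString
        case Add(l, r) => "(" + show(l) + " + " + show(r) + ")"
        case Mul(l, r) => "(" + show(l) + " * " + show(r) + ")"
      }

      println(show(program) + " = " + eval(program)) // ((1 + 2) * 10) = 30
    }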

In summary, the techniques often look very similar if you squint, but the fundamental way they approach programming is very different.

That's a very quick sketch, which is somewhat hazy on the details. Apologies for the brevity and lack of nuance.


That's really neat you've given a talk about this. If by any chance the talk is recorded it'd be great to watch it!


Looks like he's posted slides here: https://twitter.com/noelwelsh/status/730449315789934593

I also really liked his talk on probabilistic programming: https://www.youtube.com/watch?v=e1Ykk_CqKTY


Awesome, thanks!


The talk was recorded. I don't know when it will be online. The talk doesn't go heavily into details (no talk of coalgebras) but spends time trying to explore what good code means in the FP and OO paradigms.


Well-defined object systems map objects to objects. Why would it be surprising that category theory can describe these things?

The questions for software development are about creativity and economy. The jury is still out on these, to the best of my knowledge.


In the past five years, I've written C#, Ruby, and Javascript in a style I'd describe as "OO, but with a heavy functional accent"; it might even be fair to call it "object-functional".

A few little things I've identified that I do, that aren't totally mainstream:

- Don't reassign parameters, in fact, don't reassign things at all, if you can avoid it; show your work. It makes for more expressive code, and you'll have fewer bugs.

- Think of your code as doing "evaluation", not "computation". Code should take a bunch of inputs, do some stuff to them, and then return an "evaluated" response, vs. the viewpoint of "doing a procedure".

- Try not to make functions side-effecting. Avoid "void" unless you really need it, and really, really avoid distant side-effects.

- Don't be afraid to return a new object that seems "large" from a method. If you do it right, you'll mostly be copying references anyway which is pretty cheap.

- Share data liberally by reference all over your code.

- Don't be afraid to use a bit of recursion.

On the other hand, I'm not afraid of a little grubby OO now and then, especially when dealing with I/O, file handles, network streams, etc.

All to say, I think the two styles are about 90% compatible but there are some parts that don't mesh well. I'm not afraid to mix them, though.
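
A tiny sketch of the "evaluation, not computation" flavour of the advice above (the domain and names are invented): inputs go in, an evaluated result comes out, and nothing is reassigned or mutated along the way.

    object EvaluationStyleDemo extends App {
      final case class Order(items: List[BigDecimal], discount: BigDecimal)
      final case class Invoice(subtotal: BigDecimal, total: BigDecimal)

      // every intermediate value gets a name; no reassignment, no void,
      // no distant side effects -- the "large" result is cheap to return
      def invoiceFor(order: Order): Invoice = {
        val subtotal = order.items.sum
        val total    = subtotal * (BigDecimal(1) - order.discount)
        Invoice(subtotal, total)
      }

      println(invoiceFor(Order(List(BigDecimal(10), BigDecimal(5)), BigDecimal("0.1"))))
    }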


I agree with you on all points.

Regarding side effects, it might even be possible to create almost-side-effect-free server code; but the state of most native client frameworks is just not ready for this. State can be banished/abstracted into the lower levels, but it doesn't go away.

(Speaking as an iOS dev. There is React Native, but this only handles the view layer; good luck with stateless CoreData)


Note that this was the advice of developers of distributed OO systems in the early 1980s.

See in the HOPL... "The Development of the Emerald Programming Language"


"-Share data liberally by reference all over your code. -Try not to make functions side-effecting" Aren't those 2 points contradictory?


Immutable data is very easily shared.


Doesn't "by reference" imply mutability and allow for side-effects (as opposed to by value, which cannot side-effects by design).


No. In C++, for instance, you can share data either by a const reference (or pointer), or by a non-const reference (or pointer). The latter allows mutation; the former does not.

Now, that means that the language is not forcing you to share without mutability, because you can leave off "const". That is, you have to be disciplined. But if you put "const" on the reference, the compiler won't permit you to modify the data through that reference.


no, the shared data is not modified


Ok, I understood "sharing data" as equal to read and write ...


The introductory Scala Coursera teaches this style -- and to a large extent, writing code in this style is a significant reason why Martin Odersky created Scala.

For me, it's a definite "accent" that I have when I write code in other languages.


I was reading "The Early History of Smalltalk" a while back (http://worrydream.com/EarlyHistoryOfSmalltalk/) by Alan Kay, and the thing that jumped out at me was goal-oriented methods (see the section "Object-oriented" Style). That is, when you are calling other code you don't tell the other code how to do something – and of course you don't go poking into what the other code "knows" (i.e., accessing properties on an object) – instead you say "please, make it this way". The code may succeed or may fail, but you aren't micromanaging the code.

From my time with Smalltalk I found this pattern quite remarkable, though also at times hard to trace. You'd tell an object to do something, and it might access some object it knows about, maybe do a test or guard, and then it would tell the other object to go do that thing. And the other object might in turn delegate, many levels deep. The result could be really incredible and compact, though it also felt like it wasn't actually easier to understand. Kind of Ravioli Code (http://c2.com/cgi/wiki?RavioliCode), though not the kind of modern Ravioli Code you might see because someone takes DRY too seriously -- instead it reminded me more of a mathematical proof. Mathematical proofs are really cool, but they do not contain the information necessary to reproduce them; a mathematical proof is the result of studying and pondering and struggling with a problem for a long time until you reduce it to something compact and beautiful. The result seems magical, but in part because you threw away all the struggling that produced the result. That's what the core Smalltalk class hierarchy felt like to me: beautiful, maybe not reproducible, not always accessible. All of which is a critique of one place and time of code, and OO is bigger than that, but I suspect it's also partly the result of creating goal-oriented methods All The Way Down.

Functional programming can also be goal-oriented if you can reify all your goals, because ultimately that which isn't reified (that is, made into data or state) can't be the result of a function. I don't have as intuitive a sense there: is goal-oriented functional programming an ideal people hold?

Another Smalltalk lens on OO is message-oriented programming, where you have a set of objects or actors which pass messages, but never directly "touch" each other, and in some sense objects are more autonomous than a conventional understanding of OO. This does not map well to functional programming – the purpose of messages to autonomous objects is to effect state, to make changes in the world. Of course that's the purpose of all programming, so it's not crazy. In some sense functional programming lifts the purposeful and useful outcomes of a program out of the programming, and into the environment. I would consider that a stronger contrast with OO.


Here's a weird viewpoint: OO and FP only seem different because we serialize code through text.

Some languages (spoken languages) construct sentences Subject->Verb->Object, others Subject->Object->Verb, etc, etc. The underlying relations between the parts are still there (there's an isomorphism in the graph), but the projection onto linearized words just takes a slightly different angle.

In OO languages, we treat objects as the important component, and functions as their children: bar.foo(). In FP languages, we treat functions as the important component, and objects as data passing through: (foo bar).

There's no difference in the relations, it's just a different projection from high-dimensional computation-space onto one-dimensional text that causes all sorts of confusion.

That's not to say the difference doesn't matter: projecting computation in a different way frames a different mental model for the programmer that might help or hurt, depending on the specific domain and on the specific problem.

But I think there's value in trying to see the zen in OO's and FP's isomorphism, and practicing switching projections in the same domain as a problem-solving technique.


Don't agree with this. Usually, when people refer to "OO", they're talking about subtype-polymorphism along with OO 'encapsulation' of mutable state.

These are very distinct from a FP approach. Note that in Scala, you can do 'pure FP' that looks very much like OO,

    val myData = oldData.change().map(x => x + 1).filter(x => isTrue(x))
So you can see that the verb/object dichotomy sort of breaks down. In mutable 'OO' worlds, state is always implicit (because of this).

In the large, this is much harder to reason about! Even in Scala, a language which gives us OO, we (Verizon Labs) as an organization elected to lean hard on the FP side and shy away from the OO side of things.


I agree the analogy is loose, but your comment raises an interesting question. It seems like we're talking about two different definitions of what FP is.

There's a strict, technical definition of "functions which have no side effects", which defines a coding style. But there's also a more subtle, ambiguous definition of "functions as the primary and most irreducible building block of code", which defines a coding philosophy. In my opinion, FP is the latter, so I'd actually disagree and say that your Scala code isn't FP, since it emphasizes transformations on an object.

Maybe I'm in the minority here.


The term 'function' refers to a true function, with no side effects. So, when I say "FP", that's what I mean.

>so I'd actually disagree and say that your Scala code isn't FP, since it emphasizes transformations on an object.

What I wrote is no different than

    change(oldData) |> map(x => x + 1) |> filter(x => isTrue(x))
Often the '.' style in Scala is literally syntactic sugar that will get rewritten to pass the data to a standalone function.


I like Michael Feather's quote https://twitter.com/mfeathers/status/29581296216

OO makes code understandable by encapsulating moving parts. FP makes code understandable by minimizing moving parts.

I think they can be used in a complementary manner. One area where functional thinking can be a benefit can be in code that is tested with a lot of mocks. Instead of having a bunch of objects that mutate state and that need to be mocked out, model that part of the program as a series of pure function transformations of data. It will be much easier to test and reason about.
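
A small, invented illustration of that last point: when the logic is a pure transformation over plain data, the test is just input and expected output, with no collaborators to mock.

    object PureTransformDemo extends App {
      final case class Account(id: String, balance: Long)

      // pure: which accounts should get an overdraft warning
      def overdrawn(accounts: List[Account]): List[String] =
        accounts.filter(_.balance < 0).map(_.id)

      // the "test": data in, data out, plain equality
      val sample = List(Account("a", 100), Account("b", -20))
      assert(overdrawn(sample) == List("b"))
      println("ok")
    }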


Tiny request -- can you indicate that the line following the link is a quote? I wasn't sure if that was from the 'tweet' or if that was your interpretation/summary of it


One way to start to think about this is the relationship between objects and abstract data types. These are both techniques for introducing data abstraction, but their approaches are different and they have different tradeoffs.

https://www.cs.utexas.edu/users/wcook/papers/OOPvsADT/CookOO... is a paper that argues them as complementary to one another. It's worth reading in its entirety, but an excerpt from the conclusion:

A detailed analysis of the trade-offs involved must be made in order to choose whether an ADT or PDA ["object"] is better suited for a problem. Some relevant questions in this choice are: How likely is it that the system will be extended? Must the security of the system be preserved, even at the expense of extensibility? Is it possible the unforeseen interactions may be desired? Is the environment dynamically extensible, and must the additions interact in complex ways with existing code? How much efficiency is required? Is it likely that there will be a large number of binary operations with complex behavior?

In general, ADTs tend towards tight coupling, yielding more predictability, performance, etc. Objects tend towards looser coupling, allowing more flexibility and extensibility. This is a reasonable microcosm of the relationship between FP and OO.
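
A compressed sketch of that contrast, using a toy integer set in Scala (my own paraphrase of the kind of example the Cook paper uses, not code from it):

    object AdtVsObjectDemo extends App {
      // ADT style: one hidden representation; operations inspect the data.
      // Easy to add operations; a new representation touches every operation.
      sealed trait IntSetADT
      case object Empty extends IntSetADT
      final case class Insert(n: Int, rest: IntSetADT) extends IntSetADT
      def contains(s: IntSetADT, x: Int): Boolean = s match {
        case Empty           => false
        case Insert(n, rest) => n == x || contains(rest, x)
      }

      // Object style: a set is anything that answers "contains?".
      // Easy to add representations; a new operation touches every object.
      trait IntSetObj { def contains(x: Int): Boolean }
      val emptySet: IntSetObj = new IntSetObj { def contains(x: Int) = false }
      def insert(s: IntSetObj, n: Int): IntSetObj =
        new IntSetObj { def contains(x: Int) = x == n || s.contains(x) }
      // a representation the ADT version cannot gain without editing its operations
      val evens: IntSetObj = new IntSetObj { def contains(x: Int) = x % 2 == 0 }

      println(contains(Insert(3, Empty), 3))   // true
      println(insert(emptySet, 3).contains(3)) // true
      println(evens.contains(4))               // true
    }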


Disclaimer: I am surely not experienced enough to give a highly informed opinion on the topic.

As I mention in the opening, I do not think there is a strong dichotomy between functional and OO programming. I believe historical accident created "Objects vs functions", and by historical accident I mean Java. It is unclear to me why you would not allow creating plain simple functions and instead force everything into objects. Probably a misunderstanding of "everything is an object".

Famously, Alan Kay emphasizes "message passing" as the key characteristic of object orientation. Because every discussion about OOP and functional programming eventually ends up with state as the main discussion point, I fail to see how a definition of OO based on message passing contrasts with the benefits of functional programming around state management.

In fact the way I see it is that when we have more units communicating with each other they form a system that should be functionally pure from an external point of view but can be OO in its internal implementation.

EDIT: Continuing my train of thought: the problem with OO in its current common form is that objects often/always leak details about their state. So you have a tuple (state, [actions]) where often using an action is invalid given the tuple's state.


This is a really interesting question and much work has been done.

For a long time, lambda-calculus has been seen by a lot of programming language researchers as the foundation of programming to which every other paradigm has to be reduced. This research tradition produced much work on encoding object orientation (whether class-based or object based) into lambda-calculus. Ideally such encodings should be sound and complete. That has proven difficult. In parallel, functional languages acquired OO extensions. The first serious implementation to make this (sort of) work was OCaml. Arguably, all that work wasn't quite convincing. Eventually, it became clear that lambda-calculus isn't the basis of all other programming paradigms. Instead people tried the reverse: encoding lambda-calculus into object-orientation.

Scala really nails this, and validates your intuition that there isn't a dichotomy. Instead FP is a special case of OO.

If you want to understand the relationship between OO and FP, there is no better way at this point than to learn Scala and study how lambda-calculus (both at the program and type level) is encoded into class-based object orientation.


This is unfortunately not true, though I think the factually incorrect parts (as opposed to those I simply disagree with) are due to misuse of terminology more than anything.

First, the factually incorrect:

> For a long time, lambda-calculus has been seen by a lot of programming language researchers as the foundation of programming to which every other paradigm has to be reduced.

... not really. The lambda calculus is a way to represent computation, which is Turing equivalent; it can encode any computation which a Turing machine can. You can reduce any program to either a Turing machine or the lambda calculus, as they are both capable of representing any computation. The lambda calculus is not inherently functional, or OO, it's much lower level. To be fair, lambda calculus did lead directly to Lisp, which has long been associated with functional programming (though it is also not inherently functional - see Emacs Lisp or Common Lisp).

> Eventually, it became clear that lambda-calculus isn't the basis of all other programming paradigms.

Again, no. The lambda calculus is as much a basis of all other programming paradigms as the Turing machine is.

> encoding lambda-calculus into object-orientation

After the above explanation, hopefully it's clear that this statement doesn't really make sense. Again, I'm guessing that this is mostly due to incorrect use of the term "lambda calculus" - you likely meant to use another term.

On to the stuff I just disagree with:

> FP is a special case of OO

Highly disagree. FP and OO are simply two different methods of organizing and connecting code and data - in OO, you have objects that encapsulate state and expose functions to the outside world that modify and retrieve that state. In FP, you write programs as series of functions that transform data. The fundamental difference is that in OO you are treating data and its related functions as a fundamental unit together and structure your program around connecting objects together, and in FP your program is essentially a data pipeline. The other distinctions between OO and FP (immutable vs mutable state, generic collection interfaces vs specialized interfaces for different kinds of data) stem from that fundamental dichotomy.

Scala may combine the two, and may do so elegantly (I really don't know, I've never used it), but even if its particular kind of FP is encoded in an OO system, that doesn't imply that FP at large is a special case of OO. In fact, I'd argue the opposite: a method on an object is simply a function with an implied argument (the object) in a scope where that argument is inaccessible from outside. You can see this clearly in pre-ES6 JS, where that's literally how an object is constructed (in the Crockford style - the prototype style is a bit different).


I'm afraid this is wrong.

Yes, all programming languages are equivalent in the sense of the Church-Turing thesis. From the point of view of PL research, that means equivalence with TMs is an uninteresting criterion for comparing PLs. Consequently, PL researchers have developed finer tools for comparing languages, typically centring around compositionality. An encoding enc from language L1 to language L2 is compositional if, whenever P = f(P1, ..., Pn) is a program, enc(P) is of the form enc(f)(enc(P1), ..., enc(Pn)). This idea, which I present here in a very simplified form, has been studied quite deeply, and in this sense it has been difficult to reduce OO to FP, unlike the other way around.

And yes, lambda-calculus is the essence of FP. Indeed there is no PL paradigm that is so closely connected with a single formalism as FP.

   a method on an object is simply ...
This view is too simplistic, and doesn't scale up to include OO features such as method overriding.

In summary: OO is substantially more expressive than FP.


Not really.

The goals of OO features such as method overriding are accomplished by typeclasses or implicits in FP.

It's just that the default polymorphic mechanism for FP is parametric polymorphism, and the default mechanism for OO is ad-hoc polymorphism.

OO didn't have parametric polymorphism initially, and added it with Generics.

FP didn't have ad-hoc polymorphism initially, and added it later with typeclasses or implicits.

Choosing OO or FP really just puts you on one of the two sides of the https://en.wikipedia.org/wiki/Expression_problem . They are more complementary than proper subsets of each other.
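
For readers who have not seen the two defaults side by side, a minimal Scala sketch (invented names): subtype polymorphism puts the behaviour in the class hierarchy, while a typeclass attaches it after the fact.

    object PolymorphismDemo extends App {
      // OO default: behaviour lives in the hierarchy (subtype polymorphism)
      trait Showable { def show: String }
      class Point(x: Int, y: Int) extends Showable { def show = s"($x, $y)" }

      // FP-style ad-hoc polymorphism: a typeclass instance, added later,
      // even for types we do not control (Int, List)
      trait Show[A] { def show(a: A): String }
      implicit val showInt: Show[Int] =
        new Show[Int] { def show(a: Int) = a.toString }
      implicit def showList[A](implicit s: Show[A]): Show[List[A]] =
        new Show[List[A]] { def show(as: List[A]) = as.map(s.show).mkString("[", ", ", "]") }

      def describe[A](a: A)(implicit s: Show[A]): String = s.show(a)

      println(new Point(1, 2).show)    // dispatch via the hierarchy
      println(describe(List(1, 2, 3))) // dispatch via the typeclass: [1, 2, 3]
    }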


I agree that implicits and type-classes can provide ad-hoc polymorphism, but I'm not aware that they have been reduced completely to FP without them; I'm also not aware of implicits and type-classes working in an untyped setting. If you have references to the contrary, I'd be interested to hear about it.


Well if one is looking for proofs, I think this is hard while remaining abstract from specific languages.

I could counter that support for parametric polymorphism is poor in OO langs as much as you claim support for ad-hoc polymorphism is poor in FP langs (but I don't place much stock in any of the aforementioned claims outside of some kind of artificial, academic context).

Real-world problems that aren't an academic edge-cases will be supported by both FP and OO.

Multimethods give you ad-hoc polymorphism in an untyped setting.


Parametric polymorphism is a concept that is orthogonal to the FP/OO distinction. You can define parametric polymorphism for parallel computation, for example.


> In summary: OO is substantially more expressive than FP.

In the same way assembly is substantially more expressive than C. Expressiveness in the way you've defined here is the least interesting aspect of a programming language.

Since it's well established that OO and FP are equivalent in the sorts of things you can compute, you ideally want a paradigm that makes it easy and intuitive to code up computations versus one that is strictly 'expressive'.

> This view is too simplistic, and doesn't scale up to include OO features such as method overriding.

I'm pretty sure this is simply false. Modern Haskell with GADTs for example can easily encode any aspect of OO, including method overriding. Do you have a counter-example in mind?


   counter-example in mind?
I note that GADTs are types, so this might not scale to an untyped setting. Anyway, I'm not following research on GADTs closely, but can you show me a sound and complete, compositional encoding of (idealised) object-calculi a la Abadi/Cardelli using (idealised) Haskell?


I think the issue is not that mafribe misuses "lambda calculus". I think the issue is that you missed mafribe's use of the word "paradigm".

Two programming paradigms may be both Turing complete (and therefore formally equivalent to each other), but the paradigms may still be quite different. Yes, at the level of lambda calculus structured programming and functional programming and OO are all equivalent. That doesn't make structured programming into functional programming, though. As paradigms, they are radically different.


"You can reduce any program to either a Turing machine or the lambda calculus"

Can you easily reduce a program with concurrency to either of these? If not then you are leaving out a large class of programs and systems. I think this is why Milner invented the pi calculus (a calculus of objects)


Concurrency (understood as message-passing concurrency) is much more expressive than either OO or FP. Milner invented pi-calculus because its predecessor (CCS) was not expressive enough. One example of the lack of expressivity is that the encoding of Turing machines into CCS is quite awkward, whereas the encoding into pi-calculus is smooth. Still, pi-calculus has expressivity limits, e.g. n-way synchronisation (n > 2) a la Petri nets cannot be encoded in a compositional way, and neither can broadcasting or Ambient-calculus mobility.


Stuart Halloway has a great talk on this called "Simplicity Ain't Easy" [0] where he more or less states that an FP language like Clojure is basically a better-designed OOP language, because FP basically contains all of OOP's strengths (namespaces, abstractions and the like) and none of its weaknesses (e.g. uncontrolled state mutation), and you are able to select and use the specific features you need to solve a specific problem, rather than being saddled with every feature (including the ones you do not need to solve the problem at hand) at once.

[0] https://www.youtube.com/watch?v=cidchWg74Y4


For my money, OOP and FP are two axes on the same Expression Problem [0]. In OOP languages, it's easy to add data on which a fixed set of functions can operate. In FP languages, it's easy to add functions that can operate on a fixed set of data. This is why there will never be a distinct "winner" between the two.

It's also why you have Greenspun's Tenth Rule, and why there is probably a Smalltalk corollary for FP languages. You eventually need to be able to solve both halves of the expression problem.

[0] https://en.wikipedia.org/wiki/Expression_problem


Maybe the future of programming is a language with a highly dynamic, flexible syntax that can switch axes easily/visually/syntactically (or gasp <meta>programmatically!</meta>).


Rich Hickey has mentioned that abstracting I/O devices is an example of where OO can make sense. Using OO properly requires a honed intuition about how to build large, stable software systems and to see occasions where, in such systems, some component or aspect would truly benefit from a bit of OO sprinkled in. I think one great way to develop such an intuition is to pick up a functional language and use it for a significant period of time (a couple of years) -- because this lets you see how far you can get just by writing simple functions and passing around simple data structures without any classes.


OO is when you really love objects, you put objects in objects.

FP is when you really love functions, you put functions in functions.

Of course, if you like both, you can put functions into objects and objects into functions.


I really wish more people knew about OCaml's object system.

I would actually argue that OCaml has a better OO system than most OO languages, but it looks and feels quite different (it's "functionalish": immutable by default, objects are structural and can be classless, etc.).

In the end, despite its strength, it's used very little in the OCaml community: Modules are better 95% of the time.

The gold standard for doing nice encapsulation, abstraction and separation of concern is neither functions nor objects, it's ML modules.


My take on this is that computation is the act of describing different states that the processor can be in - or describing the path from one state to the next.

I think an understanding of state is fundamental - one thing that brought this home to me was a talk I watched given by Sophie Wilson [https://en.m.wikipedia.org/wiki/Sophie_Wilson] in which she displayed an image of a CPU. The image showed a 'path' across the CPU that was the current state; the next image showed a different path describing a different state. The path was a lot like a bolt of lightning.

An understanding of functional programming (FP) matches this a bit more directly than OO, as FP is a pipeline of functions that describes state change and it is easy to reason about the state at any point in the pipeline. OO represents a real-world object, so you are encouraged to think about the problem you are modelling rather than how to represent different states. But obviously in the end they both describe state and logic that describes state transition.


Neat take on it. Finite state machines tend to be very robust since the designer has to lay out all the state transitions beforehand. Everything else is all combinational logic. Bad OO tends to have state spread out everywhere, but even good OO usually won't be so strict about state.


Yeah - I think FP forces you to think about state and state transition while OO forces you to think about your abstraction and modelling. So if you do OO without consideration of what's going to happen next - the compilation and what the computer is actually doing, i.e. moving from state to state, then it's easier to make bad design decisions.


It seems to me that this is all about access to data (state).

Pre-OO (and FP), we had structured programming, where the data lived in variables or structs, and anybody at all in the entire program could alter the data. That turned out to be difficult to maintain, because if the data wound up in a bad state, any line in the program could have changed it.

OO (as typically used) meant that an "object" owned the data, and nobody else could touch it. That meant that, if the data wound up in a bad state, only a very small section of the program could have (directly) changed it. That was a huge improvement.

In (pure) FP, the data disappears. You don't talk about it directly (except within one function, where you have bindings to it). The data disappears into the plumbing (the connections between functions), and nobody changes it. If it was computed correctly when created, it's correct forever, because nobody can change it.

In this sense, both FP and OO are trying to solve one of the biggest problems with the old structured paradigm.


One relation I feel is that OO stresses that behavior and data be part of a single entity. The functional paradigm asks to keep them separate, with immutable data structures being passed to pure functions that have the behavior.


Eh, when you get down to it they tend to be two extremes, with most programming being somewhere in the middle. There is a great article [0] that discusses the absurdity of talking about object-oriented programming by comparing it to talking about 'pants-oriented clothing', in that it misses the big picture.

[0] http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom...


The biggest difference I've seen in my career:

- OOP code is usually separated and organized by the principal data it operates on, i.e. the instance of a class, and more specifically the instance's fields.

- FP code is usually separated and organized by the more abstract or philosophical responsibility of the functions, and they may not always share the same arguments.


I suggest people take the Pharo MOOC on FUN. I have to admit, Smalltalk has a very different flavour of OO. It's very tiny, and helps thinking in tiny reusable pieces of logic from objects to objects, not very far from function composition.

https://news.ycombinator.com/item?id=11690331


I think they are both concepts with different traits or required features. These are my subjective definitions:

OOP:

- Everything is an object

- Inheritance (an object inherits all features of its class and super classes)

- Polymorphism (one object can replace another if the required class is one of its own or its super classes)

- Encapsulation (an object exposes only what it is meant to expose (black box))

FP:

- Functions can be used as values (therefore it is possible to write higher order functions)

- There is no state (no variables), so any function call with its input parameters defines the result and can therefore be cached quite easily

So in theory it should be possible to combine the two concepts, but in my opinion it doesn't make much sense as they have different strengths and weaknesses (e.g. introducing state to FP is a very bad idea...).

Sadly many languages try to combine both concepts and implement both poorly and on the other hand there are too few people who actually understand the concepts and use them appropriately.

As a rule of thumb:

- Take OOP if you have to model the relationship and interaction between things (objects)

- Take FP if you have to calculate something (like Excel formulas do)


>As a rule of thumb ...

I've heard variations on that a lot, but have not found it to be true at all. I've tried 3 times over the years as I got better at programming to write a server for a little mmorpg in OO languages. Each time it turned into a buggy, unmaintainable mess. On my fourth try I was using ocaml because I thought a better type system could help. Again the OO version I wrote initially was buggy and terrible. But being ocaml, I could rewrite bits and pieces in a functional style and improve it gradually. I ended up with no OO code at all, and very little imperative code (which was only for performance reasons). This is the very definition of "you have to model the relationship and interaction between things (objects)", but OOP was terrible and FP is great.


I think Scala is doing a decent job at bringing these concepts together. Scala ticks all the OOP points you made - even more than Java does (Java has special primitives for ints/booleans etc.; in Scala that's more hidden - really, everything is an object). FP also likes encapsulation as you defined it - pure functions that don't do side effects (even logging) without it being explicit in the type signature.

FP can model relationships/interactions OK. They just have new tools/techniques to do it. For example, instead of mutating things, you create a copy with the change. You then push the actual state/mutation to the edges of your system (e.g. the database), and thus hopefully make it easier to reason about. But it can be slower or hard to change things deep in an object graph. For slowness - we have persistent collections which re-use unchanged parts of the collection and implement performant "copying". For deep structures, Haskell & Scala have lenses.
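
A tiny sketch of the "copy with the change" point, and of why deep structures get awkward enough that the lens libraries mentioned above exist (names invented):

    object CopyDemo extends App {
      final case class Address(street: String, city: String)
      final case class Person(name: String, address: Address)

      val before = Person("Ada", Address("Broad St", "London"))

      // "changing" the city means copying along the path from root to change;
      // lenses automate exactly this kind of nested copy
      val after = before.copy(address = before.address.copy(city = "Cambridge"))

      println(before) // untouched -- old readers still see a consistent value
      println(after)
    }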

It's still an ongoing exploration, with people borrowing/discovering new techniques, but I think it's working quite well and I'm happy to keep exploring.

In the other direction, I've been following Rust. I find it very interesting that it has a very good type system but still has mutation. Instead of avoiding mutation, Rust tracks ownership in the type system, and thus keeps it easier to reason about. By doing this, it makes performance and memory management easier to reason about, without making it too hard to reason about where effects are happening.


Smalltalk has a lot of similarity with functional programming. Smalltalk's True and False instances are basically Church Booleans, and Smalltalk collections are frequently processed by passing a Block (essentially a first-class function).

I joke that if Smalltalk were invented today, it would be called a "functional/object hybrid language for creating Internal DSLs"
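
To make the Church Boolean point concrete, here is a rough Scala sketch of that flavour (not how scala.Boolean actually works, just the idea): the conditional is a message sent to the boolean object, and the branches are passed as thunks, much like Smalltalk's ifTrue:ifFalse: blocks.

    sealed trait Bool {
      // "if" is just a method on the boolean; the branches are passed by name.
      def ifTrueIfFalse[A](whenTrue: => A, whenFalse: => A): A
    }

    case object True extends Bool {
      def ifTrueIfFalse[A](whenTrue: => A, whenFalse: => A): A = whenTrue
    }

    case object False extends Bool {
      def ifTrueIfFalse[A](whenTrue: => A, whenFalse: => A): A = whenFalse
    }

    // No built-in if/else needed: dynamic dispatch picks the branch.
    True.ifTrueIfFalse("yes", "no")   // "yes"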


I wrote the following blog post after a fairly detailed exchange with Dr Alan Kay.

http://www.brandonkeown.com/2016/01/the-interrelationship-of...


The original idea behind OOP, as articulated by Dr. Kay, was that since general-purpose computers (Turing machines) are powerful enough to simulate/approximate any process, computers themselves should be the smallest building block of a system (especially one that wants to scale -- recursion is awesome).

In general, objects are supposed to be processes (in the general sense, not necessarily OS processes). Some of those processes might be equivalent to full Turing machines, but most will be much more limited; it just depends on what kind of formal grammar they recognize. That is key to understanding objects: they are just processes that accept some input and treat it either as code (something that is translated and executed) or as data (something that is stored and forwarded). Crucially, in either case, objects do work by sending and responding to messages. This is why Dr. Kay says that the actual big idea of OOP is message passing [1].
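
One crude way to picture "objects as processes that respond to messages", sketched in Scala (this is only an illustration of the idea, not a claim about how any of Kay's systems were built):

    // Messages are just data; the object decides how (and whether) to respond.
    sealed trait Msg
    case class Deposit(amount: Int)  extends Msg
    case class Withdraw(amount: Int) extends Msg
    case object GetBalance           extends Msg

    // The "object" as a little process: given its current state and a message,
    // it produces a reply and its next state.
    def account(balance: Int, msg: Msg): (Any, Int) = msg match {
      case Deposit(a)                  => ("ok", balance + a)
      case Withdraw(a) if a <= balance => ("ok", balance - a)
      case Withdraw(_)                 => ("insufficient funds", balance)
      case GetBalance                  => (balance, balance)
    }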

Once I learned to appreciate real OOP, I realized that there is actually a natural affinity between FP and OOP rather than a dichotomy. Consider that if you're doing real OOP, you're going to end up designing lots of little languages and then implementing interpreters/compilers for them. Well, it turns out that it's comically easy to build compilers and interpreters in statically typed languages like ML (F# in my case).
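
As a taste of the "little languages" point, here is a minimal expression interpreter sketched in Scala -- an algebraic data type plus pattern matching, the same shape you would write in ML or F#:

    sealed trait Expr
    case class Num(n: Int)           extends Expr
    case class Add(l: Expr, r: Expr) extends Expr
    case class Mul(l: Expr, r: Expr) extends Expr

    def eval(e: Expr): Int = e match {
      case Num(n)    => n
      case Add(l, r) => eval(l) + eval(r)
      case Mul(l, r) => eval(l) * eval(r)
    }

    eval(Add(Num(1), Mul(Num(2), Num(3))))   // 7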

Nowadays, I use F#, C# (immutably, without setters), and SQL to simulate an environment that recapitulates what I think is the real power of Smalltalk/OOP: it can rather directly simulate the process of scientific progress itself. I now find myself designing classes that represent falsifiable statements [2] (e.g. rather than a class 'SearchEngine', I'd have something like 'LowLatencySearchEngine'). Just as scientific progress is often concomitant with theories and models that have ever more symmetry, so it is with the graph of classes that describe my system: I somehow end up with fewer central points of failure. For instance, in the case of 'SearchEngine' vs 'LowLatencySearchEngine', the latter made it apparent that my class should be initialized with a set of search providers, and that it should keep latency statistics (thus being able to falsify the proposition). The end result is that the proposition represented by the class naturally contraindicates relying on any one search engine.

There's a lot more to say about this, but the last thing I'll mention is that FP languages like ML teach you that there is a difference between a class and a (static) type. The big idea is that types correspond to logical propositions (the so-called Curry-Howard isomorphism), but I ended up there naturally just by trying to simulate the process of science more effectively, which brings us full circle to Alan Kay's big idea: computers can simulate any process; they can even simulate better computers. I would argue the scientific process is the best computer of all.

[1] http://c2.com/cgi/wiki?AlanKayOnMessaging

[2] https://en.wikipedia.org/wiki/Falsifiability


>objects are supposed to be processes

>FP languages like ML teach you that there is a difference between a class and a (static) type

DING DING DING THIS MAN GETS THE PRIZE. I've been trying to come up with a nice way to articulate this. When writing Java I try to think of "value classes" as structs, and classes as "actors".


Leaving out the 'vs' was exactly the theme behind a presentation our teammate Wojtek gave on OOP & FP: https://tech.evojam.com/2016/05/23/take-the-best-from-both-w...


Well, OO tries to represent entities, and their state, via objects.

FP emphasizes immutability. So in OO + FP, objects are treated as immutable data. Functions create new objects instead of modifying the same object (avoiding side effects).

For convenience you can also write functions that access parts of an object, instead of creating new objects to achieve the same thing.
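
In Scala-flavoured code that might look like this (made-up names):

    case class Order(items: List[String], paid: Boolean)

    // The "setter" returns a new object rather than mutating the old one.
    def markPaid(o: Order): Order = o.copy(paid = true)

    // Accessor functions just read parts of the object; no new objects needed.
    def itemCount(o: Order): Int = o.items.size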


I wrote about this here: http://somethingdoneright.net/2015/07/30/when-object-orienta...

Comments welcome.


Question: What is the difference between x.foo(12) and foo(x,12)?

Answer: Nothing but syntax and ideology.
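
For a concrete (and hypothetical) Scala illustration of how close the two spellings are:

    class Counter(var n: Int) {
      // Method style: the receiver is the implicit first argument ("this").
      def add(k: Int): Unit = n += k
    }

    // Free-function style: the receiver is just an explicit first argument.
    def add(c: Counter, k: Int): Unit = c.n += k

    val c = new Counter(0)
    c.add(12)    // method call
    add(c, 12)   // plain function call -- same effect, different spelling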


On the meta level, I came to think of the relationship between OO and FP as similar to being and doing.

Does a duck walk like a duck and talk like a duck because it is a duck (being)?

OR

Is a duck a duck because it walks like a duck and talks like a duck (doing)?


I believe OO and FP are separate. OO languages should not be extended to have functional characteristics. OO is a complete system and functional is a different system, although I cannot articulate exactly why.


Hmm... you don't "extend" OO with FP, you constrain it; OO is the richer paradigm.

See https://www.info.ucl.ac.be/~pvr/book.html



OO favors indirection to make code supposedly "maintainable." FP prefers completeness in one place, so code is more readable.


I don't think FP will necessarily translate into increased readability.

That depends on proper code organization, identifiers, formatting, documentation, coupling, good practices in general.


I would say an object is a state machine and a pure function is a transition.
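
In Scala terms that might look something like this (made-up example): the "object" is just a current state plus a pure transition function.

    sealed trait DoorState
    case object Open   extends DoorState
    case object Closed extends DoorState

    sealed trait Event
    case object Push extends Event
    case object Pull extends Event

    // A pure transition: (state, event) => next state, with no hidden mutation.
    def transition(s: DoorState, e: Event): DoorState = (s, e) match {
      case (Closed, Push) => Open
      case (Open, Pull)   => Closed
      case (other, _)     => other
    }

    transition(Closed, Push)   // Open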


They are orthogonal. Program in Scala and do both.

http://www.scala-lang.org/

Free course(s):

"Functional Programming Principles in Scala" https://www.coursera.org/learn/progfun1

"Functional Program Design in Scala" https://www.coursera.org/learn/progfun2

(There are two more free courses in the "specialization" - only the certificates and the capstone project cost money - original announcement http://www.scala-lang.org/blog/2016/05/23/scala-moocs-specia...)


"Program in Scala" is just promotion, not an answer to the question. There are more than a few languages which advertise the possibility of using both OO and functional styles.


That's correct, but in which other ones is the integration as smooth as it is in Scala?


Swift, OCaml, F#, Common LISP?


No?


I am not a Scala programmer. Can you please give some examples of how Scala integrates OO and FP better than other languages?



