When FP? And When OOP? (2013) (raganwald.com)
147 points by ausjke on Jan 28, 2019 | 151 comments



I think these concepts are often conflated. One thing I miss most in OOP and appreciate most when doing FP is ADTs, when I can represent

  Type payment = Invoice(Address) | Card(CardDetails) 
And then match/switch on those things, while the compiler will tell me if I did something wrong such as not handle one case. Adding a third type of payment (Cash) is as simple as adding "| Cash" to the definition - and the compiler will tell me exactly where I haven't handled cash payments.

A simple smell test for a programming language is this: if describing the above takes more than the above code - your language has a wart. If you try to describe this properly in C# you'll be writing an abstract outer class and N concrete inner classes, plus boilerplate for comparisons and so on. You'll be looking at likely 50-100 lines of code where no single line actually shows what you are modeling. This is the terrible part of OO. I'm sure there are a few cases where an FP weakness is handled elegantly by OO - but I'm equally sure those cases are fewer.

Also, there is nothing particularly "FP" about ADTs. They are just common in FP languages. Any OO language with a sum type and exhaustive matching can do this - but typically, as with Java, C++ and C#, they don't (yet). There are languages that support this that aren't FP (but not necessarily OO either), such as Rust and Kotlin.
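
For concreteness, here is a rough sketch of the same model in Scala, which sits in the OO camp but has sealed hierarchies and exhaustiveness checking (Address and CardDetails are placeholder types I made up):

    // Placeholder payload types, assumed for illustration.
    case class Address(street: String)
    case class CardDetails(number: String)

    // The sum type: sealed, so the compiler knows every case.
    sealed trait Payment
    case class Invoice(address: Address)  extends Payment
    case class Card(details: CardDetails) extends Payment

    object PaymentDemo {
      def describe(p: Payment): String = p match {
        case Invoice(addr) => s"invoice to ${addr.street}"
        case Card(details) => s"card ${details.number}"
        // Adding `case object Cash extends Payment` above makes this match
        // non-exhaustive, and the compiler warns at every unhandled site.
      }
    }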


> Adding a third type of payment (Cash) is as simple as adding "| Cash" to the definition

This is the "expression problem" https://en.wikipedia.org/wiki/Expression_problem

In FP with ADTs, anyone can write new functions for a datatype. Yet, as you say, adding a new case requires modifying the definition and existing functions.

In OOP, anyone can add a new case, by defining a new subclass. Yet writing a new method for that class requires modifying the definition and existing subclasses.

One of these isn't always better than the other, so it's useful to have both.
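
A hedged Scala sketch of the two directions of the trade-off (the names are made up):

    // ADT style: the set of cases is closed, the set of functions is open.
    sealed trait Shape
    case class Circle(r: Double)    extends Shape
    case class Square(side: Double) extends Shape

    object ShapeFns {
      // Anyone can add a function like this without touching Shape...
      def area(s: Shape): Double = s match {
        case Circle(r)    => math.Pi * r * r
        case Square(side) => side * side
      }
      // ...but adding a Triangle case means revisiting every such match.
    }

    // OOP style: the set of cases is open, the set of methods is closed.
    trait ShapeO { def area: Double }
    class CircleO(r: Double)    extends ShapeO { def area = math.Pi * r * r }
    class SquareO(side: Double) extends ShapeO { def area = side * side }
    // Anyone can add `class TriangleO extends ShapeO` in another module,
    // but adding a new method (say, perimeter) means touching every subclass.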

> If you try to describe this properly in C# you'll be writing an abstract outer class and N concrete inner classes, plus boilerplate for comparisons and so on.

This overhead is due to emulating one approach using another. It would also take a bunch of boilerplate to implement subclassing with ADTs (e.g. a sum type to describe the methods, a smart constructor to implement the inheritance, etc.).

(Personally I prefer ADTs too, but they're not a magic bullet)


OCaml has ADTs, but also polymorphic variants[0], which are open and let you say "these alternatives or more", or "these alternatives or less": https://v1.realworldocaml.org/v1/en/html/variants.html#polym...

It's the main thing I've missed from OCaml when programming in Haskell (you can sort of do something like this with typeclasses, but I much prefer the OCaml solution).


OCaml also has open variants, where one can add cases to an existing sum type (essentially this is like exceptions but less special), and its objects are like a dual of polymorphic variants, where you get a record to which you may add fields. Neither of these features is as commonly used as polymorphic variants, however.


Haskell's typeclasses (or, equivalently, traits in Rust) help a lot with this problem, though. It's one of the biggest language features I wish I had access to in more languages.
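
A rough sketch of the mechanism in Scala, with implicits standing in for Haskell instances / Rust impls (all names are made up):

    case class Invoice(ref: String)
    case class Card(ref: String)

    // The operation is a typeclass, not a method baked into the data types.
    trait Pretty[A] { def pretty(a: A): String }

    object PrettyInstances {
      // Instances can be supplied after the fact, without editing Invoice or Card...
      implicit val prettyInvoice: Pretty[Invoice] =
        new Pretty[Invoice] { def pretty(i: Invoice) = s"invoice ${i.ref}" }
      implicit val prettyCard: Pretty[Card] =
        new Pretty[Card] { def pretty(c: Card) = s"card ${c.ref}" }

      // ...and generic code demands only the capability it needs.
      def show[A](a: A)(implicit p: Pretty[A]): String = p.pretty(a)
    }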


Haskell type classes are great for most situations, except when they fail it can be quite spectacular. I think the "magic" of type classes comes from two things:

- All instances are thrown into one global namespace, so we don't need to worry about where to find the right one.

- There can be at most one instance per type. Hence there's no ambiguity about which one to pick.

These make it easy for the machine to pick the correct instance automatically. Unfortunately, it also means that every time we import or upgrade a library, there's the chance that it will break things by exposing an instance that clashes with what we have.

The "backpack" stuff looks interesting in this regard, although it seems very over-engineered (mostly due to legacy concerns).


What is backpack stuff?



Oh thank goodness. Finally.


You are talking about expression problem: https://en.wikipedia.org/wiki/Expression_problem

"Extend program with new data types and new functions"

It is quite well studied and solved in various languages. And, in my opinion, the typical OO-language solution is not especially succinct.


> It is quite well studied and solved in various languages

I wouldn't say the expression problem is "solved"; various languages pick different tradeoffs, but the underlying problem is still there.

The best description I've seen of this is at https://www.tedinski.com/2018/02/27/the-expression-problem.h...


> Yet, as you say, adding a new case requires modifying the definition and existing functions.

The same is true in Java if you use instanceof. The thing is, just like enums, inductive data type definitions are not supposed to be extended by any module other than the one declaring them, and adding more cases should be considered an API change.


> The same is true in java if you use instanceof.

Java isn't a particularly pure example of OOP: it has `if`/`for`/`while`/etc. from structured programming, non-OO "primitive types", non-polymorphic `+`/`-`/`*`/etc. and so on. `instanceof` is another example of a non-OOP feature (the OOP approach would be an `isFoo` method, where each class can decide what to return).

> inductive data type definitions are not supposed to be extended by any module other than the one declaring them, and adding more should be considered an API change.

Exactly. Yet in OOP, anyone can add new subclasses whenever they like, without ever telling the original author, and instances of those subclasses can co-exist alongside any other subclasses and will (in theory) work fine with all the existing code written for the superclasses.

What if I altered your sentence to say the following:

"Functions acting on a datatype are not supposed to be extended by any module other than the one declaring that type, and adding more should be considered an API change."

This would seem crazy if we applied it to an FP language; yet that's exactly the case in OOP, where a fixed set of methods are implemented inside each class.


I'd argue that OOP has a place when you're doing modeling and simulation work. What you should never do is OOD - crafting your entire software architecture as a hierarchy of classes.


As a person who spent more than a couple of years writing modeling and simulation software and modeling various systems, I cannot agree at all.

In digital logic circuit simulation you will be forced to compute things transactionally - updating all relevant outputs at once - otherwise you will have interesting and subtle errors to debug.

For single clock domain designs it is best done using Haskell's data type like "data S a = S a (S a)", i.e., infinite lists where each element is a state of a "bus" at the beginning of the clock cycle. The register then can be defined as "register resetState inputs = S resetState inputs", addition as "addS (S x xs) (S y ys) = S (x+y) (addS xs ys)" and so on.

Then you can combine these "S-processors" in many interesting ways, throw QuickCheck at them with an LTL specification of behaviour (when the input satisfies these conditions, the output must satisfy those) and, after investing some more time, even get working Verilog source for the entire system.
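
(A rough Scala transcription of that sketch, using LazyList as the infinite stream; the feedback example at the end is my own addition:)

    object ClockedSim {
      // A signal is an infinite stream of per-cycle values.
      type Signal[A] = LazyList[A]

      // A register delays its input by one clock cycle, starting from the reset value.
      // The input is by-name so feedback loops don't force themselves while being built.
      def register[A](reset: A, input: => Signal[A]): Signal[A] = reset #:: input

      // Combinational addition, applied cycle by cycle.
      def addS(xs: Signal[Int], ys: Signal[Int]): Signal[Int] =
        xs.zip(ys).map { case (a, b) => a + b }

      // Example: a free-running counter, defined by feedback through the register.
      lazy val counter: Signal[Int] = register(0, counter.map(_ + 1))
    }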


OOP-based modules seem to only handle relatively simple "structures" well. If we consider these structures as "entities", then it's limited to roughly 5 entities and a few thousand instantiated objects before it starts turning into a mess. It has a scaling problem. And FP doesn't entirely seem to solve it either.

My feeling is that better database-to-app-language integration is needed. Databases better handle big-picture issues, but integrating them with code is a pain in most frameworks because app world and database world ended up too different for unknown reasons.

It seems there was an anti-database fad(s) starting in the mid-90's that made app languages want to pretend databases didn't exist, to be magically hidden behind an interface. We are paying the price for this head-in-the-sand thinking. We should find a way to embrace databases rather than try to wrap them away, because that failed.


> It seems there was an anti-database fad(s) starting in the mid-90's that made app languages want to pretend databases didn't exist, to be magically hidden behind an interface.

This assertion entirely misses the point.

In most applications which employ a db to handle persistence there is a need to map relational data to objects. Either you reinvent the wheel each time you need to move data to/from the persistence layer, by rolling your own ORM infrastructure, or you reuse a component that is far less bug-prone, more efficient, and easier to maintain and update. Framework developers recognized the need to free developers from wasting time writing bug-ridden and inefficient boilerplate code, and thus started offering their own ORM frameworks. Real-world developers who had felt the pain of rolling their own ORM components recognized the value of using those tools, whether in reduced bug count, increased performance, faster time to market, or lower maintenance needs.

Your comment sounds an awful lot like the comments from decades ago where random greybeards whined that these newfangled compilers and interpreters want to pretend machine code doesn't exist, and that hand-written assembly is the right tool for the job.


Re: or you reuse a component that is far less bug-prone, more efficient, and easier to maintain and update.

I'm still looking for that. Where is it? I've yet to find a good ORM. They end up being dark grey arcane boxes that require dedicated specialists to troubleshoot and tame. It appears objects and RDBMS just don't mix well: https://en.wikipedia.org/wiki/Object-relational_impedance_mi...

I suggest we look at using more relational concepts on the application side instead of spending so much grey matter and code translating and converting back and forth between paradigms.

Oracle Forms shows some possibilities. It requires about 1/5 the source code of OOP frameworks for similar CRUD applications. There are clunky aspects to Oracle Forms, I admit, but I believe the good stuff can be borrowed without also being forced to take the bad stuff. It's a starting point to app/DB integration exploration.


Which tracks with the lineage: Simula has a claim to being the first OOP language.


ADTs are common in statically typed FP languages (and others you mentioned like Rust and Kotlin). I have been working with Elixir a lot, and since it's dynamically typed the concept doesn't really apply.

Pattern matching is the concept that they all have in common, that I miss when it isn't available. You don't have the 'exhaustiveness' check in Elixir because it's not possible in a dynamic language (dialyzer aside), but it's still a very powerful tool.


I write Scheme code as if it had ADTs, e.g.

    (match payment
      [('invoice address     ) (foo address     )]
      [('card    card-details) (bar card-details)])
As you say, there's no exhaustiveness check; plus the language won't guarantee that the product types are adhered to. For example, someone might give us `('invoice card-details)` and the language wouldn't complain.

Racket's contract system can help with this; although it's very slow.


Use Scala -- first class support for objects and ADTs.


But not for that syntax unfortunately, unless I'm missing it? I use sealed traits and case classes to do the same - the union syntax would be much nicer.


Coming in Dotty/Scala 3.0, along with a slew of other improvements to the language.

Combine that with the proposed new enum syntax and the existing sealed trait plus case class/object boilerplate will be gone. Looks pretty great, but we'll have to wait until next year sometime to use it in production.
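
The proposed enum form looks roughly like this (Scala 3 syntax; Address and CardDetails are placeholders):

    case class Address(street: String)
    case class CardDetails(number: String)

    enum Payment:
      case Invoice(address: Address)
      case Card(details: CardDetails)
      case Cash

    def describe(p: Payment): String = p match
      case Payment.Invoice(a) => s"invoice to ${a.street}"
      case Payment.Card(c)    => s"card ${c.number}"
      case Payment.Cash       => "cash"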


In Java, you can use https://github.com/derive4j/derive4j to achieve this.


You seem to be assuming that modelling payments as sub-types is appropriate. It may not be, or not stay that way over time. Often one needs a buffet of independent or semi-independent features to be flexible. Hierarchy-centric ontologies, such as "types", are limiting.

Yes, I know types don't "have to" be hierarchical, but they are usually messy to manage as non-hierarchies.


AFAIK Kotlin only checks exhaustiveness when the "when" is used as an expression, e.g. when its result is assigned to a variable.


I don't disagree with your sentiment exactly, but it seems like an apples-to-oranges comparison. I would think adding cash would be as simple as something like: class Cash implements PaymentType { ..

And then the compiler will tell you all the stuff you have to do to implement a payment type. It might be a little or it might be a lot. Your ADT example also might be a little work or it might be a lot. I know some, or many, "frameworks" in the OOP world might have that abstracted out N degrees for some rare or imaginary problem where you have to implement the AbstractPaymentTypeFactoryFactory and all its constituents... to be fair, that would have the ability to solve that rare or imaginary other problem though.


Yes, obviously you can describe the same thing in any reasonable language; the difference is mainly how concise it is and how easy it is to get an overview of.

Traditional OO focused on extensibility over everything else. I think the idea of openness to extension is way oversold. A closed, easy-to-survey enumeration is much more common in my experience (after 25 years of OO...), and it's sadly hell to create in C# compared to any language with ADTs.


One important consideration I think that's missing here is how the computation over the data is processed. Immutability, a hallmark of FP, has a small performance hit on both computation and memory[0] but allows for computation to be divided more easily via referential transparency[1].

For applications that will scale horizontally, such as web apps, I think FP makes more sense, since computation over data can spread across multiple servers/processors. Whereas for applications that cannot scale horizontally but must be performant, such as game engines, OOP seems to make sense since it makes mutation easier to handle by hiding it with encapsulation.

I have a hypothesis that multiple core processors and cloud computing led to the current surge in popularity of FP.

0. https://en.wikipedia.org/wiki/Functional_programming#Efficie...

1. http://wiki.c2.com/?ReferentialTransparency


Imho this kind of thinking is wrong. I believe that in every large enough program, such as a web server or a game engine, there will be parts suited to FP and parts suited to OOP. A web server (of a large enough company) and a game engine are complex enough programs that they do not include one type of data flow, so why would we need to use only one type of programming to formalize that data flow?


Applications large enough will likely contain both paradigms, yes, but the processing constraints of the application will likely dictate the appropriate methodology to use dominantly.

For example, I always found it ironic that the JavaScript library Immutable.js uses OOP heavily, but I'm guessing this was for performance reasons.


Yes, every compiler/transpiler inevitably must address its underlying (virtual) machine's primitives.


> Immutability [...] allows for computation to be divided more easily via referential transparency.

Do you mean it allows computation to be parallelized more easily? Compared to what?

How does "referential transparency" affect the situation one way or the other?


This is a great point.


1. What you see in Java is not OOP. It's static, late-bound class/interface oriented programming. Mostly just a set of subtly bad ideas.

2. Real OOP and FP do not operate on the same level. Functional, logic and imperative programming are about language design. OOP is about system design. C++ and Java poisoned the concept. Listen to Alan Kay. Note that he openly admits that "objects" as they were implemented in Smalltalk are too small. Good objects are probably somewhere between a Ruby/JavaScript object and a microservice in size/complexity.

3. You can implement an OOP system in an FP language. Elixir/Erlang are a good example.

---

Side note: most of what Kay said about system design was validated many times over. If you think you know better than him simply because you use command line and functional programming, while he talks about graphical interfaces and OOP, you are missing the point, big time.


> Mostly just a set of subtly bad ideas.

This describes most programming languages. Actually, on average it's usually a bit worse. It's a set of nifty-on-the-surface ideas which are also subtly bad, which put together result in emergent really-bad.

Programming languages are like bands or sports teams. The constituent parts might not be the "best" field leading examples, but put together the whole gestalt can still be awesome. It can help if there's one part which is attention getting, but the whole thing has to "jell."

> You can implement an OOP system in an FP language. Elixir/Erlang are a good example.

Lisp as well.

The biggest FP/OOP idea I've seen in my career has to do with awareness around dataflow. If your app is actually about dataflow, then dataflow and explicitly representing the relationships is key. I've seen lots of bad OOP thrown at dataflow, where only a few of the relationships are explicit and you "just have to know" how information gets from one part to another.


Could you recommend any articles outlining what you’ve said?


That's all from my personal experience, including consulting.


Which category would you put Design Patterns under? I mostly do "functional" style programming but your comment makes me think maybe learning more about design patterns could improve my system designs.


If you're talking about GoF design patterns, then I think they are a waste of time. They are low-level and extremely class-oriented. This is exactly the area where FP wins over Java-like languages hands-down. And I'm saying this as primarily a Java/C# developer. Norvig has a great presentation about it:

http://www.norvig.com/design-patterns/

What I would recommend is learning about the Smalltalk approach (not necessarily the language, but how Smalltalk environments are designed and what you can do in them). Another thing I recommend looking into is Agent-Oriented programming.

Agents seem to be much closer to Kay's original vision than anything we call "object" today.

Here is a really cool read:

https://alumni.media.mit.edu/~mt/diss/prog-w-agents.pdf

It might sound academic, but there are tons of great design ideas there. I used some of them in HR applications, which is as "boring" and practical as programming gets.


Could you recommend any articles of Alan Kay outlining what you’ve said?



Thanks.


> Good objects are probably somewhere in between Ruby/JavaScript object

How is a Ruby/JavaScript object different from a Java one?


They are dynamic and late-bound, for starters. Ruby always supported the notion of methods as message handlers. JavaScript added some support for that in ES6.


Which seems to make for monkey patching and a more brittle programming model. Just because Java is not "true" OOP doesn't mean it's worse. As a matter of fact, it seems better than most golang code (a language which seems happy not to call itself OOP) lying around with no notion of proper structure or way to cleanly navigate the codebase.


> A well-crafted OO architecture makes changing the way things are put together easy.

I agree, and it's why I don't like OOP. Either you are thinking in terms of putting things together or tearing them apart. "Complex" means to intertwine or put together. OOP thus increases complexity, and a well-defined OOP system makes increasing complexity easier.

I, personally, prefer simple. Simple systems take more work up front, but are easier to read, reason about, extend, and maintain.


While a lot of people make objects for the sake of making objects, I think it is instructive to think why you want to make objects in OO. As another poster has said, what if I had structs and functions with multiple dispatch? Why would that not be better than OO?

People tend to think of OO as a way of encapsulating data and then providing access to it. I rather think of it as a way of encoding program state. Often you have multiple operations that you want to perform on the same program state. It is nice to group that code together. Indeed, we have a computer sciency word for that: cohesion.

In FP, too, we do exactly the same thing. Take a look at any modern FP program and you will see that it is usually broken down into modules that represent program state. The modules contain code that operates on that state. We do it for the same reason we do it in OO: it creates good cohesion.

Getting back to my original question: why not structs with multiple dispatch? Actually, I wouldn't mind that at all (what's not to like about multiple dispatch?). However, the problem I actually have is with the struct. It's the same problem I have with the way many people approach OO: they are thinking about the code from the perspective of how to encapsulate data, not about what functionality they want. They are thinking about raw data instead of program state. A struct is a fine place to store data, but it's not the first place you want to go when you are doing design.

Instead you want to build your functionality and identify where you have commonalities in state. Then you should start thinking, "Hey these are using similar state. Are they related? Would it be more cohesive if I grouped them together?" Of course, it would be kind of strange to do that all the time. Most experienced programmers have a pretty good idea what program state they need to work on before they start. They can design data structures that are likely to be needed ahead of time. However, the priority should be to adjust the data structures to match the needs of the program state and to create more cohesive programs -- not to provide rigid boundaries separating your code.

Of course, nothing I've said is much different than programming in FP, and I wish that were not a surprise. OO and FP are not really so different. It's pretty easy to make the exact same mistakes in FP as you can make in OO. I sometimes feel like FP is just not popular enough to have widespread bad practices (like writing entire applications using only free monads).


> People tend to think of OO as a way of encapsulating data and then providing access to it. I rather think of it as a way of encoding program state. Often you have multiple operations that you want to perform on the same program state. It is nice to group that code together.

That's very consistent with the original definition of OO, which hardly anyone uses much anymore. And Erlang and Elixir (and Ruby, if you follow Gary Bernhardt), both FP, do that quite well via the actor model. Where OO starts to fail is when you have to make a decision between two objects, e.g. for "knight attacks mage", is the "attack" function a member of knight or a member of mage?

Grouping functions by what state they touch is basically just namespacing, common between OO and FP.


This is only true if your object system is based on the idea of "methods belong to classes" as opposed to "methods belong to generic functions", as seen in Common Lisp for example.

If you define a generic function ATTACK with arguments ATTACKER and ATTACKED, you are free to create a method with specialization ((attacker knight) (attacked mage)) and write code that will be executed only when a knight attacks a mage.
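
A rough Scala approximation of the shape of that, dispatching on the runtime types of both arguments with a pattern match (unlike CLOS generic functions this match is closed, but it shows the behaviour living outside either class; names are made up):

    sealed trait Character
    case class Knight(hp: Int) extends Character
    case class Mage(hp: Int)   extends Character

    object Combat {
      // The behaviour is chosen by the types of *both* arguments,
      // and belongs to neither class.
      def attack(attacker: Character, attacked: Character): String =
        (attacker, attacked) match {
          case (_: Knight, _: Mage) => "knight charges the mage"
          case (_: Mage, _: Knight) => "mage blasts the knight"
          case _                    => "generic scuffle"
        }
    }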


> e.g. "knight attacks mage" is the "attack" function a member of knight or a member of mage?

For me, it's a member of the Fighter protocol/interface to which they both conform, and either might override the default implementation or leave it be.


Can't we just create a separate class, like a `Battle`? And then do `Battle.between(knight, mage)` where Battle would handle the negotiations between the two classes? Maybe producing a result without mutating the two entities we pass as arguments. I feel like it depends on what the business is. Is it tracking wizard adventures, fighter exp calculator or distributed battle log.


> Can't we just create a separate class, like a `Battle`? And then do `Battle.between(knight, mage)`

Exactly. And at the end of the day all you're really doing is defining a function. Why do you need a class at all? Is it just because the language you're using doesn't allow standalone functions? If you're defining a static function that has no state on a class, you don't need the class at all. It's just a function that you want.

battle(knight, mage)


The missing observation here is that `Battle` or `Collide` operate at a level of coordination. It is at this level that FP shines.

Simply adding an `addDamage` method to the above system exemplifies where OOP shines. That method works at a lower, domain level, where the semantics of adding damage are specific to the object to which damage is being added.

The point here is that _both_ paradigms should be employed.


In the spirit of separation I would envision knight and mage (nouns) to be objects containing data properties defining their characteristics against an interface. The action (verb), attack, is a function which makes decisions and has portability. Expressed as separated semantic components knight attacks mage would be a subject, action, predicate relationship and each piece is individually portable and mutable.


I think you missed the real question: should the verb be `attack` or `defend` (i.e. "defend from", or "attacked by")?

In class- or prototype-based OOP, we have to choose between `knight.attack(mage)`, with the functionality living inside `knight`; or `mage.defend(knight)` with the functionality living inside `mage`.

> The action (verb), attack, is a function which makes decisions and has portability.

OOP discourages standalone functions, so this function would either have to be in the knight, in the mage or (I would argue even worse) some sort of `GameState` object.


I guess it would depend upon the rules at play and various other unspoken variables, but generically speaking I don't see the difference between attack and defend in this limited scenario. One agent is interacting with another. The same instruction set would apply if it were knight attacking mage or mage attacking knight.

> OOP discourages standalone functions

Oh how I detest OOP. Portability of atomic components is amazing. So is scope nesting. It's like driving to work whenever you want in any of your cars (or your neighbor's), instead of checking out a car from a car model inherited by a sedan class inherited from a Toyota car class when such a thing becomes available.


I agree that neither way around seems inherently better than the other, so it would depend on the specifics of the project.

> The same instruction set would apply if it were knight attacking mage or mage attacking knight.

Not necessarily! For example, a mage might cast long-range magic, whilst a knight would only have melee attacks. Where we make this distinction can affect how we understand, maintain and extend the code.

Standard OOP practice would use an inheritance hierarchy to handle this difference, e.g. a `Character` class could have `RangedCharacter` and `MeleeCharacter` subclasses whose `attack` method handles the details of those sorts of attack; then `Knight` and `Mage` subclasses those, to specialise the parameters. The major problem with this is that we can't model multiple distinctions in this way; for example knights and mages might both be "noble", whilst trolls and imps are "evil". If we want `Knight` to inherit from both `MeleeCharacter` and `NobleCharacter` we'd need multiple inheritance, which is where things get tricky.

More recent trends favour composition rather than inheritance, e.g. passing in an `Attack` argument during construction, with `RangedAttack` and `MeleeAttack` subclasses. Separate objects can be passed in for the `Noble`/`Evil` distinction. Of course, these are just standalone functions with more boilerplate ;)

Mixins are somewhere inbetween, where we avoid some of the boilerplate chaperones ( https://steve-yegge.blogspot.com/2006/03/execution-in-kingdo... )

The CLOS approach, mentioned by others in these comments, lets us define verbs like `attack` outside of any particular class or mixin. We can then define it case-wise based on its arguments, like `attack(RangedCharacter, Character)` and `attack(MeleeCharacter, Character)`.
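
A hedged Scala sketch of the composition version (all names made up):

    // The "kind of attack" and the alignment are values passed in,
    // not positions in an inheritance hierarchy.
    sealed trait AttackStyle { def describe: String }
    case object Melee  extends AttackStyle { def describe = "swings a sword" }
    case object Ranged extends AttackStyle { def describe = "casts from afar" }

    sealed trait Alignment
    case object Noble extends Alignment
    case object Evil  extends Alignment

    // "Ranged + noble" and "melee + evil" combine freely, with no
    // multiple inheritance or diamond hierarchies needed.
    case class Character(name: String, style: AttackStyle, alignment: Alignment) {
      def attack: String = s"$name ${style.describe}"
    }

    object Roster {
      val knight = Character("knight", Melee, Noble)
      val mage   = Character("mage", Ranged, Noble)
      val imp    = Character("imp", Melee, Evil)
    }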


> Not necessarily! For example, a mage might cast long-range magic

For me that is just a difference in data properties. The event is the actual attack. A ranged attack would be the distance between the two parties. A melee attack would have a distance of 1. A self-inflicted attack would have a distance of 0. If you attempt a melee attack from a different room the attack action still happens, but the other party receives no damage and probably no awareness of the attack. The handling of the action and consideration of the data is still the same regardless of who commits the action and who is the attacked party.


Aren't "data" and "program state" mostly one and the same? Encapsulation can easily be seen to matter whenever one has several plausible representations of data/state that all are to some extent isomorphic to each other, and one wishes to operate on these representations in a way that can be easily verified to respect the isomorphism, and not to rely on irrelevant details.

These considerations are actually becoming more relevant to FP rather than less, especially as we're slowly gaining a broader understanding of the potential of newly-developed features such as homotopy types, in enabling one to write programs that can work directly with such isomorphisms, ensure that desirable encapsulation-like properties hold, and even convert code to work "across" any arbitrary change in internal representation.


> Instead you want to build your functionality and identify where you have commonalities in state. Then you should start thinking, "Hey these are using similar state. Are they related? Would it be more cohesive if I grouped them together?"

Unfortunately, in most OOP languages this isn't an option. Free functions aren't a thing, for the most part.

An object is the smallest unit, so if you need a single function you end up creating an object to hold it. Which results in an explosion of FooHelper, FooUtilities etc

This objects-as-the-smallest-unit approach can also end up in an explosion of regular classes, all only slightly different.

It's madness.


Static methods are a thing - all they need is a "module"-like unit to hold them. But no, that's not the "smallest unit" in any real sense.


In a language like C# you still need to put those static methods in a class.

Even extension methods need to go in a class (albeit a static class).

It would be nice if you could define functions that lived in a namespace.


Just treat a static class with static-only members as if it were a namespace.

C# now even has support for omitting the need to type the class name because you can import it as if it were a namespace (`using static`). Seems like a trivial complaint.


> A well-crafted OO architecture...

I've been programming for over 20 years and I have yet to encounter such a thing.


I know, right!

They have always ended up devolving into a complete shit show.


To be fair, entropy increases in any software project, whether or not it was designed under an OOP or FP paradigm. Quite easy to write spaghetti code and get into callback hell scenarios with FP.


I have learned to manage callback hell in a way that results in fairly flat code that is easy to follow regardless of the callback depth. It still turns to hell, though, when multiple decisions (more than 2) determine the callback path.

For example: am I going to minify or beautify code, am I reading from a file, stdin, or a directory, and am I going to output the actual processed code, a report, or something to stdout? If I were doing a comparison then I'd have to make those same decisions on two separate sets of data and consider additional decisions.

In this horrid example it's decision hell plus callback hell, and the result is a challenge in flow control.


> I, personally, prefer simple. Simple systems takes more work up front, but are easier to read, reason about, extend, and maintain.

I wish I could make people at work realise this.

It's all about doing what is easiest now, rather than putting the work in up front to make sure it's easy later.

All our codebases devolve into a tire fire as every change is made in the way that it is easiest at that moment.

I think they assume this is the cheapest way to run, by just kicking the maintainability ball down the road continually. But that ends up building up drag on all of our code that eventually makes even small changes complicated, slow and error-prone.


In my mind the best way to make people realize this is to reward refactoring.

On the contrary, many organizations in my personal experience view it as a waste and a risk. This is often because they are only rewarding enhancement/defect user stories, testing the wrong things (if they are testing at all), and placing too much emphasis on weak diff tools as a substitute for an actual code review.

When you are only rewarding user story completion, those who complete the most stories (never mind how) are the most rewarded. If your team contains insecure people it means achievement by code style; otherwise it means doing whatever is fastest. If you are locked into stories by sprint cycle it means lots of time to stare out the window.


I'll take structs and multiple dispatch [0] over both of those any day. Raw FP is too primitive and OOP mixes too many concerns; multiple dispatch strikes a nice compromise between the two and offers additional flexibility.

While Common Lisp's CLOS is often referred to as OOP, it's definitely less coupled than single dispatch.

[0] https://gitlab.com/sifoo/snigl#functions


Right. Multiple dispatch (the version of OOP that is most suitable for FP) is far superior to classic OOP.


I can't imagine what makes FP too primitive. It is essentially abstract. FP's degree of being primitive depends on the coarseness of its combinators.

What I find limiting is easy access to efficient, immutable data structures. Ocaml leaves you with the option to switch to mutable data structures at the cost of safety, and Haskell provides things like the ST monad for handling mutable updates safely at the cost of type-level complexity.


The post drew a line between OOP and FP; for that line to mean anything, polymorphism would have to go with OOP. Which is what I meant by raw FP.

And solving the same kind of problems without polymorphism (be that multiple dispatch, type classes or otherwise), is a struggle.


Polymorphism doesn't require anything to do with OOP or objects themselves; see Clojure multimethods as an extremely pure example that involves nothing even close to an object.

Protocols or typeclasses are also ways to implement functionality over (usually) immutable data. I don't see the relation to a mutable object with a closed set of methods defined at compile time.


I didn't make the distinction, the post sort of did between the lines.

Go back to my original comment and you'll see that multiple dispatch is what got us here.

I never said anything about mutable/immutable data, I'm talking about polymorphism. But I did mention type classes as an alternative.

I frankly fail to see how this is leading anywhere.


Please do correct me if I am wrong, but isn't multiple dispatch basically typeclasses + pattern matching?


But that would still be single dispatch, right?

Multiple dispatch means functions are generic, and all arguments are considered when deciding which implementation to call.

Snigl supports both multiple dispatch and pattern matching, as does Common Lisp.


Haskell type classes by default pick the implementation based on a single type variable in its signature, but it can occur in multiple positions (including the return type). A commonly used language extension (multi parameter type classes) extends this to an arbitrary number of type variables.
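
A rough Scala analogue of a multi-parameter class, where the chosen instance depends on both types at once (the names are made up):

    case class Knight(hp: Int)
    case class Mage(hp: Int)

    // One "class" over two type parameters: which instance applies
    // is determined by A and B together.
    trait Attacks[A, B] { def attack(a: A, b: B): String }

    object AttackInstances {
      implicit val knightVsMage: Attacks[Knight, Mage] =
        new Attacks[Knight, Mage] { def attack(k: Knight, m: Mage) = "charge" }
      implicit val mageVsKnight: Attacks[Mage, Knight] =
        new Attacks[Mage, Knight] { def attack(m: Mage, k: Knight) = "fireball" }

      // Resolution happens at compile time, from the static types of both arguments.
      def fight[A, B](a: A, b: B)(implicit ev: Attacks[A, B]): String =
        ev.attack(a, b)
    }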


As far as I know, all of the arguments are considered when deciding which implementation to call on typeclasses.


A tangent here, but a place where object-orientation works better than anything else is in simulations of the real world. Thus, nothing to do with databases, and in fact it is referred to as object oriented modelling (OOM) rather than OOP.

The way it works is that, let's say you are modelling a pump. The pump is made up of a motor, a shell, the fluid with its own conservation equations etc. In such cases, inheritance and aggregation are godsends that allow us to quickly compose real-life objects as aggregates of easily modified and understood sub-components. Also, once you've taken these sub components to a point where you trust them, there is no further need to spend any time thinking about them. They are there, and you trust them, and you can reuse them and combine them with confidence.

More than anything, it gives us a great way to think about the modelling problem. The success of Modelica speaks to the superiority of this paradigm over things that came before it.


Eh. Most processes in the physical world are reciprocal and most relationships bidirectional. OOP wants you to add a method to one class. So you're either figuring out which billiard ball gets the 'collide' method call or you end up modelling very abstract concepts like CollisionPolicy and the "real world modelling" starts looking more and more like functional abstractions OnlyWithMoreNouns.


If you are giving your ball a collide method, or creating a CollisionPolicy, then you are probably doing it wrong. Therein lies a huge part of the problem of OO design: modelling is difficult, and lots of us get it wrong.

Just because you use FP or some other paradigm does not automatically mean you will get it right. I have seen poorly modelled solutions written in many different languages. I have also seen elegant solutions implemented in a variety of forms as well.

I think that having some depth of experience in a language, as well as a solid grasp of the problem, and perhaps some anal retentive tendencies (i.e. an innate desire to keep things very structured and consistent), can go a long way to solving problems elegantly :-)


Yeah, as with everything, I guess languages need balance. In the case of Modelica (and others inspired by it), we can have functions that are not part of any class at all. So, you could just have a function called evalCollision that accepts two instances of the class BilliardBall and calculate the net result.

My main point, though, was that these new languages are way better than Simulink etc. that went before them.


What you are describing is actually much easier to model with abstractions like multimethods (lisp) or type classes (Scala or Haskell). Traditional OOP is fundamentally hierarchical, but the world isn't. This produces the soup of AbstractFactories and design patterns and also is a cause of the expression problem.

Multimethods allow dispatch on more than one argument (versus one ('this') for traditional OOP). Typeclasses decouple behavior from type hierarchies like Java interfaces do, except that they are extensible.

OOP is orthogonal to FP. That said, I'm increasingly of the opinion that it brings little to the table when a language already has typeclasses. I primarily use Scala, which supports typeclasses through traits and implicits. Just about every time I've modelled something in a traditional OOP way with abstract classes and such, it winds up becoming brittle and I refactor the behavior into typeclasses. I got burned by this enough times that I literally never use OOP hierarchies in my code, except to model algebraic data types.

Note that typeclasses and multimethods aren't fundamentally specific to FP, but they are mainly found in FP languages.


I found this trying to solve Advent of Code (https://adventofcode.com/) - once a problem described people doing things, I'd actually model out the people and the stuff they would do. Elves fighting? Awesome, let's abstract that out. They all get stats and attack functions and make decisions. Elves in an assembly line? Cool, make each one take care of their own work.

So much easier conceptually than trying to use the math to abstract it all down.


Yep, did spacecraft simulation work years ago using Ada95, which was nice in that Ada's OOP support was largely orthogonal to the rest of the language (you could use OOP only where it made sense), unlike Java...


I think this is key. I believe OOP came out of modeling simulations, and it seems well suited for that. It starts to go wrong when you use it as a general purpose paradigm.


Usually, I follow a general rule of thumb:

* For domain language (entities, services, value objects) use OOP as it is better at communicating intent

* For data processing and low-level algorithms (usually happens inside OOP methods) use FP style as it is less error-prone.


I write mostly Ruby these days, and this is my approach too. I usually write the pure functions as class methods. It would be great if it was possible to declare a function "pure" and have checks that it only uses pure functions itself - although this would be impossible to guarantee with Ruby, given its extremely dynamic nature.


I found an article which has similar rules:

* Use FP when you need to build something purely computational.

* Use OOP when you want/need to build something generic for other people to use.

* But if you are application or game developer, ECS way of thinking brings a lot of benefit to the table.

https://hackernoon.com/when-not-to-use-ecs-48362c71cf47


I have to disagree here. It's much easier to use data driven APIs than OO ones. With the former you get what you see and there are no additional rules. You call the API, it produces some result that you can use in any way you like. Meanwhile, objects have behaviors and internal state that you have to understand because each object is its own ad hoc DSL with its own rules and behaviors.


I would read Domain Modeling Made Functional.

I wouldn't go back to domain modelling in an OOP language after that.


What do others think of "SQL as a very functional language?"

The insight that businesses decouple data from procedures is right. But SQL has poor support for functional hallmarks such as recursion, functions as first-class values, etc. SQL is not functional but declarative: it's less Haskell and more Prolog.


> What do others think of "SQL as a very functional language?"

I think the author is using "functional" in a different sense from its usual use nowadays. Basically I think he just means that with SQL, you focus on the operations you're performing (i.e., functions), and the functions are only loosely coupled to the actual data, since you can write many different functions (SQL statements) on the same database, and the same SQL statement can be applied to many different databases with different underlying storage architectures. But of course, as you note, SQL being "functional" in this sense does not mean it supports more recent features associated with functional programming.


The idea of functional programming is much older than this article. I learned the principles of functional programming in the 90s at San Francisco State University, on a language actually called "FP"[1]. Similarly, SQL has been known as a logic/declarative language since its start, and these two paradigms have been known to be different for a long time. I don't think you can get away from a situation where the author is abusing the concept and terminology.

Edit: the argument seems especially galling given that logic languages seldom get credit as "industrial strength", and functional programming has a huge following of "language geeks", when a logic programming language is what works with a broad swath of regular languages.

[1] Also, John Hughes' manifesto, Why Functional Programming Matters, published in 1990. https://www.cs.kent.ac.uk/people/staff/dat/miranda/whyfp90.p...


>I think the author is using "functional" in a different sense from its usual use nowadays. Basically I think he just means that with SQL, you focus on the operations you're performing (i.e., functions), and the functions are only loosely coupled to the actual data, since you can write many different functions (SQL statements) on the same database, and the same SQL statement can be applied to many different databases with different underlying storage architectures

That's called "declarative" -- not "functional".


Huh? To me, "declarative" means focusing on just stating what you want the result to look like, not what operations need to be performed to obtain the result. I agree that SQL could be considered declarative as well as functional, since an SQL statement has aspects of both. But I don't see how "focus on the operations" is declarative.


>To me, "declarative" means focusing on just stating what you want the result to look like, not what operations need to be performed to obtain the result

And that's exactly what SQL does.

You specify what the result should be like - "I want the result to have those two sets' common elements, but only items where this column is larger than 10, and I want it grouped by these 2 columns and ordered by this other column" - and not how to get to it (e.g. open these files, read those indexes, load a hash table with matches, iterate over them to find those where column > 10, etc).

GROUP BY x and WHERE X > 10, etc. are not "the operations performed to obtain the result"; they are high-level descriptions of the result we want.

In fact in CS courses, SQL was one of the canonical examples of declarative programming (despite some nits to the premise, e.g. the ability to specify use of indexes etc).


> And that's exactly what SQL does.

So an SQL UPDATE statement is purely declarative? "Update" sure looks like an operation to me. Similar remarks would apply to DELETE or INSERT.

For queries, I can see your point of view, yes; but even for queries, SELECT, GROUP BY, etc. can be viewed as operations just as well as descriptions of the result set. (WHERE, I agree, doesn't really look like an operation.)

> GROUP BY x and WHERE X > 10, etc. are not "the operations performed to obtain the result", are high level descriptions of the result we want.

Again, I think this is a matter of point of view. GROUP BY isn't a low-level operation, sure; but that doesn't mean it's not an operation, or can't be viewed as one. High-level operations are still operations.

> in CS courses, SQL was one of the canonical examples of declarative programming

I'm not trying to dispute this. I'm saying that given how the author of the article is using the term "functional", it seems to me that SQL could be considered "functional". But I've already noted that the way the author of the article is using the term "functional" is probably not the way that term is commonly used.


>So an SQL UPDATE statement is purely declarative? "Update" sure looks like an operation to me

The Update statement tells "set this column to this value". It's totally a declaration of intent.

It doesn't specify anything about how the update is going to happen behind the scenes (open this file, use this tree structure, traverse, find the nth entry, write this value to the disk, and so on).

(Databases also support PL/SQL which is more imperative in nature, but here we're talking about SQL the query language).

>but that doesn't mean it's not an operation, or can't be viewed as one. High-level operations are still operations.

Operation in the context of declarative vs non declarative is not about high vs low, it is about whether the language has the user explicitly define control flow.

In the "group by" the user just tells to the DB that they want the results grouped, they don't specify any control flow, or tell how that grouping will happen.


> The Update statement tells "set this column to this value". It's totally a declaration of intent.

"Set this column to this value" sure looks like an operation to me.

> It doesn't specify anything about how the update is going to happen behind the scenes

Neither does the expression y = f(x). But that expression is the canonical example of a function, not a declarative statement.

> Operation in the context of declarative vs non declarative

Is whatever you are defining it to be, I get that. But I don't think the author of the article is using words the same way you are.


I understand and agree with your comments about SQL the language, but would you consider an application consisting of stored procedures to be a functional program? That is, would you consider the result of a modularized SQL _system_ to be a functional system?

I find the FP vs OOP to be more of a discussion about "approach" more than anything else, where the language is more of an implementation detail. When understood like this, I don't think it's unreasonable to call SQL functional.


In my opinion, SQL is a good mixture of DSL/Declarative & Functional paradigms. It enables programmers to specify operations to perform without worrying about the 'How'.


I completely lost him at that point. Like you said, SQL is more like Prolog. To boot, it's a language designed to mutate data using a sequence of commands. Not very functional in my mind. I think the whole article suffers from a poorly defined notion of what functional exactly means and what OOP exactly means. The next step would be to stop thinking in terms of OOP and functional, because most industry languages tend to blend elements of both now, so it is better to break it down into the features you find useful and analyze them one by one: can functions be values, is there immutability support, pattern matching, what types of polymorphism, generic data, etc.


Yeah, I'm learning Datomic's Datalog and I can see how it solves a number of things in my SQL solutions that I didn't even realise were problems.


I agree with your assertion and I think that SQL is a fantastic language, regardless of which implementation we are dealing with.


I have recently come to like the following distinction:

- Let data be relatively static, immutable, objects (just as the post suggests)

- For functions which are merely read-only views of this data, implement them as methods on the class (An example would be "Volume()" on a data object representing a geometric object)

- For functions that change the original data on data objects, implement them as separate functions (they can be encapsulated in asynchronous black-box processes ala flow-based programming if you like), returning new, immutable instances of the objects they operate on.

This allows keeping data objects immutable, while still allowing you to gather certain frequently used calculations on the object itself, for code readability / maintainability, as long as they are read-only.
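
A small Scala sketch of that split (Box and its fields are made up):

    // Immutable data object: read-only derived views live on the class...
    case class Box(w: Double, h: Double, d: Double) {
      def volume: Double = w * h * d   // pure, read-only "view" of the data
    }

    // ...while anything that "changes" it is a separate function returning
    // a new instance, leaving the original untouched.
    object BoxOps {
      def scale(b: Box, factor: Double): Box =
        Box(b.w * factor, b.h * factor, b.d * factor)
    }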


The problem arises when you try to couple an OO language with a FP design (or vice versa). You end up leaving large portions of the language on the floor, and are constantly "inventing" ways to solve problems that you'd get for free if you just did things the way the language was built to do them.

The reality is we should be much more flexible with what languages we choose to solve problems, if we want the flexibility to best match our problems to the right design paradigm.

Failing that, it's better to go with one "kind" of programming, the one that fits the language. At that point, we're hunting consistency, not the platonic ideal of the solution.

Unrelated, but it seems like people here were sold OO as magical or somehow "easy". It's damn hard to get an OO design right, and if it's not working for you, it's probably you and not OO. You probably missed something, which is not only reasonable, but expected. That's where the experience, skill, and expertise come into play.


> The central tenet of FP is that data is only loosely coupled to functions. You can write different operations on the same data structure, and the central model for abstraction is the function, not the data structure.

Is this accurate? I would see it more as FP separates the two abstractions that OOP (for better or for worse) would have us keep together.


I think about using OOP only in terms of the structure of my data. Is my data more easily represented using actual pointer loops or is it a tree structure?

With immutable objects you can't easily create longer loops in data (self-reference is okay though).

If my data is a tree of some sort, I always go for FP.


I think OOP is better suited when state needs to be maintained in memory over time. User interface development is one obvious use case and I think OOP really gained traction during a period when thick clients were the norm.

However, when thick clients gave way to web apps (ignoring SPAs for the moment) the industry still tried to pound the square OO peg into the round backend hole. Most of that backend code is just transforming data between a database and either a browser in the case of a web app or another client in the case of web services, applying business rules along the way. I think this class of software is much better served by the FP model.


FP works much better for UIs in my experience. Take a look at Reagent as an example http://reagent-project.github.io/

With Reagent, you have reactive atoms containing your data and components subscribe to them. Whenever the state of the atom changes, the UI is automatically repainted. You basically end up with a classic MVC pattern where UI components dispatch events that update the state, and observe the changes via subscriptions.

The advantage over OO is that the state is easily decoupled from the UI, and you can observe the entire state of the UI via data. Since the logic lives outside the UI components, you can also do testing at event level which tends to be a lot more stable.


I love how Reagent works and I do think it makes interactive applications simpler and easier to reason about.

That said, I think FP still has one disadvantage, which is performance. This is because the underlying computer is 100% imperative and there is still a lot of work needed to make FP patterns efficient.

FP works well for interactive UIs because the number of objects that change in a GUI is very small, so performance is not an issue. But take a video game, for example: there you have thousands of moving objects updated at 60 FPS. At that scale you notice it's really hard to write the logic as a set of immutable state changes, object allocation becomes a bottleneck, and then you discover that OO is more suitable, at least for now. Maybe later there will be ways to efficiently translate FP patterns into imperative CPU models.


That's a bit of a misconception about FP though. Immutability is just the default, and you can opt into local mutability when you need to. Clojure transients are a good example of this. So, in a case of a game, I would create local mutable sections for the tight loop.

It's also worth noting that modern hardware hasn't been using imperative CPU models for a long time, and it provides an emulation layer that languages like C target https://queue.acm.org/detail.cfm?id=3212479
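
A minimal Scala sketch of that idea; the mutation stays local to the function, so the interface remains pure:

    object LocalMutation {
      // Externally pure: same input always gives the same output,
      // and no mutable state escapes the function.
      def runningTotals(xs: Vector[Int]): Vector[Int] = {
        val buf = Vector.newBuilder[Int]  // mutable builder, used only in here
        var acc = 0
        for (x <- xs) {
          acc += x
          buf += acc
        }
        buf.result()                      // immutable result handed back
      }
    }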


That's interesting. Admittedly, I've not done UI work for many years and I've never used FP for UI development. But what you described is really the way we used to design things with OOP. You'd have classes which represented physical views and would register for events on an event bus. When the events arrived the views would update themselves accordingly.


The biggest difference is in how the state is managed. With OOP, each class is typically its own state machine with its internal behavior. With the FP approach, you can treat your state like a database, and you can easily inspect the overall state of the application. There is some nice tooling for doing these kinds of inspections in re-frame nowadays https://github.com/Day8/re-frame-10x


> And yet, we embrace objects in our applications. Is this just fashion?

I think that's 90% of it, actually. In no case in my life have I seen a company starting a new project say "We're going to examine the available {programming languages, databases, operating systems, ...} and pick the one which is most appropriate to our problem". It was definitely never even considered after more than 3 lines of code had been written.

Even upgrading from one version of a language to a newer version of that same language is, usually, like pulling teeth.

Thinking back over the past several companies I've worked with, reasons for picking a programming language included:

- $(first_dev) wanted to learn a new language

- $(first_dev) didn't want to learn a new language

- Company policy says we must use $(lang) (even though no other software runs on the same platform, or has anything at all in common with this new project)

- $(founder) used to work for $(compiler_company)

- We googled for a library to do $(task) and first one we found was for $(lang)

I don't know what to call this, but it isn't engineering. My only problem with calling it fashion is that I'm afraid it would seem disrespectful to the fashion industry.


If you take pure functions and the "side effect free" mantra and apply them to OOP, you get a pretty good mix. For example, instead of saving a file to the file-system, you save the file to a file-system. A class/prototype can only mutate its own object. If possible, a function should only use its parameters and internal variables, e.g. not access global/module scope.
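A rough sketch of what I mean, in Rust (names are made up): the file system is a parameter, so the logic never reaches out to global state.

  use std::collections::HashMap;

  // "Save to *a* file system": the effectful dependency is passed in.
  trait FileSystem {
      fn write(&mut self, path: &str, contents: &str);
  }

  struct InMemoryFs {
      files: HashMap<String, String>,
  }

  impl FileSystem for InMemoryFs {
      fn write(&mut self, path: &str, contents: &str) {
          self.files.insert(path.to_string(), contents.to_string());
      }
  }

  // Uses only its parameters; swapping in a real disk-backed
  // implementation would not change this function at all.
  fn save_report(fs: &mut dyn FileSystem, report: &str) {
      fs.write("/reports/today.txt", report);
  }

  fn main() {
      let mut fs = InMemoryFs { files: HashMap::new() };
      save_report(&mut fs, "all good");
      assert_eq!(fs.files["/reports/today.txt"], "all good");
  }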


I disagree with his conclusion which equates the database to functional programming. Relational algebra is a third paradigm on the same level as, and different from, OOP and FP. I remember seeing an article pointing out that the database is the "mutable global variable in the middle of your functional program". Also both functional programming and object oriented programming make much more use of hierarchical organization and implementation hiding than do relational databases.


This is exactly where typical pure lambda-calculus based programming falls short. Lambda calculus, afaik, doesn't have a concept of IO, so it has to be bolted onto programming languages as an afterthought.

I believe a more powerful formalism has to be used, for example the pi calculus or Petri nets. In both you can represent the 'global variable database' as a process sitting somewhere, and you communicate with it using channels or message passing in general. Lambda calculus is too constraining, because everything is a deterministic function and it can't model exactly this scenario. Another use case hard to model in lambda calculus is time, e.g. run some function after N seconds, or a plain 'sleep' function.

When you now look at programming languages in terms of, let's say, Petri nets, you see clearly how OOP (Alan Kay's message passing) is the part where you send messages between processes, and FP is the contents of a particular process.
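As a toy illustration of the 'database as a process' idea (a Rust sketch with invented message types, nowhere near a full pi-calculus or Petri-net model): the database owns its state and is reachable only through messages sent over a channel.

  use std::collections::HashMap;
  use std::sync::mpsc;
  use std::thread;

  // Messages the "database" process understands.
  enum DbMsg {
      Put(String, i64),
      Get(String, mpsc::Sender<Option<i64>>),
  }

  fn main() {
      let (tx, rx) = mpsc::channel::<DbMsg>();

      // The database is just a process owning mutable state.
      thread::spawn(move || {
          let mut store: HashMap<String, i64> = HashMap::new();
          for msg in rx {
              match msg {
                  DbMsg::Put(k, v) => {
                      store.insert(k, v);
                  }
                  DbMsg::Get(k, reply) => {
                      reply.send(store.get(&k).copied()).unwrap();
                  }
              }
          }
      });

      tx.send(DbMsg::Put("answer".into(), 42)).unwrap();
      let (reply_tx, reply_rx) = mpsc::channel();
      tx.send(DbMsg::Get("answer".into(), reply_tx)).unwrap();
      assert_eq!(reply_rx.recv().unwrap(), Some(42));
  }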

Other than that, there are two types of OOP I've noticed: 1. Alan Kay's message passing, which originated with people trying to implement simulations, and 2. OOP as modules (Java, C++), aka 'Dog implements Animal' OOP. In the latter paradigm people try to come up with ontologies, i.e. hierarchical models of data dependencies. Instead of using actual modules, people use classes for some reason. Then you see module-like features implemented on classes, e.g. private and public modifiers on methods.

'Dog implements Animal' should be modeled as 'Module Dog depends on module Animal'


Relational algebra is essentially the same thing as logic programming, which is a separate paradigm. However, typeclasses get you a lot closer to the way normalized databases structure data than OOP does. Not that typeclasses are specifically FP, but they are currently only found in FP languages.


The most useful part of OOP when building large systems, is its inherent support for first-class modules. This is the one feature I miss when working in most FP languages.


Check out the module system of OCaml/ReasonML


Yes, it's very good, but still somewhat stratified. Check out the 1ML project for something even more unified: https://people.mpi-sws.org/~rossberg/1ml/


I find most discussions of this nature to be completely missing the point. Semantic arguments about the specific syntax required to accomplish some goal, or comparisons of which approach works better for a _single_ given use case, are silly. We get it, the expression problem exists. The fact of the matter is that FP and OOP work on different levels. Higher levels favor FP and lower levels favor OOP. That is, coordination is best implemented via FP and enforcing rules is best implemented as OOP. Arbitrarily choosing a single approach ensures poor system design.

For example, as many have pointed out, methods like "collide" or "attack" don't really make sense on a single object. That is because such a method works at the level of coordination, and therefore _should_ be functional. A method like "addDamage", by contrast, works at the domain level, where it enforces rules/state changes.
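A tiny sketch of that split (illustrative Rust, invented names): "collide" is a free function doing coordination, while "add_damage" is a method enforcing a domain rule.

  struct Ship {
      hull: i32,
  }

  impl Ship {
      // Domain-level rule: hull never drops below zero.
      fn add_damage(&mut self, amount: i32) {
          self.hull = (self.hull - amount).max(0);
      }
  }

  // Coordination-level logic: it doesn't belong to either ship.
  fn collide(a: &mut Ship, b: &mut Ship) {
      a.add_damage(10);
      b.add_damage(10);
  }

  fn main() {
      let mut a = Ship { hull: 5 };
      let mut b = Ship { hull: 100 };
      collide(&mut a, &mut b);
      assert_eq!((a.hull, b.hull), (0, 90));
  }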

We can easily apply the above observation to system architecture and conclude that the outer-most layers of an application (services) are best-implemented using a FP approach and the inner-most layers (domain) are best-implemented using OOP.

It does not surprise me that most applications adopt FP. Because most people think procedurally and start applications with a top-down mindset, the outer-most layers (use cases) tend to be modeled first where a FP approach makes sense. This then just carries through the rest of the application.


> It does not surprise me that most applications adopt FP. Because most people think procedurally and start applications with a top-down mindset, the outer-most layers (use cases) tend to be modeled first where a FP approach makes sense. This then just carries through the rest of the application.

Did you mean "OOP" instead of "FP" here? First, I think more applications adopt OOP than adopt FP. Second, I find it hard to go from "most people think procedurally" to "therefore they wind up at FP".


I did not. Simply using objects does not mean one has adopted OOP. OOP means modeling behavior _with_ data, not _around_ data. It is something that must be done intentionally, not a product of your choice of language (although language can certainly play a factor).

I would suggest that most systems adopt a procedural style where objects are more or less bags of data being passed around to functions/methods. This generally fits the mental model of how people think. That is, in terms of input -> output. OOP blurs that simplicity because each unit of behavior (method) also has a surrounding context (properties) that may be affected as well.

At the end of the day the difference between Object.DoSomething( data ) and DoSomething( object, data ) is just semantics (look at how Python implements classes, for example).
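For instance, in Rust the two spellings are literally the same call (a small sketch with invented names; Python's explicit `self` makes the same point):

  struct Counter {
      value: i32,
  }

  impl Counter {
      fn add(&mut self, amount: i32) {
          self.value += amount;
      }
  }

  fn main() {
      let mut c = Counter { value: 0 };
      c.add(5);                // Object.DoSomething( data )
      Counter::add(&mut c, 5); // DoSomething( object, data )
      assert_eq!(c.value, 10);
  }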


This is a good point. Strictly adhering to just one style, either OO or FP, seems very limiting. Instead, this blog suggests guidelines. My favorite line from the post:

"Those that are unlikely to change but are subject to being operated upon by a changing cast of entities should be written in a more functional style, while those things that change relatively often can be written in an OO style"


That's the expression problem. But you can have the best of both worlds with various approaches. One nice one is called tagless final encodings in FP and object algebras in OOP.

In Haskell these are called mtl-style classes and are quite common. Here is a blog post that describes how to split applications into an imperative, OOP, and functional layer. The OOP layer uses mtl-style classes: https://www.parsonsmatt.org/2018/03/22/three_layer_haskell_c...
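For readers who don't do Haskell, here's a rough analogue of the object-algebra / tagless-final idea sketched in Rust (invented names, not taken from the linked post): the "language" is a trait, each interpreter is an implementation, and new interpreters can be added without touching existing terms.

  trait ExprAlg {
      type Repr;
      fn lit(&self, n: i64) -> Self::Repr;
      fn add(&self, a: Self::Repr, b: Self::Repr) -> Self::Repr;
  }

  // One interpreter: evaluate to a number.
  struct Eval;
  impl ExprAlg for Eval {
      type Repr = i64;
      fn lit(&self, n: i64) -> i64 { n }
      fn add(&self, a: i64, b: i64) -> i64 { a + b }
  }

  // Another interpreter: pretty-print, added without changing anything above.
  struct Show;
  impl ExprAlg for Show {
      type Repr = String;
      fn lit(&self, n: i64) -> String { n.to_string() }
      fn add(&self, a: String, b: String) -> String { format!("({} + {})", a, b) }
  }

  // Terms are written against the algebra, not a concrete type.
  fn example<A: ExprAlg>(alg: &A) -> A::Repr {
      alg.add(alg.lit(1), alg.lit(2))
  }

  fn main() {
      assert_eq!(example(&Eval), 3);
      assert_eq!(example(&Show), "(1 + 2)");
  }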


I can understand that line of thinking, but I don't know that I agree with it. At least not fully. The biggest problem with OOP from a data perspective is that it schleps state around and that state can change in goofy ways, especially when you introduce threads. I think this is where immutable functional programming really shines. I do realize there's a lot more to consider, though, when architecting a particular solution.


Totally agree; I would introduce immutability in OO to address concurrency issues. In short, picking the best of both worlds.

Where the conflicts arise is when we have stateless functions. It's considered bad practice from an OO perspective to have stateless (or static) functions. I have yet to find a good solution to this problem.


Objects shine when they encapsulate complex state in ways that prevent illegal states from arising. The trouble starts once you try to add complex transformation processes (e.g. business processes) to objects. When those processes work on complex graphs of objects, you can't really put them into any single object, as none of them is the obvious home.

What I found to work best is to build an OO data model and a (not necessarily functional) business logic layer on top. In pure OO languages like Java and C# this layer takes on singleton characteristics, which is a clear sign that this code should be simply procedural. The individual functions are often not very interdependent and easy to work on and test.


I enjoy using a mix of the two. I find that some things are better expressed in OOP over FP, and will regularly mix them in the same code base. It lends itself to very nice code that doesn't force you to do anything but make it easier for someone who has never seen it to fix it.


The scenario where OO shines is game programming. Each object encapsulates a game entity and it basically runs like a simulator, allowing for complex interactions between objects. But software in general is not like a game and OO doesn't help as much.


That almost never happens. Partly because in no mainstream OO language can a game entity "run like an [independent] simulator" - you'd need green threads for that. Partly because the industry has learned its lesson with trying to shoehorn things into inheritance structures, resulting in god classes (see e.g. the original UnrealEngine's Actor class, or Pawn class). Partly because mainstream OO is too weak to be useful (e.g. no multimethods). Partly because of cross-cutting concerns - physics, "game logic", rendering, AI and networking each prefer a slightly different view of things. Partly because of performance - if you structure things in exactly the way OOP says is wrong, you get better cache use patterns. Partly because OO's "complex interaction between objects" is complex in the wrong directions, and too simple in the right ones - hence the current trend of composition-based "entity" modeling, where you encapsulate behaviors in tiny tags and treat game objects as (mutable) arbitrary collections of those tags.

I still struggle to find areas where full-blown classical, class-based OOP is a good fit for the problem. Game development isn't such an area.


That inheritance structures are bad, leading to highly coupled designs that are hard to change, has similarly been discovered by many in line-of-business enterprise software.

Some claim (like Gosling) that inheritance was never a major point of OOP and that it has been overused.

What do you mean by "tag" here? Is a "tag" implemented as a class in C++? Do you know of an example in real, open-source code that the curious can study?


> What do you mean by "tag" here. Is a "tag" implemented as a class in C++?

Google for "ECS", or "Entity-Component-System". It's a sorta-pattern for what I described. I say sorta, because everyone has a slightly different idea of how it's supposed to be implemented, but typically, entities are just dumb containers for (instances of) components, the components store data & type info, and systems mutate those components. So e.g. an asteroid in a game of Asteroids would consist of e.g. "kinematics" component, and "sprite" component, and a "collider" component. If you wanted that asteroid to drop a powerup on death, you'd add "drops powerup" component to it, possibly at runtime.

Components may or may not be implemented as classes, and the aggregation may or may not be direct. In small games where I used this (in Common Lisp), components were classes, and each entity object had a list of instances of those components. This was a naïve approach, but got the job done. A more performant implementation might look like this: each component is a struct, and gets a dedicated array of all instances; each entity only has a) tags of the types of components it consists of, and b) indexes into the "components" array. This optimizes for data locality, especially for more isolated systems. For instance, the physics system can now just loop over the array of "kinematics" components, and mutate their state, without knowing anything about actual game objects - and all the data necessary for updating physics for all game objects are close together in memory, reducing cache misses and allowing for other optimizations.

Note how this is a complete inversion of OOP - components are now storing just state, all behaviours get segregated into systems that operate on those components in batch mode, and entities exist only to tell you which instances of components together form a game object.

I don't have any reference C++ implementation handy to show, but if you read up on ECS, you're bound to find something.
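That said, here is a deliberately tiny sketch of the idea in Rust (invented names, nowhere near a real ECS): component data lives in flat arrays, entities are just indexes, and a system batch-processes one component array.

  struct Kinematics {
      pos: f32,
      vel: f32,
  }

  struct World {
      kinematics: Vec<Kinematics>, // one array per component type
      sprites: Vec<Option<char>>,  // entity i's sprite, if it has one
  }

  // The physics system only touches the kinematics array: good locality,
  // and it knows nothing about whole game objects.
  fn physics_system(world: &mut World, dt: f32) {
      for k in &mut world.kinematics {
          k.pos += k.vel * dt;
      }
  }

  fn main() {
      let mut world = World {
          kinematics: vec![Kinematics { pos: 0.0, vel: 2.0 }],
          sprites: vec![Some('@')],
      };
      physics_system(&mut world, 0.5);
      assert_eq!(world.kinematics[0].pos, 1.0);
      assert_eq!(world.sprites[0], Some('@'));
  }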


You'll find that no one making a game that needs to run well uses OO, especially not AAA. Unity is moving away from it as well, and they are arguably the biggest influence on hobby devs and newcomers out there. There are plenty of good talks on this, such as Mike Acton's CppCon keynote from 2014 https://www.youtube.com/watch?v=rX0ItVEVjHc and more from Mike Acton in Unity's GDC talks https://www.youtube.com/watch?v=p65Yt20pw0g

Personally, at this point the way I write software is so far removed from this discussion that I just do what feels right and what people can generally agree on within my teams. Solve the problem, don't discuss code, discuss solving the actual problem.


https://kyren.github.io/2018/09/14/rustconf-talk.html

This is a good post about how that can fail, from the creators of Starbound.

Relevant quote:

A new requirement comes in: I have an idea for a special kind of item that when the player holds it and its near a specific kind of enemy, it will glow. The enemies that trigger this should be scared of the item and back away, but have a special animation where they’re mesmerized by the glowing item. This should only work for players that have achieved some specific quest goal.

You throw up your hands in frustration [...]


Game programmers use entity systems, not OO... or do they?


Fun related quote from reddit:

"you can tell if someone started game development as a hobby because they ARE using an ECS."


Haha, ok. I'm even lower on that scale: not even a hobbyist game programmer.

I thought they actually just use structs and whatever data structure works.


What do the pros use? Why is ECS bad in a game/simulation type program?


ECS is the "microservices" of game development: it's touted as the way to go, and in some cases it's the best solution, but it comes with an increased wiring-complexity cost, and the hype surrounding it causes it to often be adopted for dubious reasons (e.g. "deep class hierarchies are bad, therefore I need ECS").

Very interesting article here: https://www.gamedev.net/blogs/entry/2265481-oop-is-dead-long...


ECSs are designed to be flexible and to make it easy to extend and mix a game's behaviors. Most games do not need that flexibility: professional games are designed up front, so you already know what capabilities are needed, and it is more efficient and less complex to implement those things directly.

However, if you're designing an open-ended game engine for other people to develop in, then an ECS is a godsend, because you have no clue what they plan to make their game do.


I think it means that pros try to use an ECS but often fall back to object spaghetti. But I might be wrong.


IMO...

FP: Server Side

OOP: Client Side


After working with Elm, Clojure, React, and friends, I'd say FP also wins for client/UI development.

For example, CoreData in iOS has to be my least favorite approach to a data model yet. I spend a lot of time thinking "I wish this was Elm or React, it would definitely clean this up" when building iOS apps in general. Definitely feels like the state of the art has changed.


I could see that. I just see the stateful objects approach being more useful on an interface.



