If Inheritance is so bad, why does everyone use it? (buttondown.email/hillelwayne)
198 points by harperlee 5 months ago | 376 comments



The key phrase is "prefer composition to inheritance", and it dates back to the Gang of Four.

The word "prefer" is critical to understand. It just means "usually choose A over B" not "B is never the right answer."

Unfortunately, since we - as an industry - like hard and fast rules, we move towards that second explanation and act like inheritance never makes sense. Like any tool, there's a time and place where it is the best tool, other times where it's a reasonable tool, and other times it's a terrible tool.

Therefore "everyone uses it" because either a) it's a reasonable approach or b) the developer doesn't know of or can't use a better one.

We should work on fixing b) instead of denying a) exists.


I agree with this sentiment but I also must say that the times I've needed inheritance have been few and far between. I have seen really good examples where it works really well (UX is pretty common, but I've also seen really clean cases like data structures with complex interfaces and a simple abstract class).

Where I think inheritance works best is when the state in base classes is limited and the interface is quite clear. Ideally where you are meant to override is also well defined.

Where it's the worst is when someone tries to use inheritance because they notice coincidentally duplicate code. The worst hell for this is in application configuration. I've seen 5 layer deep inheritance trees to handle really basic things like "what port should this app bind to". It saved no code and introduced a bunch of complication around the transitive dependency baggage it brought on board.

IMO, configuration should always be done via composition. It's more than fine to have a bunch of smaller composable config pieces just so long as you can easily jettison the broken parts.


> I agree with this sentiment but I also must say that the times I’ve needed inheritance have been few and far between.

You never strictly need inheritance, but strict need isn’t the criterion for whether something is a good solution (otherwise the fact that most things in programming can be done multiple ways would mean that almost nothing is ever a good solution, since there is almost always an alternative route to the same result.)


I assume the person didn't mean "need" as in "can't do without it", but instead as in "I can make clearer code with it than with the alternative approaches".


> Where I think inheritance works best is when the state in base classes is limited and the interface is quite clear. Ideally where you are meant to override is also well defined.

What benefit is inheritance providing here? What you described sounds mostly like a struct, at which point the only value the interface provides is possibly some computed fields.


When you scratch deep enough at programming, everything is structs and interfaces defining how you interact with them and how they interact with the world.

The best example of this (IMO) is how `AbstractMap` works in Java. [1]

In order to make a new object which implements the `Map` interface, you just have to inherit from the `AbstractMap` base class and implement one method, `entrySet`. This allows you to have a fully compliant `Map` object with very little work, which can be progressively enhanced to provide the capabilities you want from said map.

This has come in handy in work we've done, where you can take advantage of the structure of an object to get a more optimal map. For example, a `Map<LocalDate, T>` can be represented in a 3-node structure, with the first node representing the year, the second the month, and the final the day. That can give you a particularly compact and fairly fast representation.

The value add here is you can start by implementing almost nothing and work your way up.
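
For illustration, a minimal sketch of that pattern (the array-backed class here is invented, not part of the JDK):

    import java.util.AbstractMap;
    import java.util.AbstractSet;
    import java.util.Iterator;
    import java.util.Map;
    import java.util.Set;

    // A read-only Map over two parallel arrays. Only entrySet() is written by
    // hand; AbstractMap derives get(), containsKey(), size(), equals(),
    // hashCode(), toString(), etc. from it.
    class ArrayMap<K, V> extends AbstractMap<K, V> {
        private final K[] keys;
        private final V[] vals;

        ArrayMap(K[] keys, V[] vals) { this.keys = keys; this.vals = vals; }

        @Override
        public Set<Map.Entry<K, V>> entrySet() {
            return new AbstractSet<Map.Entry<K, V>>() {
                @Override public int size() { return keys.length; }

                @Override public Iterator<Map.Entry<K, V>> iterator() {
                    return new Iterator<Map.Entry<K, V>>() {
                        private int i = 0;
                        @Override public boolean hasNext() { return i < keys.length; }
                        @Override public Map.Entry<K, V> next() {
                            return new AbstractMap.SimpleImmutableEntry<>(keys[i], vals[i++]);
                        }
                    };
                }
            };
        }
    }

    // Usage: Map<String, Integer> m = new ArrayMap<>(new String[]{"a", "b"}, new Integer[]{1, 2});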

[1] https://docs.oracle.com/javase/8/docs/api/java/util/Abstract...


>In order to make a new object which implements the `Map` interface, you just have to inherit from the `AbstractMap` base class and implement 1 method, `entrySet`

I used to think that way, but I now prefer the alternative - passing function(s)/lambda(s) for the necessary functionality of the dependent class.

This way is actually more flexible, as you can change behavior without modifying the override or having a bunch of switches/if-else in your required function.

So instead of 'entrySet' being defined inside MyClass, you would define it outside it, or possibly as a static method, and pass it to AbstractMap when you create it.

So you don't need to have every class implement a bunch of interfaces like Hashable, Orderable, etc. in order to get the desired behavior.
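
Something like this sketch, with a made-up helper (`AbstractMap` itself has no such constructor, so the anonymous subclass does the gluing):

    import java.util.AbstractMap;
    import java.util.Map;
    import java.util.Set;
    import java.util.function.Supplier;

    class Maps {
        // Hypothetical helper, not a JDK API: build a Map from a lambda that
        // supplies the entry set, instead of writing a named subclass.
        static <K, V> Map<K, V> fromEntrySet(Supplier<Set<Map.Entry<K, V>>> entries) {
            return new AbstractMap<K, V>() {
                @Override public Set<Map.Entry<K, V>> entrySet() { return entries.get(); }
            };
        }
    }

    // Usage: Map<String, Integer> m = Maps.fromEntrySet(() -> Set.of(Map.entry("a", 1)));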

Now I guess you would come back with the objection that you shouldn't be able to do that outside the class, but I think those are also bad ideas. Python famously gets away with not having private/protected (although there is a way you can kind of get something similar).


It is just a matter of how much flexibility you want to expose through your API. Sometimes a rigid and stricter API is the right choice, where you want the API itself as the guardrail against non-standard patterns.


Scopes and closures give you guard rails and don’t require gluing functions to state unnecessarily.


I understand, and what you shared is a perfect example of what I said- but I fundamentally disagree with the notion that it's the same between the two.

I think that in effect, as you associate more behavior with a particular struct (as opposed to what you're attempting to do with said struct), the greater expectation it presents that the struct is what you code around. More and more gets added to state over time, and more expectations about behavior get added that don't need to exist.

Sure, you could say "Well, then just be strict about what behavior is expected in the interface"- but that effort wouldn't be necessary if we didn't make the struct the center of the behavior in the first place.


This works with Rust's traits as well, for example, Iterator. Or Ruby's mixins (which are inheritance, I guess, heh). It is super useful, but doesn't actually require inheritance, even if you can use inheritance to do it.


> The best example of this (IMO) is how `AbstractMap` works in Java.

I think it is fair to say that this is idiomatic Java, and for that reason it is a great example within the context of Java.

But does it translate to the abstract? Given your hypothetical ideal language that allows you to do anything you can imagine as you can best imagine it, is this still a good example, or would you reach for something else (e.g. composition)? Why/why not?


“Prefer A over B” rules usually are problematic because what they usually mean is:

(1) There is a concrete set of cases where A is better than B.

(2) But, there is also a concrete set of cases where B is better than A.

(3) And, either (a) the cases where A is better than B are more common than the reverse, or at least (b) people in the environment where the advice is developed are currently choosing B in cases where A is significantly better.

But, critically, “prefer A over B” does not communicate the set of cases where A is better than B or vice versa, so it tends to lead to the conditions where “prefer B over A” becomes the advice based on (1) and (2) being unchanged but (3)(b) working in the opposite direction.


IMO it's also important to understand the context of when GoF said that - Stroustrup's C++ book was advocating for things like "class WindowWithButtonAndScrollbar inherits from Window, Button, and Scrollbar" at the time!

The GoF book itself is full of examples that use inheritance, but somehow the bumper-sticker-sized advice has taken on a life of its own.


Any abstract advice that's like "do this instead of something else" looks to me like the person had a bad experience and is generalizing from it. This goes both ways, applying to both the original hype about OOP, and this sort of modern anti-OOP hype. It turns out bad code is possible regardless of which design pattern is used.


Prefer IS-A relationships where they make sense, i.e. where you expect the Liskov Substitution Principle to apply. Otherwise use composition. It's really that simple.


Yes and:

I vaguely recall Riel's Object-Oriented Design Heuristics [1996] shared the same advice. https://www.amazon.com/Object-Oriented-Design-Heuristics-Art... https://www.oreilly.com/library/view/object-oriented-design-...

Aside: Young me obsessed over design patterns, methodologies, software engineering, SQA, etc. I had all the books. I started a design patterns study group and hosted it for a while. Now all that "wisdom" is just a bunch of old books. Somehow craftsmanship, me, or both became irrelevant.

Or maybe it was never important.

Like Alistair Cockburn opined about the failure of CASE tools, people somehow manage to ship software successfully, without the benefit of all that smart stuff.


I'd be ready to agree if I could be pointed at a time that inheritance actually carries a real benefit- a time you would choose it over composition, if composition is available.


There is no benefit to the "tree of life" single inheritance [1].

What you want to reach for to achieve compositional behavior or polymorphism are type classes, traits, and interfaces. They attach methods to data without the silly "Cat is an Animal, Dog is an Animal" cladistic design buffoonery.

There's nothing wrong with OO, except for inheritance-based OO.
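
For example, in Java terms, a small sketch of the interface route (all names invented): the behaviour attaches to otherwise unrelated data types, with no shared base class.

    // Behaviour as an interface, not a position in a taxonomy.
    interface Speaker {
        String speak();
    }

    record Cat(String name) implements Speaker {
        public String speak() { return name + ": meow"; }
    }

    record Doorbell(int volume) implements Speaker {   // not an "Animal" at all
        public String speak() { return "ding at volume " + volume; }
    }

    class Announcer {
        static void announce(Speaker s) { System.out.println(s.speak()); }
    }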

[1] Don't get me started on multiple inheritance. Instead of solving problems, they invented them.


What about a situation where you have an ECS, and if an entity has, say, LifeComponent() and ReproductionComponent(), it can be identified as an Organism (as opposed to, say, a Crate). Now you can have inheritance off of that: a composition can be identified as an Orangutan, or a Human, but only if it was an Organism. The Organism is basically a memoization of the Life and Reproduction components on the Orangutan.

Now, a human can create a child, but only if the parent object was a "Human." Am I thinking about this in the wrong way?


Yes. I hadn't heard the term tree of life inheritance, but that is the heart of the problem. I think this could be somewhat mitigated by banning access to non-abstract parent methods and all grandparent methods (as well as all relations that aren't directly reachable by going up the tree). But at that point you might as well just use composition anyway.


GreatGrandparent->setCancerChancePercent(10);

Grandparent->mutateCancerChancePercent("+|-", 3);

Parent

Child

By banning access to GreatGrandparent's getCancerChancePercent() method, I won't know what the starting chance was, which will make it harder to determine nature vs. nurture. Isn't that what ancestry and genome mapping is doing? Going back up the tree?


Inheritance was a premature optimization for computers with small memory. It did its job. However, once memory got big, people forgot to throw it out.

Composition uses a lot more indirection. That's bad on modern CPUs. Pointer chasing throws performance out the window, so composition is not always preferred, either.


Simple composition, where an entity just has some components on it, can be completely "unrolled" at compile time, essentially removing all indirection as if the entity had all of the data to begin with. You don't need pointers.

For more complex composition, an ECS runtime would likely group all components of the same kind into the same place so that systems that ran with those components get even better data locality.
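
A toy sketch of that grouping idea (all names invented), where a system walks flat per-component arrays instead of chasing a pointer per entity:

    // Each component kind lives in its own flat array, indexed by entity id.
    class World {
        int entityCount;
        float[] posX, posY;   // position component
        float[] velX, velY;   // velocity component
    }

    class MovementSystem {
        // Touches only position and velocity data, in one cache-friendly loop.
        static void step(World w, float dt) {
            for (int e = 0; e < w.entityCount; e++) {
                w.posX[e] += w.velX[e] * dt;
                w.posY[e] += w.velY[e] * dt;
            }
        }
    }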


Composition doesn't imply pointer chasing.

In C++ there is extremely little difference between the code generated for inheritance or composition (unless you use virtual inheritance), so I have no idea what overhead you are talking about.


At least in the case of C#, composition tends to be done via interfaces, which has more indirection than if they were to use an abstract class.

This has been somewhat mitigated in newer versions of the runtime.

Not sure what it's like in JVM world; I know they've had better devirtualization for a while.


Yeah, but that is a decade later, and it also ignores JIT optimizations like devirtualization, or, just like C++ templates, using generics for composition.


> JIT optimizations like devirtualization

JIT devirt can be nitpicky; PGO has improved the situation greatly, but until around 6.0 or 7.0 devirt was easy to 'break' in a lot of common scenarios.

Heck, tying into the generic composition bit, ironically static generics still have issues right around devirts. Go figure.


Would you happen to know any good literature with clear composition examples vs inheritance?

The C++ and Python code I see daily is fully inheritance focused, and I would like to understand how it could be done differently (or better) starting from a perspective I understand.



Got a simpler example?


Well in my game everything inherits from GameObject. GameObject has a draw method. Enemies, coins, the player, doors, they're all GameObjects. But when it comes to drawing things, the logic doesn't vary by those subtypes. Rather, some things are pixel art with a collection of frame-by-frame animations. Other things are just a static image. Some things I render as text. Some things are really complex: I render them as composable components with different frames of animation/keyframes, created in a custom character editor program.

Each of those render modes I described is a different DrawObject. So instead of having an insanely complex inheritance hierarchy of all permutations of things and how they are drawn (or trying multiple inheritance), I have GameObjects and they contain a property/object of type DrawObject. I have a shallow inheritance hierarchy of GameObjects and a shallow inheritance hierarchy of DrawObjects. And I can compose them together as I see fit.

I even have an AggregateDrawObject class for cases where I need to run two animations at once for a single object.

You can do something similar with UpdateObjects too and have composable behavior.
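
A rough sketch of that shape (AggregateDrawObject is as described above; the concrete draw classes and Coin are invented here):

    abstract class DrawObject {
        abstract void draw(float x, float y);
    }

    class StaticImageDraw extends DrawObject {
        void draw(float x, float y) { /* blit a single image at (x, y) */ }
    }

    class AnimatedSpriteDraw extends DrawObject {
        void draw(float x, float y) { /* draw the current animation frame */ }
    }

    class AggregateDrawObject extends DrawObject {
        private final DrawObject[] parts;
        AggregateDrawObject(DrawObject... parts) { this.parts = parts; }
        void draw(float x, float y) { for (DrawObject p : parts) p.draw(x, y); }
    }

    class GameObject {
        float x, y;
        DrawObject drawObject;                  // composed, swappable per object
        void draw() { drawObject.draw(x, y); }
    }

    class Coin extends GameObject {
        Coin() { drawObject = new AnimatedSpriteDraw(); }   // shallow hierarchy
    }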

Note that I still use inheritance, but just don't let it get out of control. Anyone saying "inheritance is bad" is either a moron or really means "deep inheritance hierarchies are bad" or maybe "bad designs are bad".


I feel people don't understand what inheritance and (object orientation in general) is useful for, misuse it, and then it gets a bad reputation. It's not about making nice hierarchies of Cars, Fruits, and Ovals.

For me the main point is (runtime) polymorphism. E.g. you have a function that takes a general type, and you can pass multiple specific types and it will do the right thing. And if you want to avoid huge if-else statements, you should put the code for the special cases in the classes, not in each function that operates on them.

You can get this without implementation inheritance, it is also possible to just have something like interfaces. But I do find it very convenient to put common code in a base class.
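
A minimal sketch of what I mean (names invented): common code sits in the base class, the special cases sit in the subclasses, and callers only see the general type.

    abstract class Shape {
        // Common code in the base class...
        String describe() { return getClass().getSimpleName() + ", area " + area(); }
        // ...special cases in the subclasses.
        abstract double area();
    }

    class Circle extends Shape {
        final double r;
        Circle(double r) { this.r = r; }
        double area() { return Math.PI * r * r; }
    }

    class Square extends Shape {
        final double side;
        Square(double side) { this.side = side; }
        double area() { return side * side; }
    }

    class Report {
        // Takes the general type; no if-else over concrete types needed.
        static void print(Shape s) { System.out.println(s.describe()); }
    }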


    I feel people don't understand what inheritance and (object orientation in general) is useful for, misuse it, and then it gets a bad reputation. It's not about making nice hierarchies of Cars, Fruits, and Ovals.
Agree 100%. It starts from the earliest programming course, where we just teach it all wrong; way too abstract (no pun intended).

One point to add to yours: well executed OOP allows for "structural" flow of control where the object hierarchy, inheritance, events, and overrides allow for the "structure" of the hierarchy to control the flow of logic.

I wrote two articles on this with concrete examples and use cases:

https://medium.com/codex/structural-control-flow-with-object...

https://medium.com/@chrlschn/weve-been-teaching-object-orien...


I used to be an OO hater until I started playing with Smalltalk. In particular, I worked through the Pharo MOOC (https://mooc.pharo.org/), which teaches you exactly this: designing the hierarchy IS designing the control flow of the program.

That said, Smalltalky hierarchies are a nightmare in most other languages because another key part of the Smalltalk env is the tooling -- staying inside a running system and being able to edit code from within the debugger is absolutely great and keeps you in a flow state much better than any other workflow I've been exposed to (including Lisp + Paredit + SLIME). The result is that editing class hierarchies in blub-y OO languages is usually a massive pain in the ass, while doing it in Pharo or a similar Smalltalk env is fun and painless. This is why you can't practically write Smalltalkly in Python/C#/Java/etc. even if on paper they have all or almost all of the same features.


Read the first article, great example, I'll have to remember that next time I'm trying to teach OOP


Interesting articles!

I think the first part of the first article immediately introduces an anti-pattern - forcing the user to make an instance of a class just to be able to call a pure function. It's just unnecessary noise; either make them static methods, group them in an object literal, or import the module as a namespace.

Adding the "pattern" as a mutable public field is a bit sketchy, and would make it show up in intellisense, but hopefully nobody will access/modify it. Making the pattern as a `const` at module scope solves that problem but you handwave away that approach saying it "pollute the symbol space for intellisense", which isn't true (non-exported items aren't available outside the module).

The next section on validation is a good example of another anti-pattern. The example of:

    public get isUserValid(): boolean
is not a good way of doing it, because it relies on hopes and prayers that the user of the class remembers to call this. A better signature would be:

    function isUser(input: unknown): input is User
Notice the key difference - you can't get an instance of User without it being valid. There shouldn't be a notion of "yeah I have a User, but I don't know if it's a valid User". Your way allows people to operate with User, blissfully unaware of whether it is valid or not, hoping that they might notice the right method to call. Instead: Parse, Don't Validate [1]

Note that to do this correctly, you have to either:

1. Separate data and functions

2. If you must use a class, then hide the constructor and provide a static constructor with a return type that indicates the function can fail

[1] https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...

_________

Part 2 was an interesting exercise in how the popular OOP patterns from the GoF book vastly overcomplicate code, negatively impact readability, and make following the code feel like Mario (the princess is in another castle).

No need for inheritance, abstract base classes, or any of that complexity; all of that could be done with:

    type ShippingCalculator = () => number
    
    const shippingCalculators = {
      USPS: calculateUSPS, 
      UPS: calculateUPS,
      FedEx: calculateFedEx,
      DHL: calculateDHL,
    } as const satisfies Record<ShippingMethod, ShippingCalculator>
Intellisense works fine of course, and code navigation (via F12) is straightforward and easy.


> "yeah I have a User, but I don't know if it's a valid User".

This, imo, is one of the big reasons people so easily dismiss OOP. They put whatever data _they think they probably need_ in an object using setters/builders/what have you. This leads to abstractions of data that don't accurately reflect state. They will then let an external entity (service or whatever pattern) manipulate this data. At this point people might as well use something analogous to a C struct where anything can be done to the values. Objects are not managing their own invariants. When you rip this responsibility from an object, then nothing becomes responsible for the validity of an object. Due to this, people wonder why they get bugs and have trouble growing their software.

This also leads to things like "isValid". People don't understand that an object should be constructed with only valid data. The best example I've found of protecting invariants and constructing valid objects is this strategy in F#:

https://fsharpforfunandprofit.com/series/designing-with-type...

I've yet to find a good way to do this in a language such as Java, unfortunately.
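
The usual Java approximation is roughly a private constructor plus a static factory that can fail (a sketch with invented names; whether it counts as "good" is debatable):

    import java.util.Optional;

    // Sketch: the only way to get an EmailAddress is to parse one successfully.
    final class EmailAddress {
        private final String value;

        private EmailAddress(String value) { this.value = value; }   // hidden

        // Failure lives in the return type instead of in an isValid flag.
        static Optional<EmailAddress> parse(String raw) {
            return raw != null && raw.contains("@")
                    ? Optional.of(new EmailAddress(raw))
                    : Optional.empty();
        }

        String value() { return value; }
    }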


In psychology, there is an idea of "locus of control".

I think OOP done well and applied in suitable scenarios results in entities that have an internal locus of control; their mutations of state are internally managed.

OOP done poorly has external locus of control where some external "service" or "util" or "helper" manages the mutation of state.


Except inheritance is the premature optimisation of interfaces.

Inheritance forces you to define the world in terms of strict tree hierarchies, which is very easy to get wrong. You may even do a great job today, but tomorrow such properties don't hold anymore.

Regular composition allows the same functionality without making such strong assumptions on the data you are modelling.


> Inheritance forces you to define the world in terms of strict tree hierarchies,

No, it doesn’t.

Inheritance is the outcome of deciding to model some part of the problem space with a tree hierarchy (that potentially intersects other such hierarchies). It doesn’t force you to do anything.

I suppose if there was a methodology which forced you, as the only modeling method, to exclusively use single inheritance, that would force you to do what you describe, but…that’s not inheritance, that’s a bunch of weird things layered on top of it.


It may not force you directly, but I think it's safe to say that when a language focuses on inheritance (e.g. C++) it does not offer good alternatives (e.g. Rust traits). This means that if you want features such as runtime polymorphism, you are _kinda_ forced into inheritance.


Arborescent vs rhizomatic approaches, to get philosophical.


Unfortunately Deleuze's The Fold has little to say about either awk or catamorphisms.


True. Modelling the world as a tree oversimplifies it, as there are also lateral and even backwards dependencies, and at a sufficiently complex level abstractions start to leak all over, making it a mess. But at some low to medium level of complexity it might work.


Unfortunately a lot of developers are conditioned so heavily to believe that inheritance is intrinsically bad, that they contort their code into an unreadable mess just to re-implement something that could have been trivial to do with inheritance.

I'm not saying that I like it everywhere; IMHO it's a tool to be used sparingly and not with deep hierarchies. But it's not reasonable to avoid it 100% of the time because we're soiling ourselves over the thought of possibly encountering the diamond problem.


> For me the main point is (runtime) polymorphism. E.g. you have a function that takes a general type, and you can pass multiple specific types and it will do the right thing.

The runtime part is what I dislike. If I have a fruit which is an apple or a banana, I can't pass that to a method expecting an apple or banana. It can only be passed as a fruit.

> And if you want to avoid huge if-else statements, you should put the code for the special cases in the classes, not in each function that operates on them.

This is common OO wisdom that I strongly disagree with. For example, in my program I have a few types (Application, Abstraction, Variable, etc.), and a lot of transformations to perform on those types (TypeCheck, AnfConvert, ClosureConvert, LambdaLift, etc.).

I prefer to have all the type-checking code inside the TypeCheck module, and all the closure-converting code inside the ClosureConvert module. I'd take the "huge if-else" statements inside TypeCheck rather than scatter typechecking across all my datatypes.
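
In recent Java, for instance, sealed interfaces plus pattern-matching switch give you exactly that layout (a sketch: the type names mirror mine above, but the bodies are stand-ins):

    // All the term variants in one sealed hierarchy...
    sealed interface Term permits Variable, Abstraction, Application {}
    record Variable(String name) implements Term {}
    record Abstraction(String param, Term body) implements Term {}
    record Application(Term fn, Term arg) implements Term {}

    // ...and all of one transformation's logic in one module. The switch is
    // exhaustive, so adding a new Term variant breaks this file at compile
    // time instead of silently missing an overridden method somewhere.
    final class TypeCheck {
        private TypeCheck() {}

        static String describe(Term t) {
            return switch (t) {
                case Variable v      -> "variable " + v.name();
                case Abstraction ab  -> "abstraction over " + ab.param();
                case Application ap  -> "application of (" + describe(ap.fn()) + ")";
            };
        }
    }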


> The runtime part is what I dislike. If I have a fruit which is an apple or a banana, I can't pass that to a method expecting an apple or banana. It can only be passed as a fruit.

Heh? An apple is a fruit; you can pass it to any place expecting the latter. Like, this is one half of Liskov substitution.

With generics, you can be even more specific (co/in/contra-variance).


I think GP is talking about something like this:

    Fruit* fruit = new Apple();
    ConsumeApple(fruit);  // Doesn't work; requires Apple*

    fruit = new Banana();
    ConsumeBanana(fruit);  // Doesn't work; requires Banana*

    ConsumeFruit(fruit);  // Okay, function signature is void(Fruit*)


But why would you want to be able to pass an apple to a function expecting a banana? Doesn't even make sense, the whole point is that the type system forces you to consider this stuff.

If OP wanted to be allowed to do whatever and just have the software fail at runtime, JS is right there.


> But why would you want to be able to pass an apple to a function expecting a banana?

I would never. I know that I'm holding a banana, I want to pass it to a method that receives a banana. But what I can't do is put my banana through a rotateFruit function first, because then Java will forget that it's a banana and start treating it as a fruit.


Java has generic functions since forever, something like `<T extends Fruit> T rotateFruit(T fruit)` would return your banana typed object just fine.
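
A minimal sketch of that (Fruit and Banana as stand-ins):

    class Fruit {}

    class Banana extends Fruit {
        void peel() { System.out.println("peeled"); }
    }

    class FruitOps {
        // The caller's concrete type flows through unchanged.
        static <T extends Fruit> T rotateFruit(T fruit) {
            /* ...rotate it... */
            return fruit;
        }

        public static void main(String[] args) {
            Banana b = rotateFruit(new Banana());   // still statically a Banana
            b.peel();                               // no cast needed
        }
    }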


Inheritance is just a tool that you use if it fits your software architecture. You use other tools, like generics and interfaces, where it makes sense to use them. I think people want the one true way so as to not have to design their software. And when they use inheritance in a way that does not fit, they blame inheritance.


But then I lose polymorphic dispatch, which is where this thread started.


If you pass your object to another function, you only get static dispatch. What you want instead in this case is a simple instance method. Then yourApple.rotateFruit() would do what you want, and rotateFruit could be an interface method declared in Fruit, the superclass/interface.


> I prefer to have all the type-checking code inside the TypeCheck module

There are ways to have your cake and eat it too, here, at least in some languages and patterns.

For example, in Go you could define "CheckType" as part of the interface contract, but group all implementors' versions of that method in the same file, calling out to nearby private helper functions for common logic.

Ruby's open classes and Rust's multiple "impl" blocks can also achieve similar behavior.

And yeah, sure, some folks will respond with "but go isn't OO", but that's silly. Modelling polymorphic behavior across different data-bearing objects which can be addressed as implementations of the same abstract type quacks, very loudly, like a duck in this case.


> And yeah, sure, some folks will respond with "but go isn't OO", but that's silly. Modelling polymorphic behavior across different data-bearing objects which can be addressed as implementations of the same abstract type quacks, very loudly, like a duck in this case.

It's less "this is not OO" and more this is not inheritance, which is why a lot of people are saying you can find more elegant solutions (like this) rather than use inheritance for no clear benefit.


> If I have a fruit which is an apple or a banana, I can't pass that to a method expecting an apple or banana. It can only be passed as a fruit.

You can by overriding the method on apple or banana. If your method is on some other object, then yes, you cannot do this unless your programming language supports multiple dispatch.


I feel like Clojure-style multimethods accomplish this better than inheritance. I can simply write a dispatcher that dispatches the correct function based on some kind of input.

This is evaluated at runtime, thus giving me the runtime polymorphism, but doesn't make me muck with any kind of taxonomy trees. I can also add my own methods that work with the dispatcher, without having to modify the original code. I don't feel like it's any less convenient than inheritance, and it can be a lot more flexible. That said, I suspect it performs worse, so pick your poison.


> you have a function that takes a general type, and you can pass multiple specific types and it will do the right thing

What does this have to do with inheritance? This is just a generic function.

> And if you want to avoid huge if-else statements, you should put the code for the special cases in the classes, not in each function that operates on them

This doesn't sound any different from how people usually talk about inheritance.

I am not convinced it gives you anything more than composition does. Composition is very easy to set up and easy to change. And there are certainly functional ways to do runtime polymorphism.


> > you have a function that takes a general type, and you can pass multiple specific types and it will do the right thing

> What does this have to do with inheritance? This is just a generic function.

It's a generic function with compile-time compatibility checks.


You can have polymorphism without inheritance and inheritance without polymorphism. The big problem with inheritance is that it is really tricky to write those base classes. Making classes inheritable by default in a programming language is often considered a big mistake, because only a carefully designed and documented class can be safely extended.


If people get it wrong so regularly, what value is it providing as a concept? These concepts are supposed to help us reach something better, if you have to add 30 caveats to every part of it, all it did was hide its own complexity from you, instead of managing it for you.


Because the tree is a nice abstraction for some problems. But sometimes you need a collection of pure functions. Sometimes it’s best to think of your object as data blobs going through a line of map,filter,reduce functions. Not every part of your application is the same, use the right abstraction for the job.


This is why "interfaces" is a better word and and a better concept to describe this paradigm, not inheritance.


Polymorphism is doable in plain old C with lookup tables and function pointers. If that is the only benefit, what is the point of creating a language where everything is an object?


> Polymorphism is doable in plain old C with lookup tables and function pointers

Not without casting. qsort is still:

    void qsort_r(void *base, size_t nmemb, size_t size,
                 int (*compar)(const void *, const void *, void *),
                 void *arg);


Presumably one would write an object-oriented version of quicksort to go with their OO C library design and not use the stdlib functions. For example GObject is OOP in C, and I imagine it has collection types with sort methods.

Inheritance is what will bite hard when hand-rolling OOP in C. For one, you can forget about the compiler enforcing substitutability and co/contravariance for you.


> what is the point of creating a language where everything is an object?

I think that's the ultimate culprit in everyone hating inheritance. If it weren't for Java, I think we'd all have a healthier view of OO in general.

I learned OO with C++ (pre C++-11), and now I work at a Java shop, but I'm lucky that I get to write R&D code in whatever I need to, and I spend most of my time in Python.

In C++ and Python, you get to pick the best tool for the job. If I just need a simple function, I use it. If I need run-time polymorphism, I can use it. If I need duck-typing I can do it (in Python).

Without the need for strict rules (always/never do inheritance) I can pick what makes the best (fastest? most readable? most maintainable? most extensible? - It depends on context) code for the job.

Related to TFA, I rarely use inheritance because it doesn't make sense (unless you shoehorn it in like everyone in the thread is complaining about). But in the cases where it really does work (there really is an "is a" relation), then it does make life easier and it is the right thing.

Context matters, and human judgement and experience is often better than rules.


> If it weren't for Java, I think we'd all have a healthier view of OO in general.

> I learned OO with C++ (pre C++-11), and now I work at a Java shop, but I'm lucky that I get to write R&D code in whatever I need to, and I spend most of my time in Python.

> In C++ and Python, you get to pick the best tool for the job. If I just need a simple function, I use it. If I need run-time polymorphism, I can use it. If I need duck-typing I can do it (in Python).

Those first two you can do in (modern) Java. The third is a mess to be avoided at all costs. Interfaces and lambdas will cover most reasonable use cases for polymorphism.


C++ templates are a form of duck typing as well, and combined with type erasure give you a lot of the benefits of OO without the downsides.


Yes, you're right. I've done that in high-performance code where I couldn't afford the double function call of a virtual function. I forgot about that.


Likewise this is getting easier with the auto keyword now being sugar for templates when used in function arguments and return values


I learned OO with Smalltalk.

Everything is an object so there's no if-it's-an-object do this but if-it's-not-an-object do that.


Because syntactic sugar and abstractions matter. You can do everything in assembly too, yet we prefer something higher level.


Control flow is doable with gotos. What benefit therefore is structured programming?

Dynamic dispatch is implemented under the hood with lookup tables and function pointers. Sometimes, it is nice for a language to wrap a fiddly thing in a more abstract structure to make it easier to read, understand, and write.



So we might guess that the language designers thought there were other benefits.


If I were writing an intro to programming book, I would introduce OO as a means of building encapsulation. I'd only get into inheritance in later chapters.


You can have encapsulation without oop. Polymorphism is the real benefit of oop imho


> Polymorphism is the real benefit of oop imho

It's pretty much the definition of OOP.

The core feature of OOP is just bundling functions with the state they process.

When you bundle state and functions together, you can't predict what calling the function will do without knowing both the code and state.

You can say it's 'the real benefit', I guess, but that feels like circular reasoning. It's pretty much the definition of what OOP is, so calling it a benefit feels weird.

Unfortunately, designing systems as a collection of distributed state machines tends to become a maintenance nightmare. Functions and data being separated tends to make code better, even when working in so-called 'OOP' languages.


> You can say it's 'the real benefit', I guess, but that feels like circular reasoning. It's pretty much the definition of what OOP is, so calling it a benefit feels weird.

It's a benefit compared to how people were writing code before it became mainstream (for polymorphism: by jumping to data-defined parts of the code and hoping for the best, and for encapsulation: subroutines working on global variables).


Which kind of polymorphism? Subtype polymorphism I'm guessing?


You’re describing the strategy pattern, which is probably one of the most practical coding design patterns. Ex: each chess AI difficulty gets its own class, and they all implement a common interface.
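
A minimal sketch of that shape (class names invented):

    interface MoveStrategy {
        String chooseMove(String boardState);
    }

    class RandomMover implements MoveStrategy {
        public String chooseMove(String boardState) { return "a random legal move"; }
    }

    class MinimaxMover implements MoveStrategy {
        public String chooseMove(String boardState) { return "the best move found by search"; }
    }

    class ChessAI {
        private final MoveStrategy strategy;   // difficulty is swappable at runtime
        ChessAI(MoveStrategy strategy) { this.strategy = strategy; }
        String nextMove(String boardState) { return strategy.chooseMove(boardState); }
    }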


I still find it funny that somehow the idea of runtime function dispatch via function pointer is called the "strategy" pattern.


Hey I didn’t name it!


> I feel people don't understand ...

> For me ...

Sorry, but right off the bat this is just painting you as an example of "There are N camps, and each camp declares the other camp as wrong."

Yes, that's what inheritance is for you. For others it is something else. Why is your way the one that "correctly understands" it?

The article itself covers this - that some languages have lumped 3 different concepts into one that they call inheritance, leading to the different camps and comments like yours. Your camp is specifically mentioned:

> Abstract data type inheritance is about substitution: this thing behaves in all the ways that thing does and has this behaviour (this is the Liskov substitution principle)


I see that a lot - the alternative to inheritance, when inheritance does make sense, is either code duplication, which is much worse than inheritance, or first-class functions, which many languages don't actually support or don't support efficiently.


Yes. I have a framework for an embedded system that uses various types of sensors. When changing a sensor, instead of rewriting the polling loop for every new case, I can keep looping through ‘sensor[i]->sampleNow()’ and add the specifics in a class inheriting from SensorClass.
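
Roughly this shape; here's a Java sketch of the same idea (the concrete sensor types are invented):

    abstract class SensorClass {
        abstract double sampleNow();
    }

    class TemperatureSensor extends SensorClass {
        double sampleNow() { return 21.5; /* read the actual hardware here */ }
    }

    class PressureSensor extends SensorClass {
        double sampleNow() { return 101.3; }
    }

    class Poller {
        public static void main(String[] args) {
            SensorClass[] sensor = { new TemperatureSensor(), new PressureSensor() };
            // The polling loop never changes when a new sensor type is added.
            for (int i = 0; i < sensor.length; i++) {
                System.out.println(sensor[i].sampleNow());
            }
        }
    }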


Interface inheritance could do the trick then? Maybe function pointers even.


>For me the main point is (runtime) polymorphism.

But you don't actually care about runtime polymorphism here. You care about polymorphic behavior, which can be implemented in a much more composable way with parametric polymorphism.


You can’t build a dynamic list of objects implementing the same interface in different ways with parametric polymorphism.

As another example, the Unix file interface (open(), read(), write(), flush(), close()) etc. is an example of runtime polymorphism, where the file descriptors identify objects with different implementations (depending on whether it’s a regular file, a directory, a pipe, a symbolic link, a device, and so on).

All operating systems and many libraries tend to follow this pattern: You can create objects to which you receive a handle, and then you perform various operations on the objects by passing the respective handle to the respective operation, and under the hood different objects will map to different implementations of the operation.


So you think a List[Function[Unit]] all does the same thing?


>You can’t build a dynamic list of objects implementing the same interface in different ways with parametric polymorphism.

Yes you can. That's the whole point of type classes.


Not without runtime polymorphism. Parametric polymorphism does not imply nor by itself implement runtime polymorphism. I.e. C++ templates, or generics in other languages, provide parametric polymorphism, but not runtime polymorphism.


Universal vs. existential quantification.

For example, in C++, std::function exhibits parametric polymorphism without templates.


Type classes are not parametric polymorphism.


Nobody calls it this, but cascading styles in CSS is just like inheritance and I think should be avoided for all the same reasons. I feel it's a big part of why CSS at scale becomes unmaintainable. There isn't even a built-in way to compose two classes together if you want to avoid cascading/inheritance.

It looks like composition over inheritance has caught on as the better default in other languages, but in the CSS world people still cling to the cascade as a best practice for some reason.


>It looks like composition over inheritance has caught on as the better default in other languages, but in the CSS world people still cling to the cascade as a best practice for some reason

I find this falls into generally two camps, those who want to fight the browser and those who don’t.

Those that tend to hate the cascade tend to fight the browser a lot, whether they realize it or not. Generally speaking, they want things to work a certain way (i.e. everything in isolation) and prefer to think of styles as isolated bits.

The second camp tends to not fight the browser and embraces the browser's methodology. They embrace the cascade because it’s easier than fighting it, but it requires a more sophisticated approach and seeing styles in a holistic manner, not isolated purely into components (though they may be organized in a way that co-locates them with relevant components).

Both work, ultimately, and modern tooling and approaches allow both to exist, but I will say, the second group often has a better grasp on keeping project maintainability over time in my experience.


>Those that tend to hate the cascade tend to fight the browser a lot, whether they realize it or not. Generally speaking, they want things to work a certain way (i.e. everything in isolation) and prefer to think of styles as isolated bits.

So, wouldn't that be the browser fighting them, then?

They want something specific, and the browser forces a paradigm upon them that they don't want.

They are the humans and the browser (well, the styling language of the browser) is the tool that should accommodate them, not the other way around.


This is a sane argument. I have no objection to it. Both things are true, in a sense.

It really depends on how you view the browser, i.e. should browsers be more adaptive to certain paradigms or should they set a reasonable paradigm and enforce it? I don’t know that there is a 100% right answer in this case, though they have moved to create better hooks for some forms of isolation (e.g. layers, scoping) but they fundamentally haven’t walked away from the cascade aspect.

It’s converging the two paradigms, for sure, but as I said, I don’t think either is wholly incorrect or correct.

Now if you wanted my opinion on the whole thing, I think the cascade is a fundamental element to be leveraged not avoided, but that’s me.


If someone picks up a hammer and struggles to pound screws in with them, it’s not the hammer that’s defective.

I don’t think CSS is the perfect tool for all browser-based styling but it’s the tool that’s there and it’ll probably work a lot better if you use it the way it’s intended to be used. If you want a screwdriver instead of a hammer… you have options (don’t target a browser, propose an alternative to CSS, use something that compiles down to CSS).


>If someone picks up a hammer and struggles to pound screws in with them, it’s not the hammer that’s defective.

If someone wants to hammer nails and they're given a blender, then the blender might not be defective, but it surely is not the right tool for the job, and it's imposed upon them.

Few people ever loved CSS. The majority always either hated it or learned to tolerate it. Most who do CSS today use a few different paradigms on top to make them tolerable like BEM, or use different transpilers to get a better language, or directly control styling from code, with CSS-in-JS libraries or like React does it.

>I don’t think CSS is the perfect tool for all browser-based styling but it’s the tool that’s there

Sure, I never denied its existence. Just its design.


I'd ask what you mean by "fighting the browser"- as generally, the number one way to ruin the performance of your CSS is to introduce depth to it. In general, keeping everything isolated regularly leads to better rendering performance.


Avoiding the cascade at all costs, for example. It can introduce a lot of unintended consequences.

Another anti-pattern I have seen is the overuse of media queries to force the browser to do certain things rather than embracing relative sizing constraints via intrinsic design and letting the flexbox and grid algorithms do most of the heavy lifting.

Here though I want to point out isolation is relative, as is the cascade. I think it’s important to leverage the cascade wherever you can, but that doesn’t mean you are leveraging it from top to bottom per se; it does mean thinking more holistically about the context of styling.


To be fair, there was a time when flexbox did not exist and extensive use of media queries was the only way to create a responsive website.

Those efforts don't just disappear quickly, since that was the only way available that worked across all browsers for nearly a decade.


Isn’t the browser responsive by default? The only issue I see is taking a layout built for reflowable documents and wanting to create applications and magazine-like designs with it. Now it’s possible, but it’s always more convoluted than something like the tools available on platforms like iOS and Android.


Two camps I see are (1) those who want to hold the program in their head and be able to answer ever more obscure questions about the program using their understanding. And (2) those who try things until it works and then address problems as needed.

Do these two camps align with your two camps? Those who want control fight the cascade, others just fiddle with it until it looks good?

I don't mean to disparage either camp, because we need both approaches in the real world.


I don’t think these are two separate camps. Rather everyone has a value - let’s call it S - which is the maximum size and complexity of a program they can hold in their head. When the program’s size is smaller than S then they use strategy 1, when it’s larger they use strategy 2.

When you are a beginner your S is close to zero, as you gain experience you develop increasingly better compression to be able to grow your effective S, but it’s never infinite. There’s always some size program where you will devolve to strategy 2, and all programs tend to grow in complexity until they reach the point where nobody understands them, like some kind of complexity Peter principle.


> the second group often has a better grasp on keeping project maintainability over time in my experience.

That's probably in the eye of the beholder and a necessity. If you are all in on the cascade then you have to limit the depth of your page structure; otherwise it becomes impossible to predict how things will ultimately render or what impact a change at layer 3 will have on the rest of the page.

Technically speaking, isolation is perfectly possible with web components.


Yes, there are more tools for isolation now, but you can’t wholly opt out of the cascade, even if it’s only locally relevant, and I think that’s a good thing, IMO.


It's because people don't understand the cascade that we end up with pages that have text in 18 different font sizes. Cascade was a brilliantly simple idea and gave great consistency of design easily. At the time it was great. I believe the only reason most people are mad at CSS is because specificity is hard to grasp and order-dependent resolution, and you don't control it when you bundle 500 packages. You get scores of megabytes of CSS that can randomly change order. Since then we've got more tools to control cascade: layers, scopes, inherit/initial/revert/revert-layer/unset for every property, etc. But people still insist on inline styling. I get it, it's much simpler. But they will keep being unhappy until they learn cascade and stop fighting tools. Yes, learning takes effort but so does resistance.


And let's not forget that this also comes from a discrepancy: a lot of people start with a design and want CSS that matches that design, when CSS was made more as a theme system where you make a theme and let it handle placement and design. The result is clunky, complex CSS to fight the browser into placing things an exact way and giving each element a different look.

In my opinion, this was a failure of communication from the beginning, with a huge divide between "I want my website to look like this because I drew it on paper with my designer friend" and "CSS is a cool new way of doing decent programmatic UX/UI".


> It's because people don't understand the cascade that we end up with pages that have text in 18 different font sizes.

Variables and/or utility classes mostly solve this so not sure how it's related.

> Since then we've got more tools to control cascade: layers, scopes, inherit/initial/revert/revert-layer/unset for every property, etc. But people still insist on inline styling. I get it, it's much simpler. But they will keep being unhappy until they learn cascade and stop fighting tools. Yes, learning takes effort but so does resistance.

Similar to inheritance-like features in other languages, I have learned it and find composition is the better default. It's too complex for too little benefit so I avoid it wherever possible.

I find maintainable CSS approaches are all about reducing the blast radius of style inheritance. So people are writing maintainable CSS in spite of cascading, not because cascading helps.

> specificity is hard to grasp and order-dependent resolution

Specificity is another foot-gun to avoid for me. It's learnable, yet easy to forget later. When you're tempted to rely on specificity there's usually a more readable way to do it that's not going to bite you later.

Do you doubt the developers behind Tailwind library understand specificity and the cascade? They clearly must understand it, use it where it helps, and avoid it where it doesn't.


Cascading does help. When you want to design a document. If you want an application like interface or the same constraints as a magazine page, cascading is the wrong tool for the job. Design tools don’t use it. Layout engines for GUIs don’t use it. Flex and Grid are far from the constraints-based interface builder in iOS.


> Cascading does help. When you want to design a document. If you want an application like interface or the same constraints as a magazine page, cascading is the wrong tool for the job.

I agree. This sounds like saying cascading helps in simple cases but when it gets more complex it doesn't help.

I don't think anyone really has a big problem writing maintainable CSS for a document. It works fine there. Most CSS methodologies (like BEM) and frameworks (like Bootstrap, Tailwind) are there to help write maintainable CSS code for complex non-document web designs, where a big part of that is taming cascading.

I think a lot of pushback against approaches like Tailwind and JS frameworks are from people that are mostly styling documents. Complex landing pages and UIs are where the real headaches are.


>It looks like composition over inheritance has caught on as the better default in other languages

"Prefer composition over inheritance" was mentioned in an early page of the GoF book (the Design Patterns book) - in 1994.

https://en.m.wikipedia.org/wiki/Design_Patterns


I really agree with this.

There was a company that I worked at 10ish years ago with a SaaS built around Drupal. They had one monolithic CSS file that was 25,000 lines. It was a nightmare. When I took over the front-end rebuild, I went through it line by tedious line and broke it out into discrete components.

I think about that from time to time when building logic or other functionality. I'd much rather have a self-contained bit of code than a big thing that is so intertwined I'm terrified to touch anything.


I'm inclined to semi-agree with this, although I'm not sure I'd be quite as adamant about it myself. But I do generally think that CSS is taught in a way that encourages some bad practice around cascade/inheritance. I will point out though that (at least in my experience), BEM solved the majority of these problems for me even without a preprocessor. The language is definitely oriented towards inheritance/cascade, but I think there are ways to avoid it.

There's some movement towards ::part as a proposal to grant some mixin behaviors (https://developer.mozilla.org/en-US/docs/Web/CSS/::part) but I've never messed with it, and it's applicable only to shadow DOM. But mixins haven't been a huge issue for me in even enterprise-scale styling that I've done.

Opinion me, everyone has their own opinions on this, use whatever CSS style works for you. This is not me saying that BEM is the best for everyone, just giving a perspective that as someone who tends to stick to vanilla CSS and who generally kind of hates working with technologies like Tailwind or CSS-in-JS, BEM-style vanilla CSS made CSS pretty pleasant for me to work with; I have a lot more appreciation for the language now than I used to.

So if you're annoyed by CSS but also get annoyed by pre-processors or think that Tailwind is just inline CSS under a different name[0], you still don't need to be bound to the cascade -- potentially look into BEM. No technology or compilation or dependencies, it's literally just a naming convention and style guide.

----

[0]: yes, I have used it extensively, please don't comment that if I used it more something would magically click, I already understand the points in its favor that you're going to comment and I've already heard the style/framework suggestions you're going to offer. It's fine if you like Tailwind, it's great if it helps you write CSS, you don't need to convince me.


> But I do generally think that CSS is taught in a way that encourages some bad practice around cascade/inheritance. I will point out though that (at least in my experience), BEM solved the majority of these problems for me even without a preprocessor.

I think cascading is just a bad default, and I think methodologies like BEM agrees with this by teaching you ways to write CSS in ways that stops cascading from getting in the way.

Cascading styles are fine for styling how basic document content is shown (e.g. h2, p, a, li etc. tags) but outside of this, you generally don't want the styles of parent elements leaking into the styles of child elements. Cascading/inheritance styles is a useful tool to have, but not as the default.

I'm not saying Tailwind is perfect, but it's closer to "prefer composition over inheritance", where you can sprinkle in some cascading/inheritance where it makes sense.


> I think cascading is just a bad default

I'm again semi-inclined to agree, I just don't think I'd say it as forcefully; more that cascading styles tends to have a lot of downsides that people aren't familiar with and aren't taught.

My point isn't to badmouth Tailwind here; but debates about this sometimes boil down to "CSS purists" vs "Tailwind advocates" and my point is more -- nah, you don't have to like Tailwind to avoid the cascade. You can be a CSS purist and still avoid basic element selectors, your choice does not have to be either "do semantic styling targeting only semantic elements" or "jump on Tailwind and stick a bunch of styles inline."

I'm more sticking up for -- look, if you're someone who uses Tailwind, great, I don't have to tell you anything. You are already using a framework that (regardless of any other flaws it may or may not have) discourages you from using the cascade. But if you're someone who's in the position where you dislike CSS-in-JS or don't like using Tailwind, also great! I'm in that position too, I don't like Tailwind. But I still avoid cascade and basic element selectors and there are ways to basically eliminate most cascading styles from your codebase and eliminate most cascade-caused bugs even if you aren't going to use a pre-processor at all, and it's good to at least consider removing those cascading styles.

My only critique of Tailwind I would bring here is that sometimes I get the feeling that Tailwind advocates think that Tailwind invented this idea of component-based CSS, and it really didn't. But that's neither here nor there, and if someone is using Tailwind and it works for them, great. Life is way too short for me to argue with someone using a technology that they enjoy. Honestly, same with the cascade -- I think it can lead to long-term maintenance problems, but if you like it, fine.

However, if you're using CSS and hate it, and you also don't want to use Tailwind, then give BEM a try.


But BEM doesn't interact with the cascade. If you have two BEM selectors (or a BEM selector and non-BEM selector) that match an element and set the same property, the cascade algorithm still applies to determine what to set the property to.


Sure, but the idea with BEM is that you generally don't have situations where the result of that algorithm is confusing or unexpected. Or at least that's been my experience, even on large codebases. I generally don't run into situations where styles overload each other in weird ways when I'm using BEM (others' experiences might vary).

You could throw the same criticism at Tailwind -- Tailwind can still expose you to cascade issues, not all Tailwind classes are single-level selectors under the hood and not all Tailwind classes only target one property. At the end of the day this is all compiling down to raw CSS, so in neither situation have you actually eliminated the cascade. But with both BEM and Tailwind you are much less likely to see those situations, and when they do arise they are less likely to introduce long-term maintenance problems and are more likely to be easy to address/encapsulate. If you run into cascade bugs with Tailwind, it's probably something you fix in like one file, instead of needing to search through five.

BEM doesn't technically interact with anything, it's just a style of writing CSS. There's literally no technology behind it, it is just a naming convention. But in practice, using a naming convention mitigates or eliminates a large number of cascade issues.


Do you have a CSS component framework that you can recommend for use with BEM?


I could link you to a few (I think Bootstrap adopted BEM at some point, Material Design Lite I think uses a variant of it), but I can't recommend them with confidence because generally I don't use extensive style frameworks when I work on large applications. I feel like CSS frameworks are largely useful for bootstrapping projects and for keeping control of large projects where styling starts to break down. A lot of projects I work on are past the point where I need the bootstrapping help, and BEM itself helps me keep control of the CSS code as the projects grow so I don't need to have a strict framework to help me organize everything.

----

In general though, I would actually suggest that you can kind of use anything if you're not planning on forking the component library. Most of these 3rd-party libraries you're not going to be restyling, so long-term maintenance and scalability isn't really a concern, you're never touching that code.

And BEM is just a naming convention, so if you're pulling in a React component and it has a hook for you to attach your own classes, then attach a BEM-style class, otherwise pass in the styling information into the props the way that most JS components want. I've used BEM with React components, with Material design, with Angular components, etc... I don't know, I haven't really run into issues. I've even worked on a codebase that was a mixture of BEM CSS, 3rd-party CSS, and Tailwind. It was fine, I didn't notice any major issues. Typically 3rd-party component dependencies are not a major source of cascade bugs in my experience, but maybe I've just been lucky. Most components I've seen lock down their styles to the point where it's kind of a pain to even try to override them with CSS, and the specificity required to do so forces you to effectively isolate those overrides anyway.

Whatever component framework you're using will have its own customization API, and in my experience that usually won't be handled through CSS. If it is handled through CSS, it will probably be handled by attaching classes, and then when you attach those classes you can use BEM. The major annoyance in my experience is that (for me) BEM often works better than whatever customization system that the 3rd-party components is using and I get frustrated that I'm mixing more straightforward CSS that's easier to debug and design in-browser in with whatever property-based thing that the components expose. But I don't usually think I run into many bugs?

What BEM helps with is dealing with cascade, code organization, naming, and debugging/search. With a 3rd-party component framework, most of that stuff is out of your control, so just use whatever the framework wants and then use BEM for the stuff that is in your control.

If you have to do some kind of CSS-based override of a 3rd-party component that isn't being handled through a component-specific API, wrap it in a BEM-style class for your actual component so that it's a one-time customization:

  .Input__Username .some_component_dependency input {
      /* This is not ideal, but (imo) you'll still effectively never really see cascade bugs from doing it this way, and specificity will rarely be a concern. */
  }
One thing that is nice about BEM is that because your own CSS is being scoped to specific named components, it actually becomes a bit easier to have a lot of 1st-party CSS living alongside 3rd-party CSS and know that your CSS is not going to break the 3rd-party CSS. So I tend to worry a lot less about what other parts of the code/dependencies are using when I'm using BEM, because I'm more confident while writing BEM that I'm not introducing bugs into the other CSS.


That was really helpful and cleared up some notions I had. Thanks


The cascade is not much like inheritance at all. If anything it's more like composition because the styles can come from multiple sources, eg user styles vs author styles, multiple layers.

The cascade is so much not like inheritance that I wonder if you meant either the concept of selectors in general, or inherited properties?


Yeah, the other poster pointed out https://developer.mozilla.org/en-US/docs/Web/CSS/Inheritance and https://developer.mozilla.org/en-US/docs/Web/CSS/Cascade.

I meant the concept that when you e.g. apply the style "color: blue;" to an element, all child elements get the same style unless you override it. I'm not saying it's identical to inheritance, but it's similar in that changes and additions at the top-level ripple down to lower levels, which I find causes most of the problems when writing maintainable code.


> I meant the concept that when you e.g. apply the style "color: blue;" to an element, all child elements get the same style unless you override it.

This is the first thing you've said so far that I would push back against. Neither BEM nor Tailwind removes this behavior that I'm aware of. I thought when talking about the cascade you meant generic styles on elements like "p", "ul", etc... getting applied across separate components, or specificity of child selectors, or something similar.

If you really dislike styles being applied to children in the DOM that don't override those styles, I don't think there is a way around that other than web components and shadow DOM with isolated styles. Or, I guess, use a bunch of style resets beforehand? Neither Tailwind nor BEM gets rid of child inheritance of applied styles; you can use @layer, but that doesn't get rid of that behavior either, it just allows you a bit more control over style order.

If you're using Tailwind and you write:

  <div class="text-red-400">
     <p>Some text</p>
  </div>
that text will be red. If you're using BEM and you write:

  <div class="Container">
     <p>Some text</p>
  </div>

  .Container { color: red; }
same deal.


> I thought with inheritance you meant generic styles on elements like "p", "ul", etc... getting applied across separate components

Yes, so I mean if you add "color: blue" to "p", it's now going to start interacting with any element that's a child of "p" (which will probably be on all pages on your website so hard to predict and check what will happen).

BEM and Tailwind don't get rid of the behaviour of the color being applied to child elements, but they at least force you to isolate these kinds of style changes to the component level (vs sitewide), which is what improves maintainability.


Okay, we are on the same page then -- sorry. Yep, I generally agree with this.


I can see how it causes problems, but I don't really see any alternatives for certain properties. Would you want to set the color property on every element that contains text? Or do something with the universal selector?


CSS does actually call it inheritance[0], but it's commonly mixed up with the cascade. Inheritance applies to certain properties, so that when they are not specified on an element, an element inherits the value of the parent.

The cascade[1] determines how rules from multiple sources are merged.

[0]: https://developer.mozilla.org/en-US/docs/Web/CSS/Inheritance [1]: https://developer.mozilla.org/en-US/docs/Web/CSS/Cascade


CSS calls something completely different "inheritance". There is no class inheritance in CSS, because CSS classes are not OOP classes.


It’s not only inheritance, it’s also lack of encapsulation and of separation of concerns, which leads to attack vectors like: https://news.ycombinator.com/item?id=39928558


The cascade was born from a time when the hope was to apply your own styles independently of the site's style, without overriding theirs completely. I think the standard languished and fell out of favor, but that was the hope.

I'm having trouble finding the original video, but Jacob Thornton (@fat) did an insightful talk on the whole affair called "Cascading Shit Show".


Nobody calls it this, because it is not. CSS classes have nothing to do with OOP other than the word "class".


Obligatory: Tailwind


I tried to avoid mentioning that, but Tailwind is showcasing how to write CSS using composition instead of inheritance. But it can't overcome the knee-jerk reaction from many CSS developers who don't see the problems that come with CSS cascade/inheritance. I think this is mostly because leaning on cascading rules is seen as a best practice, and most aren't going to question best practices much.


I'll never understand why a lot of CSS devs are opposed to it. I can understand the initial gut reaction, but I've been writing CSS for 14 years and I love Tailwind.

The cascade is a wonderful idea that has unfortunately not played out well in practice. Anyone who thinks it's just a skill issue is deluding themselves. In all my years, I've never seen CSS that (a) leverages the cascade, and (b) scales elegantly. It all breaks down past a certain point.

Tailwind has its flaws, but it doesn't have the scaling problem.


Mainly because nobody is a "CSS Dev". If someone's entire job was developing CSS, you might expect that they would be willing to learn one new slightly different way of doing it. But in reality the pushback comes from full stack developers who already have a million other things to worry about, such that relearning all the basic CSS they already know, to achieve benefits that are rather minuscule in the grand scheme of things, is a low priority. I personally am working on a Tailwind product that is fairly small (~10k lines, perhaps), and the Tailwind isn't an annoyance per se, but it's definitely less ergonomically friendly than the well-engineered enterprise application I was working on before, which was 100x the size and used exclusively pure CSS.

Personally, my dream would be if inline styles supported the full CSS gamut of pseudo selectors and the child element selector. Then you'd have the admitted benefits of not needing to synchronize the two files, along with the benefit of not needing to relearn all of CSS.

Edit: it's funny, in a way, that all these developers complaining about how CSS "doesn't scale" are likely writing their Tailwind in a large-scale application styled entirely via CSS. https://github.com/search?q=repo%3Amicrosoft%2Fvscode++langu...


On the contrary, Tailwind's ergonomics are what attract me to it. With the autocomplete features via the VSCode intellisense plugin, I'm able to create UIs at a pretty extraordinary pace.

As a trivial example, let's imagine we need to apply a border radius to an element. Without Tailwind, it looks like this:

  1. I find the element I need to style
  2. I look at which class it's using
  3. I navigate to my CSS file
  4. I scroll down until I find the selector
  5. I type "border-r", then tab on the auto-complete to fill in "border-radius"
  6. I type the colon character, then space
  7. I think about what unit is appropriate - rems, ems, pixels, percentage
  8. I think about what value is needed for the design
  9. I also look around to see if this style is used elsewhere
  10. If the style is used elsewhere, I think about whether I need to refactor
  11. I type in the desired value
  12. I type a semicolon to mark the end of the statement
  13. I type cmd+s to save the css file
Here's the same example, with Tailwind:

  1. I find the element I need to style
  2. I type `round` and wait for the autocomplete to present my options
  3. I use my arrow key to select which one I want
  4. I type enter to add the desired tailwind class
  5. I type cmd+s to save the html file
The Tailwind interaction path takes less than half of the concrete steps to complete. But it's even more dramatic than that, because several of the steps taken in the first example require enough thought that it breaks your workflow and takes you out of your flow state. Then you have to get back into that flow state to keep working. But this keeps happening, so you're constantly stopping and restarting. With Tailwind, I tend to find myself staying in that flow state, because as I demonstrated above, there's very little getting in my way.


Fully agree with this. The regular arguments against Tailwind like "it's just inline styles", "learn CSS properly", "it looks ugly" and "normal CSS is easy" say nothing about how fast the Tailwind approach lets you make edits and stay focused in comparison.

Normal CSS is usually worse than this too: e.g. you hit save and your edit doesn't change anything, so you have to use the web inspector to hunt down which class is overriding your style, then weigh up options for how you're going to refactor while jumping between multiple files. It's exhausting when you're trying to focus on styling.


I also assumed that a class already existed for it. Because otherwise you have to think about whether you use a class or an id or an element selector, you have to think about what the class name is going to be, which file it should go into, etc etc. What I presented was absolute best case scenario lol.


Only if you're refusing to consider inline styles. Which is an odd decision to make if we're comparing to Tailwind.

That said, I agree that this doesn't work for pseudo selectors, very unfortunately, and I wish it would.


I actually just downloaded the VS Code extension earlier today as a result of this discussion, perhaps that will change my opinion. Because for me the two flows would be:

   1. I find the element I need to style
   2. I add/edit the inline style={{}} attribute I have for it by typing `sty<TAB>{borrad<TAB>`
   3. I add a value
VS:

   1. I find the element I need to style
   2. I add/edit the className attribute
   3. I pull up the tailwind documentation to find how to type the CSS I already have memorized in their DSL (I know some Tailwind by heart, but wayyyyy more CSS)
   4. I wait for it to load
   5. I scroll down to find the version of the class name that I need
   6. I go back to the editor and add the class.
Also a very common flow for me is to edit the CSS directly in the browser, it's a much faster devloop than the fastest live reload server. In that case it's far easier to just copy from the `changes` view into the CSS than go through and remap everything from CSS into Tailwind.


That's fair - styling directly in the browser does indeed cut down several of the steps I mentioned, or at least it condenses it to one step at the end. I do think that installing the extension changes the experience entirely, because then you don't need to reference the docs.


You can also edit CSS in the browser and have it automatically sync to your source code without having to do any manual copying, look up "CSS mirror editing"


I've never experienced that actually working, despite several attempts. Or when it does work, the overhead makes it not worth it.


Inline styles don't work with a strict CSP. Have you ever had to work with such a restriction?


A CSP can be configured to disallow external stylesheets too. I don't see how that's particularly relevant. But obviously if for some reason a CSP was configured as such, and I had no power to change it, I'd work around it.


Tailwind is not a panacea; it has its own problems which hold back adoption, "kneejerk reactions" notwithstanding.


Explain?


Inheritance is a local maximum. When the hierarchy of classes to consider is small, it often “seems to fit”. It allows the programmer to progress quickly and with low effort, as a lot of the code-sharing behavior is provided by the inheritance mechanism.

Trouble often comes down the line. We keep adding classes, and soon we find that the hierarchy is no longer “shaped like a tree”. A soccer ball is a spherical object but also a piece of sports equipment and a bouncing object, and there’s no way to organize those into branches.

Our reality isn’t “tree-like”, except when we simplify it extensively.


That’s true of all knowledge organizational structures though. Even with composition, you end up with hierarchies that later you might turn on their heads, and also, those structures might end up being cyclical.

Paradigm shifts happen and shake up even loosely organized structures like human perception, and most people’s perceptions aren’t trees.


Hence traits.


My impression is that, in many common programming languages, inheritance is the easiest way to share code. Sharing code in another way would require some more thought and sometimes more code, so the lazy programmer takes inheritance.

Traits as a general concept would be very useful in many programming languages, but only some (like Scala) have proper support for it. In principle a Java interface with default methods (implemented methods) is also something like a trait, but very limited compared to traits in Scala. I have no statistics, but my impression is that inheritance is much less used in Scala, because the language has an easy to use alternative.
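
As a rough sketch of what that looks like in Java (the `Greeter` example below is made up for illustration), a default method lets implementors share behaviour without a common base class, though unlike a Scala trait it cannot carry state:

  // An interface with a default method acts a bit like a (stateless) trait.
  interface Greeter {
      String name();                  // abstract: each implementor supplies this

      default String greet() {        // shared, overridable implementation
          return "Hello, " + name() + "!";
      }
  }

  class ConsoleUser implements Greeter {
      @Override
      public String name() { return "console user"; }
      // greet() comes along for free: new ConsoleUser().greet() == "Hello, console user!"
  }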

Another example is Go, where structs with their associated methods can be embedded, which is exactly composition supported by the language. Since Go doesn't support inheritance, programmers obviously need to use this approach, so you would never see inheritance in Go programs.

So, my conclusion is that the usage of inheritance depends on what the language supports and how easy it is to use.



Rust also has traits, and no struct inheritance. I think it works quite well.


Traits and contracts or interfaces, but even still inheritance has its merits. Being fast and easy is a benefit, and I find defensive programming in extensible systems can benefit from a foundation of expecting inheritance.


What do you mean by this sort of defensiveness? Can you give an example?


For example, in a plugin system I can say you must inherit from BasePlugin and enforce that in code; BasePlugin can then have some protected functions that I can disallow from being overridden.

It's good for frameworks and extensible systems where your code needs to guarantee certain things will happen when running third party code.
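
A minimal sketch of that kind of defensive base class in Java (all names below are invented; this is essentially the template method pattern):

  // The framework owns this class; third-party plugins must extend it.
  public abstract class BasePlugin {
      // The hook plugin authors are expected to override.
      protected abstract void execute();

      // Guaranteed behaviour: final, so no subclass can skip the bookkeeping.
      public final void run() {
          beforeRun();              // e.g. auditing, sandboxing, metrics
          try {
              execute();            // third-party code runs here
          } finally {
              afterRun();           // cleanup always happens
          }
      }

      private void beforeRun() { /* framework bookkeeping */ }
      private void afterRun()  { /* framework cleanup */ }
  }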


It seems like protocols solve 90% of the problems that inheritance does, but with 10% of the headaches. Instead of trying to ensure that two types can both be passed to a function that only knows about the parent type, just have a parameter that says "Whatever's passed has to conform to this, I don't care what it is otherwise."


“Protocols” is Apple-specific (or Objective-C/Swift-specific) terminology. They correspond to types 1 and 2 of inheritance that TFA mentions.



Protocols as a term have other prior art. Which shouldn’t be surprising because a protocol is also descriptive of interface boundaries between software products (such as, but not limited to, network protocols).


The term is unintuitive to me in that usage, because to me a protocol implies an exchange or a sequence of steps between two or more parties, such as in network or cryptographic protocol or a diplomatic protocol. Interfaces, however, only specify one endpoint of an interchange, in an inherently asymmetric way, and they primarily specify operation signatures, where there are often few constraints on the possible sequence of operations.

An interface can specify or be part of a protocol as a special case, but to me the term doesn’t match the general case.


I don’t share your distinction, but it does make me curious: how would an interface representing a state machine fit into your mental model?


An interface representing a state machine usually does not directly represent the concrete state machine, but presents an interface to interact with the state machine. E.g. for a parser state machine, submit the next token that will advance the state machine; or an operation to read out the current state, or an operation to obtain a list of the currently possible transitions.

Of course it's not impossible for the preconditions for each operation of the interface to map exactly to a protocol so that the operations permitted in each state correspond 1:1 to the state transitions of a protocol. But that is not the general case, and not how people usually think about interfaces.

I don't know if you are familiar with the term interface description language (IDL), but generally IDLs are not suitable for specifying a protocol, in the sense of specifying (among other things) the protocol's state machine. It would be misleading to call them protocol description languages.

There are other ways in which "interface" and "protocol" differ. For example, we call HTTP a protocol, and things like SOAP and REST and CORBA. But we call a concrete REST API an interface, not a protocol, and we call what is defined by a WSDL an interface, not a protocol. A protocol can be used to access an interface (I use SOAP to access a monitoring API). An interface may support different protocols (I can operate different protocols through the Unix socket interface). We talk about Linux kernel interfaces, not about Linux kernel protocols (and if we would, we would mean something different by that). The two words are not interchangeable.

An interface describes the boundary between two things. A protocol describes a behavior between multiple entities. By the implementation of an interface we mean the implementation of one specific side, the one that exposes the interface, not the one that uses the interface. An interface can exist even if no one uses it (only one side exists). The implementation of a protocol, on the other hand, is split between both sides. You can't have only a one-sided protocol. If no one uses the protocol, neither side exists, so to speak.


Nice in principle, but doesn't it make it quite difficult to analyze such code? E.g. finding places that use a particular function of a protocol.

I recall Pyright for Python is not able to find such call sites. But perhaps there are better implementations of the concept?


A lot of OOP syntax has that exact problem because you don't use a fully-qualified method name. That's actually an upside of janky C pseudo-OOP patterns. You have to call a function; that function has one name and one way of being called. So you can usually find all the call sites with grep (modulo macro expansions or something).

I could imagine a syntax that made you specify the protocol at the call site, e.g. for an object Foo that conforms to protocol Bar with method baz, you'd have to write obj->Bar->baz() as opposed to just obj->baz(). Or alternatively, Foo->Bar->baz(obj) if you want to make any given call site discoverable.


Indeed in OOP (or in any kind of indirect call mechanisms, including ones using function pointers in C, right?) you would not find which particular method is being called, but if you are using a nominative type system with OOP, you would at least find which method of which interface is being implemented—and similarly from the call site you would find which interface is being called. So you get a set of functions that may be called, and you would get the precise list of sites they may be called from. And this you get "for free".

In your proposal, by stating the name of the interface you are calling, you do get to see what is being called, and in some protocol implementations, such as the one in Python/MyPy, you can name the protocol of the object you are invoking a method on.

But how about the implementation sites? I was under the impression with protocol based system you would implement a protocol by just having methods of certain names. So the set of functions that may be called would be the set of classes that happen to have a certain set of methods that conform to a protocol, which might or might not be made with the idea that they should be used in the protocol's context. You would need to rely on documentation, which isn't checked by the compiler.

Personally I think protocols are too ad-hoc for general program structure purposes, though they can have their uses. I consider both "class implements these abstract named interfaces" and traits (as in Rust) better.


I'm guessing the closest equivalent in languages like Java or C# are interfaces, right? Underutilized IMO.


Interfaces in C# are used way more often than abstract classes and inheritance. It is also becoming more important as there is more code written in Rust style with structs implementing interfaces for zero-cost dispatch through generic constraints e.g.

    void Handle<T>(T value) where T : IFeature1, IFeature2...


In C++, you can use C++20 concepts[0].

[0] -- https://en.cppreference.com/w/cpp/language/constraints


Code example?


There is no valid use case I've ever seen where inheritance was better than composition (that does not mean a use case does not exist, but evidence is mounting against it). I would say it's used because every language made a poor decision by building that in as the way to encapsulate reusable logic. i.e., a mistake.

Some APIs may expose themselves as requiring you extend a base class, and in that case you might as well do what the library author said. But overall? Yes, some things can work well with inheritance (eg. classical UI frameworks), but these are often equally good with a composition based API.

One job I was at I added a lint rule to flag `extends` as an error. You would need to add a comment to justify why you were using inheritance - it went over more favourably than I expected. I think I would suggest this for any new projects using a language that supports inheritance - discourage it with automation, but allow really good reasoning to prevail in review. It is the worse solution at least 99% of the time because it requires you predict a future that is painful to change.


I have just the one use case: to escape out of framework/annotation hell.

Say I have a well-designed, self-contained BusinessService class which is perfectly unit-testable.

Then someone comes along and sprinkles some @Magic bullshit on it. Now its true behaviour won't be exercised in plain junit - you need custom test runners with class-path scanning and late-binding configuration ("The gorilla holding the banana and the rest of the jungle" in Joe Armstrong speak).

If you want to reclaim your unit-testable class, you can divide it into two: SpringBusinessService extends BusinessService. Move all the Spring dependencies out of the latter into the former.

Now you get the best of both worlds - clean, unit-testable logic, and a service that Spring can obfuscate with middleware to produce 200-line-long stacktraces after a lengthy startup.
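
A minimal sketch of that split (class names invented; the annotations are standard Spring ones, standing in for the "@Magic"):

  // BusinessService.java: no framework imports, testable with plain JUnit.
  public class BusinessService {
      public int quote(int basePrice) {
          return basePrice + 42;    // the actual business logic
      }
  }

  // SpringBusinessService.java: all the framework wiring lives up here.
  @org.springframework.stereotype.Service
  @org.springframework.transaction.annotation.Transactional
  public class SpringBusinessService extends BusinessService {
      // Injected dependencies, aspects, proxies, etc. attach to this layer only.
  }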


I can think of one. For Java at least.

Imagine that I have a class that has 15 methods -- 14 are a perfect fit as is, but 1 of them needs to be completely overwritten.

If I were to do inheritance, I would do `extends` on the class, and then `@Override` the offending method. Problem solved.

If I were to do composition, I could use the Delegation Pattern from the Gang of Four, but that would require me to write 14 do-nothing methods just to be able to override the one.

In general, composition has more use than inheritance. However, inheritance has a non-zero number of use cases where it is a better choice than composition.
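
To make the trade-off concrete (a hypothetical `Widget` with only 3 of the 15 methods shown), the two options look roughly like this:

  // Stand-in for the existing class.
  class Widget {
      public String render()           { return "default rendering"; }
      public void resize(int w, int h) { /* ... */ }
      public void show()               { /* ... */ }
      // ...12 more methods
  }

  // Option 1 (inheritance): override just the one offending method.
  class TweakedWidget extends Widget {
      @Override
      public String render() { return "custom rendering"; }
  }

  // Option 2 (delegation): one real override plus a hand-written
  // forwarding method for every method you *don't* change.
  class TweakedWidgetByDelegation {
      private final Widget inner = new Widget();

      public String render()           { return "custom rendering"; }
      public void resize(int w, int h) { inner.resize(w, h); }   // boilerplate
      public void show()               { inner.show(); }         // boilerplate
      // ...12 more forwarders
  }

Languages with built-in delegation support (Kotlin's `by` keyword, for instance) shrink that boilerplate considerably, but plain Java doesn't.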


It really depends on what the code variation is, and it can often be a code smell, but I like just adding a flag to the method for simpler cases of this. No complexity or overhead, easy to reason and debug, and you can retain the current method signature with a default-case call to the new method, so no other refactoring required.

It can be inelegant, clunky, and terrible practice if you're doing two totally different things in the method, but I've had a lot of cases recently where it was a nice simple solution that made sense in the context we used it.

For me, one case where I like that we use simple OO is a 'DeletedMyClass' extending 'MyClass' - it does everything the same, just has a DeletedDateTime property. A few pages will do an 'is' check (we use C#) and render certain things differently, but apart from that, they both get passed around interchangeably. You could argue you could add an IsDeleted flag, but the default case is that most things we deal with are not deleted, and they shouldn't have to even know about that concept. So to me this is a very useful minimal case of OO.
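
A rough Java analog of that pattern (the original is C#; `instanceof` pattern matching, Java 16+, plays the role of the `is` check, and the names are invented):

  import java.time.LocalDateTime;

  class Document {
      final String title;
      Document(String title) { this.title = title; }
  }

  // Same object, plus one extra piece of state; code that doesn't care
  // about deletion never has to know the concept exists.
  class DeletedDocument extends Document {
      final LocalDateTime deletedAt;
      DeletedDocument(String title, LocalDateTime deletedAt) {
          super(title);
          this.deletedAt = deletedAt;
      }
  }

  class Page {
      static String render(Document doc) {
          if (doc instanceof DeletedDocument deleted) {       // the "is" check
              return doc.title + " (deleted " + deleted.deletedAt + ")";
          }
          return doc.title;
      }
  }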


That's fair. Though, my point stands. Once that number starts to grow, that's when inheritance starts to shine. Again, only in Java as is.

And yes, excellent wording with the idea of "Should not even know about the concept". I think that is another good place where inheritance shines.


Composition implies a level of complexity. There may be a near infinite number of permutations of things that can be combined. If in reality there are only a fixed set of variations, inheritance may model your system in a tighter way. The benefit can be simplicity.


> Composition implies a level of complexity

How?


If I combine A and B to make C, I have more than A and B. Now I have A and B and C and the relationship of A to C and B to C. Composition usually creates complexity. Interfaces (via aspects, traits, whatever) tend to simplify things, when the underlying complexity is quantitatively extensive enough that the complexity saved by using the interface outweighs the cost of creating it.

At least, that's how I think of it.


Why are you making this distinction between composition and interfaces? Just because you're using interfaces does not mean you aren't doing composition.

> Interfaces (via aspects, aspects, traits, whatever) tend to simplify, when the complexity is quantitatively extensive enough to outweigh the cost of the interface complexity creation + the complexity saved by using the interface.

That sentence is more complicated than anything that has been done with composition.


Inheritance is more limited in what you can do and implies a finite set of behavior states. With composition you have a system that can, in theory, have an untestable number of permutations of behavior strung together.


Not really? You can use interfaces with composition that keep you from plugging anything into anything. What you're describing sounds more like functional programming while ignoring the names of functions.


For a simple example, I might have class A that contains an instance of interface/base type B and another field of interface/type C. If B and C each have 4 implementations that means class A can be composed 4 * 4 = 16 ways. But in practice there may only be matching types of B and C that line up such that there's really only going to be 4 distinct permutations of A. I could model this more easily with inheritance.


This has not been a practical problem in my experience. Because composition makes it easier to tell what something does, people are not putting the incorrect implementations of things together. They're essentially just functions with contracts. Very easy to read.

With inheritance, you can't know what something does until you know what everything it inherits does. The little method you override could change the entire execution flow of the code. It's extra steps with no benefits.


Inheritance is unidirectional composition. I'm combining the parent with the child.

Making the relationship simple to create and hard to complicate is why it endures, afaik.


The article has a nice history review of inheritance, but it would have been useful for there to be some concrete examples.

The worst example of inheritance I've ever found is in java Minecraft's mobs[0]. It is very deep in some places and has numerous examples of sub classes not fully implementing their parent interfaces.

Example 1: Donkey[1]

Donkey > Chested_Horse > Abstract Horse > Animal > Ageable Mob > Pathfinder Mob > Mob > Living Entity > Entity

Example 2: Blaze[2]

Blaze has a bit for 'Is on fire', but how is this any different from Mob 'Is aggressive'? Blaze > Monster > Pathfinder Mob > Mob > Living Entity > Entity

[0] https://wiki.vg/Entity_metadata#Entity

[1] https://wiki.vg/Entity_metadata#Donkey

[2] https://wiki.vg/Entity_metadata#Blaze


Both of those look like perfect cases for composition instead of inheritance.


Anyone not using components for game objects is insane. How do you add a fire-breathing horse without that?


Yes, now take it further. Anyone not using components for all domain modeling is insane. Why does everyone assume this wonderful practice only applies to games?


Any recommendations for where we can learn about this approach, or recommended search terms? I'm struggling to find anything through Google that isn't about the Unity component system in specific. Is it this? [1]

https://en.wikipedia.org/wiki/Entity_component_system


Yeah, that is it. I found this book to be pretty good https://gameprogrammingpatterns.com/ and heard good things about https://www.gameenginebook.com/


Because the negative consequences are more pronounced in games. You can more easily get away with not doing it in other domains, so there it may be silly, but not "insane".


why is monster vs ageable mob a good cleavage? This is odd


Jesus, that's insane! The rule of inheritance is no more than 3 levels deep. 3!!


Smalltalk having a more or less revolutionary developer experience was likely quite important. A graphical interface to your system where you can traverse a tree and inspect _everything_, and more importantly, change it.

In Smalltalks as I know them, which is sporadically and shallowly, inheritance seems to be used as a factoring tool, when you break down functions into smaller functions there's a structure in which to place them with the implicit incentive to put them as high up as possible.

To me it seems to work pretty well, but it's also a rather weird environment where even classes themselves are objects.

In Java one can get around inheritance by using injection into an attribute in a wrapper class that extends or changes an object API. This leads to a similar decoupling as more functional composition, but you pay by having more objects swimming around in the VM during execution. Extending on the class level (i.e. in declarations of classes, interfaces, traits and the like) instead of the object level might have effects on performance and resource management.

Usually that likely doesn't matter, but it might. In Java-like languages I tend to use inheritance as a quick way to abstract over or extend functionality in a library class, I get the public methods 'for free' in my own class without having to write the wrappers myself. And that's usually where I stop, one level of inheritance, since Java isn't as inspectable and moldable as, say, Pharo, and doesn't give the same runtime ergonomics with large class/object trees.


huh :o) the wisdom since late 90's has been to avoid it like a plague, at least in c++ circles. for example, see numerous articles in 'guru-of-the-week' by herb-sutter.

the same wisdom can be applied elsewhere as well, there is no need to keep discovering the same truths in different contexts again, and again.


>huh :o) the wisdom since late 90's has been to avoid it like a plague

I am sure that hundreds of thousands of students have at some point heard that this hierarchical organization of data is how you should model a system.


Yeah in C++ especially nowadays using compile-time traits is much, much more idiomatic.


I only tend to see inheritance in engines and libraries, where it makes sense to create more generic, reusable and composable code, since most of the functionality in these is defined by technical people.

It makes no sense to use inheritance in the business layer, because a single feature request can make a lot of the carefully crafted abstractions obsolete.


I've never seen it put quite like this, but it feels right and is refreshingly concrete. Trust the abstractions you can actually design/control for, treat all the other ones as suspect. One still needs the wisdom to tell the difference, but at least focusing on "feature request" focuses the mind. This is at least simple even if it is not "easy".

An argument against OOP where you first need to define/explain differences between composition/inheritance, Liskov Sub, compare and contrast with traits, etc is not really that effective when trying to mentor junior folks. If they understood or were interested in such things then they probably wouldn't need such guidance.


I stopped using inheritance when Swift protocols and TypeScript interfaces came along. Although it sometimes means you need to do nasty things with generics, it beats the fragility of inheritance. With ObjC, I eventually came to hate updating superclasses, because I knew it would lead to unexpected behavior in subclasses.


Despite misgivings about the language as a whole, I think that Go does interfaces extremely well, and made the right choice to discard "implementation inheritance".

That said, in my time with obj-c I never really encountered the problem of fragile base-classes - protocols and categories gave us just about everything we needed. (the foundation classes use implementation inheritance extensively, but we don't typically modify those too often :) )


I had the same experience. I’m a few years into using rust now - which doesn’t implement inheritance at all and I don’t miss it. In typescript I barely use classes at all. I prefer an interface paired with a constructor function.

I think it’s almost always better to first think of data as a struct, with associated code that acts on that struct. As the old saying goes, show me your data structures, and I won’t usually need to see your code.


Rust implements interface inheritance out of the box via traits, but you can also replicate something very much like implementation inheritance via phantom types/the typestate pattern. It just finds very little use in general, because it's readily apparent just how clunky that whole arrangement is.


Inheritance on interfaces is fantastic. Inheritance on implementations is fraught with peril. Java developers figured this out like 10 years ago.


I don't think inheritance is bad at all. It is very often the easiest way by far to model a problem. Sure, it's not perfect, but I think it is wildly overhated by a vocal minority.


Have you ever seen wildly over-architected OO code where everything seems to be an abstract class and it seems impossible to find out where stuff actually happens?

Inheritance is like a lot of specialised tools - it can be useful in some situations, but I think those are rarer than supporters might think. Completely refusing to use inheritance and always using inheritance both seem like extreme views that should be avoided.


You haven't lived life to the fullest until you've had to debug an issue in a 12-layer-of-inheritance class with the original call ping-ponging a couple dozen times across the layers and overrides everywhere.

I guess one could say this would be workable with proper tools, but the IDEs just aren't there. Move up a level in inheritance, ctrl-click on a call? It was overridden somewhere in the hierarchy, but the IDE will send you to the parent-class definition regardless.


The person who wrote that would have made your life miserable if they had architected it 5 other different ways as well.


Disagree. Some things are just easier to get right than inheritance.


I have actually seen one (1) beautiful C++ codebase with very good use of OOP and multiple inheritance, and its author (I was his intern) painstakingly taught me about inheritance and its good uses. He used Design and Evolution of C++ (!) as a bludgeon. It was 2006.


Feels like that time my physics PhD student girlfriend asked me "hey you're a programmer, right" and I was one until I discovered ROOT and suddenly I lost taste for life and anything related to Computers.


Did you become a reclusive mathematician like Perelman or Grothendieck??


More like changed girlfriends...


Sure, but I have also seen C code where random structs with function pointers are passed to god-knows-where, and I will take inheritance over that any time of the day.


This is every OO code I've ever seen, honestly


I've seen that same problem sans a deep inheritance hierarchy.


I think that’s just what you’re used to. Since I switched from Java to Go 7 years ago, I don’t think I’ve missed inheritance a single time. I haven’t needed to model anything with inheritance once. There are definitely things I’ve missed from Java, but not inheritance.


A lot of people who only know inheritance and can't understand why a lot of us slag on it so much don't understand how inheritance conflates implementation and interface. Polymorphism based on interface is a fundamental tool of programming. The way inheritance also drags along an implementation turns out not to be, and to probably be more trouble than it's worth. There are other ways of bringing along implementation, including the basic function, and those work better.

When inheritance is your only tool for interface, I absolutely agree that it looks fundamental, but the fundamentalness is coming from the interface side. Once you separate out interface from implementation, it turns out I have little to no use for the implementation portion.

I have also been programming in Go for a long time now. I sketched out an inheritance system once for an exercise. It's doable, though it's a terrible pain in the ass to use. I've kept it in my back pocket in case it is ever the right solution to a problem even so... but it never has been. And that's not because of any irrational hate for the pattern itself. I've imported other foreign patterns sometimes; I've got a few sum types, even though Go isn't very good at it, because it was still the best solution, and I've got a couple of bits of code that are basically dynamically typed, again because it was the best solution, so I'm not against importing a foreign paradigm if it is the correct local solution. Inheritance just... isn't that useful. The other tools that are available are sufficient, and not just begrudgingly sufficient or "I'm insisting they're sufficient to win the argument even though I know they really aren't"... they really are sufficient. Functions are powerful things, as the functional programming community (it's right there in the name) well knows.


> I don't think inheritance is bad at all. It is very often the easiest way by far to model a problem.

I think the point of the article is that in some (very popular) languages, inheritance comes with a lot of baggage - precisely because classes in those languages support 3 different use cases. It's not saying that your usage is bad, but that your language didn't provide you with the best tool(s).

A lot of how people use classes can be solved via ADTs - which are much simpler to grok.

A lot of how people use classes can be solved via namespaces/modules, but not all languages have good support for them - so they use classes.

Etc.


I have found that single inheritance can be useful, but it is very restrictive.

More than once, I have found an inheritance hierarchy that made sense when first created no longer models the problem well. It becomes hard to change it at that point.

Frequently I find I really want to mix in cross cutting concerns across different hierarchies. This isn't really a surprise; most problems do not naturally decompose to only a single view of things.

I don't mind abstract base classes containing common functionality, so it's useful there, but mix ins would be better to be honest.


I have never really seen it work. To me it seems it can work if and only if you are doing serious waterfall projects, with a tightly bounded scope.

If you have to model your data exactly once I believe it can work. But if you are doing any kind of agile, or even somewhat flexible waterfall, you will find out that at some point none of your data models work.


Depends on how you use it, but in Java I prefer to use interfaces and no subclassing. I find classes with abstract methods hard to reason about, and much prefer to leverage Functions/lambdas where possible.


While I don't use the ecosystem, I like the way Go approaches it with interfaces by being minimal, expressive, and flexible. If something works like something else according to a minimal specification, it's the same. There's no sealing off things or top-down mandates that require changes to other people's code. Rust traits aren't exactly the same because they require changes to everyone's code to follow an exact model that cannot be retrofitted to existing code without additional work. One downside (it may not be the case today) with the Go model is that requiring a type to fulfill one or more interfaces requires additional hoops such as no-op init() interface assertions.


It seems like there's a huge industry of people critiquing Java and OOP but misdiagnosing the problems as technical rather than sociological.

The immense pain and suffering that is related to Java and its ecosystem is because of the terrible organizational conditions that tend to co-occur with usage of Java. Big slow companies, long boring meetings, arguments about design patterns, "architects" who haven't written code in ten years, etc etc. Java correlates with, but does not cause, those conditions.

Another way of saying it: if Java was good enough for Notch to use to write Minecraft, it's good enough for the vast majority of business purposes.


> there's a huge industry of people critiquing Java and OOP but misdiagnosing the problems as technical rather than sociological.

I totally agree. I have worked on applications that would have collapsed under their own weight if it wasn't for Java. The strong type system, stable runtime and standard library APIs, as well as great tooling, allow code to live far longer than it would in some other language ecosystems.

However, don't make the mistake of dismissing all of the criticism. The truth is often nuanced, and OOP/Java does have some serious downsides and footguns. There is room for both criticism and praise.

Ultimately, great systems aren't created by throwing a language and problems at a collection of random people. Great systems happen when good decisions are consistently made over time, and those are context dependent.


Java complexity is also an artifact of Java being used in large complex systems. People are really complaining about how hard programming is. There are many types of large scale systems where I would only code it using the JVM, everything else is a nightmare. Big tech companies largely feel the same


There is an all-time great quote from Bjarne Stroustrup when asked about how many people hated C++

> There are two types of programming languages: The ones people complain about and the ones nobody uses.


So many of these articles are written by people with no CS background who had a couple of short stints programming and then overnight pivoted to being expert consultants. They don't really have experience building or maintaining large complex systems. I am really curious who hires them. Most of them will have significantly less experience than the senior members of a successful team, and often less relevant education.


No one is more confident in their abilities than a boot camp grad.


> There are many types of large scale systems where I would only code it using the JVM, everything else is a nightmare.

I’ve seen more than one lifetime’s worth of nightmare code written in Java. I agree with the GP commenter - I don’t think the problem is Java (the language) so much as the culture surrounding it. The Java programmers I’ve worked with (all smart people) seem addicted to solving all problems by writing more Java. More interfaces (most of which just have a single implementor). More classes. More files. More abstractions. You pay a massive tax for that any time you edit your software - since inevitably you're going to end up unpicking hundreds of lines of code that could have been two if statements.

I’ve never seen this abstraction vomit disease be quite this bad in software written in other languages. You can find a bit of it in C#, C++, go and Python. But not as bad as Java. In typescript and rust, most of the code I work with focuses a lot more on directly solving the actual problem at hand.

I don’t think the problem is the language itself. Java is a fine language. But for some reason, it’s become a magnet for a certain kind of developer that I frankly never want to work with. Developers who would never use 5 lines of code when 100 would do. Developers who abstract first and understand the problem they’re solving later. It’s disgusting.


> I’ve never seen this abstraction vomit disease be quite this bad in software written in other languages. You can find a bit of it in C#, C++, go and Python. But not as bad as Java

That’s just your personal experience with these languages. I would even go as far as to say that there is some survivorship bias — certain projects might have survived in java that would have collapsed under their own weight on some other platform, making the “complex nightmare projects” overrepresented in java, even though it is due to a positive thing.


Sure. Unless you have data, we’re only talking about our personal experience.

But there’s plenty of large, apparently healthy, projects written in other languages. The Linux kernel (C). Chrome (C++). Unreal engine (C++). Postgres. And so on.

My claim (and criticism of the culture around java) is that I’ve seen a lot of large Java projects at medium to large companies which didn’t need to be large projects at all. And wouldn’t be if they were built in a different style. The claim that building verbosely is good is almost never justified with data, and always seems deeply suspect to me. My instincts often say “I could rewrite this monolith with 2 smart developers in 3 weeks”. After 30+ years writing software, I’ve learned to trust my instincts.


I have often observed that Java is great because it solves organizational problems. Strong and static types, interfaces and most of all, javadoc, were miraculous for conveying intent to integrators and maintainers. Even seemingly insignificant things like a naming convention for packages was extremely useful. A lot of the features trumpeted in Java when it was new, like abstract classes or checked exceptions, were adopted religiously at first but fell out of favor a long time ago. Interfaces and interface inheritance is extremely useful.


Yeah, there's nothing wrong with Java the language. Java the community is what sucks. You had a few bloggers get popular and then a bunch of devs trying to make names for themselves would ape what they said, compound a few years and you have the mess people like to complain about.

Happens with a lot of language communities. C#, good grief, not much better than Java. Go(lang) community's favorite two words are "idiomatic Go". You can't read any post without seeing those two words. Go has like what, seven keywords. Let it go, man.


It’s only implementation inheritance that is questionable, and the reason is that it leads to mutual dependencies on implementation details between superclass and subclass. Reasoning about correctness becomes difficult, and the superclass is very constrained in what implementation details it can change without potentially breaking some subclass.

Bertrand Meyer’s open-closed principle can be read that way: The superclass is closed to modifications. But that goes against the principle of decoupling interface from implementation: The superclass interface is now stuck with the existing superclass implementation.


Here is my take on it.

At one point "Object Oriented" became the "Blockchain" (or, now, Gen AI) of its time. You had to be "object oriented" in order to be taken seriously. This applied to everything. Even finished software products were described as "built using object oriented".

The esoteric concept of inheritance became popular after that. At some point it became so popular that people figured out that it is not really a good thing.


In the 1990s, it coincided with the proliferation of GUIs and their respective programming interfaces. Most frameworks use hierarchies like Object->View->Control->Button->ImageButton. Then people decided that this is the way for modeling abstract problems that don't have to deal with visual or real-world entities whatsoever.


And you would get nice automatic completion when doing object name, dot, and waiting for the IDE to list all possible methods. This was exploratory programming, the copilot of its time.


Also a semi-religion: I'm always seeing people say that before OOP everything was terrible, and every good thing is due to OOP (eg structs, methods, polymorphism)


Pragmatists used inheritance because it gave a quick benefit now, and the longer-term costs were ignored. Some pragmatists became successful because they moved quickly, so the "Programmers from the Church of Purity" used inheritance because successful companies used inheritance. When inheritance was no longer the quickest way to move quickly, the pragmatists moved on. The Church of Purity now bangs on about Functional Programming in the same way for the same reasons.


> Pragmatists used inheritance

Because at some point mainstream OO languages made inheritance easy and composition harder, nothing more.

Composition with Java used to be verbose. Inheritance declaration was simply a single keyword.

If Java had "mixins" from the start, people would have used composition way more.


I saw a lot more composition than inheritance in C++ and C#. I think inheritance and polymorphism have their place. Like anything, they can be misused.


> The Church of Purity now bangs on about Functional Programming in the same way for the same reasons.

FP in the industry is a niche thing, so the two phenomena are not really comparable.


FP rather slows things down though.


What specifically slows it down in your context?


Doing every simple thing that used to be done in "normal procedural" way in functional paradigm instead when using something like Scala or fp-ts in TypeScript.

It causes engineers to completely change their mental model of how the code runs, which I still have trouble imagining correctly at an intuitive level, and I see it with other developers as well. A lot of energy goes into trying to understand how to do a simple action. It is much harder to read the code and have a correct mental model as well.


lol it's the opposite? map, flatmap, fold etc are very clearly defined operations with very clear use cases and rules. loops are not, you can do whatever you want (often mutating the underlying, rug pulling you every iteration)

not being familiar with fp doesn't mean it's objectively worse


Would you say it's a retraining cost, or would you imagine still having this issue many years from now, assuming you keep working on it?


It likely also depends on the person, so I couldn't speak on behalf of everyone. I have been exposed to it to some extent (though not as the majority of my work) for over 5 years, and it is still intuitively challenging for me. So a similar task would take longer to code, and existing code would take longer to understand. Significantly more effort would go to that.

I feel like I spend more time and energy on how to get something done functionally rather than focusing on what the correct business logic should be.


The pendulum swings: once it is this way, then another way.


Do they? I rarely see people using it directly and I think this is fine


Same here. For me, implementation inheritance is a code smell. But I rarely see it.


Most of us have no contact with Java. But Java offers little else than inheritance to use to organize a system, so Java coders use it for everything. They are not exactly wrong, except in using Java at all. But sometimes it is all that is allowed.


This is much less true for modern Java than it used to be, with records, pattern matching, sealed classes, etc.

The trick though is you actually have to use modern Java, which means you need to both be on the right version of Java, and have developers that understand the value/power of these newer constructs. Which is surprisingly rare for a programmer that self-identifies as a Java programmer.
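
For what it's worth, a small sketch of that newer style (sealed interfaces, records, and switch pattern matching; roughly Java 21, with invented names):

  // A closed set of variants: effectively an ADT, not an open hierarchy.
  sealed interface Shape permits Circle, Rectangle {}

  record Circle(double radius) implements Shape {}
  record Rectangle(double width, double height) implements Shape {}

  class Geometry {
      static double area(Shape shape) {
          // Exhaustive over the sealed hierarchy: no default branch needed,
          // and adding a new variant becomes a compile error here.
          return switch (shape) {
              case Circle c    -> Math.PI * c.radius() * c.radius();
              case Rectangle r -> r.width() * r.height();
          };
      }
  }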


java offers composition, like pretty much all programming languages

if you have a person class and you have students and teachers

the correct thing to do is NOT to make student and teacher inherit from person

it's to make student and teacher have a person attribute + the other attributes that constitute a student and teacher respectively
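
A minimal sketch of that composition version (field names are just illustrative):

  class Person {
      final String name;
      Person(String name) { this.name = name; }
  }

  // "Has-a" rather than "is-a": each role carries a Person plus its own data,
  // and the same Person can be a student and a teacher at once.
  class Student {
      final Person person;
      final String studentId;
      Student(Person person, String studentId) {
          this.person = person;
          this.studentId = studentId;
      }
  }

  class Teacher {
      final Person person;
      final String department;
      Teacher(Person person, String department) {
          this.person = person;
          this.department = department;
      }
  }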


"correct" is certainly not the right way to put that. Inheritance and composition here are both fully valid methods for modeling the relationship, and the decision to use either should be dependent on how the models are being used and the expectation for future extension.


By "correct" here you only mean fashionable. Either approach works. Each has its merits and its costs.

Any big enough system will have parts that are most sensibly built object-oriented, and other parts that are more reasonably functional, plus anything else you can think of.


In my mind, the more generic rule underneath these rules is that you want to be able to comprehend what a unit is doing without jumping between a massive number of files or contexts. Inheritance hierarchies with shared state and overrides mean you need to understand multiple files and their interactions, whereas interfaces and composition tend to allow for fairly understandable units with clear interactions.

The big problem these deep inheritance hierarchies cause is that mediocre programmers often think they're writing good code (often with a heavy focus on maximal reuse) while the interfaces and clarity are terrible.


I prefer languages with full OOP, ideally with multiple dynamic dispatch. What I'm not fond of are languages like Java that overuse OOP design patterns, force you into them, or require lots of OOP boilerplate. If you have a class whose sole instance serves as a factory for other objects, you know you're on the wrong path. But I've never seen any coherent and sound arguments against OOP in general. Nobody forces you to use it, and, for example, it would be much easier to develop and deal with GUI frameworks in Go if it had classes with inheritance. As a rule of thumb, if for some reason your language drives you to emulate dynamic dispatch with explicit type switches on one or even multiple arguments, then it probably should have inheritance and dynamic method dispatch.


> But I've never seen any coherent and sound arguments against OOP in general.

How about arguments for OOP? Especially for things which can't be done better elsewhere.


As others have said, GUIs are the place where OOP shines. There are no things OOP does that can't be done elsewhere, what happens in practice is that programmers invent their own ad hoc object systems and start programming in an object-passing style (see e.g. Gtk).


What is an object-passing style?


Whatever it takes to emulate object-oriented programming in languages without objects and classes. Creating structs as objects and passing them as first arguments. But I also meant it to include using closures, function pointers, and type-switches to emulate dynamic dispatch, and creating inheritance hierarchies by embedding base-class objects into structs in whatever way your language supports. Sometimes people also resort to code generation. Even explicit dynamic dispatch may be implemented, e.g. by storing class hierarchies explicitly.

The hallmark of it is that "objects" are passed as first arguments to functions in languages that don't support syntactic sugar for methods.


There’s a rule-of-thumb I’ve found about OOP where you want to use composition when extending data, and you want to use inheritance when extending behaviors.

The zoo animal metaphor that you often see used in teaching inheritance (lions are animals, Simba is a lion, all animals have a length, but lions also have claws, etc.) is a bad perspective, and you should never use inheritance to model a problem like that in the real world.

The only time I’ve used inheritance, has been on toy problems that didn’t need it. Not to say it doesn’t have its place but when I encounter it, it’s time consuming to peel back all the layers or deal with idioms that don’t quite fit a problem that a developer has enforced through inheritance.


> The only time I’ve used inheritance, has been on toy problems that didn’t need it

See https://news.ycombinator.com/item?id=39999644

It's very useful when a base class provides an incomplete implementation, and the child class completes the implementation.

Sometimes I use it when testing, where the child class alters some behavior in a way that mocking can't do correctly.

In C# / Java, inheritance is used for setting up filters for exceptions


Inheritance is really just a public interface, a protected interface, and automatic delegation of those interfaces to the base class.

The problems with inheritance are:

- people shove code in the base class to dedup it without thinking about design

- people add public and protected methods without thinking about interface design

- the names of the base class and the public and protected interfaces are exactly the same thing

The last point matters because if you have a public FooBase class, you can wind up with a bunch of List<FooBase> containers that are coupled to the base class, and external methods that take FooBase parameters, along with a bunch of concrete instances which depend on the FooBase protected interface and code implementation. This creates the brittle base class problem.

If instead you had an IFoo interface and only ever used List<IFoo> instead of FooBase anywhere then you could always define FooBasev2 which implemented IFoo as well and FooBase and FooBasev2 can coexist in your codebase without having to break any external consumers (open-closed principle in practice).
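
A minimal Java sketch of that idea (the describe/prefix methods are invented placeholders):

    import java.util.List;

    interface IFoo {
        String describe();
    }

    // base classes stay an internal implementation detail...
    abstract class FooBase implements IFoo {
        protected String prefix() { return "foo: "; }
    }

    // ...so a second base can appear later without breaking anyone
    abstract class FooBaseV2 implements IFoo {
        protected String prefix() { return "[foo] "; }
    }

    class ConcreteA extends FooBase {
        @Override public String describe() { return prefix() + "A"; }
    }

    class ConcreteB extends FooBaseV2 {
        @Override public String describe() { return prefix() + "B"; }
    }

    class Consumers {
        // no List<FooBase> anywhere: consumers only ever see IFoo
        static void printAll(List<IFoo> foos) {
            foos.forEach(f -> System.out.println(f.describe()));
        }
    }
Consumers.printAll(List.of(new ConcreteA(), new ConcreteB())) works because nothing outside the hierarchy ever names FooBase, so FooBaseV2 can be introduced without touching any consumer.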

If base classes were only allowed to be used in inheritance and couldn't be parameters, generics, etc then that would force users to create base classes and public interfaces in pairs and would decouple their names and by writing the public interface down as its own thing developers would be more likely to focus on that design.

And really, inheritance is just syntax sugar around having a component (the base class) with automatic delegation of the public interface and protected interface to it in the inheriting object -- with so little typing that it becomes easy not to think about what you're doing, while reusing the same name for three different things, creating tight coupling, and losing the ability to dependency-inject different base classes at runtime. If languages had better terse syntax for delegation then composition would get a lot easier to use, like inheritance is (which is something that Go oddly enough gets more-or-less correct; I don't know why they didn't mandate piles of boilerplate for delegation like they did for error checking).


> If base classes were only allowed to be used in inheritance and couldn't be parameters, generics, etc then that would force users to create base classes and public interfaces in pairs

Which, if it's a common thing, suggests that there should be syntax like:

  interface IFoo from FooBase;
That makes an interface from the public members of a class.

Or, perhaps even better, type systems should enable a distinction between:

  class ConcreteFoo inherits FooBase;   // normal implementation inheritance
  class ConcreteFoo implements FooBase; // is required to (compiler checked) present the public interface of FooBase, and is type-compatible, but doesn't inherit implementation

So concrete classes can also serve as interfaces, and purely abstract classes are identical to interfaces.


> If base classes were only allowed to be used in inheritance and couldn't be parameters, generics, etc then that would force users to create base classes and public interfaces in pairs and would decouple their names and by writing the public interface down as its own thing developers would be more likely to focus on that design.

I agree with and like this principle; but then what would be the difference between a base class and a mixin class?


TIL that in Python you cannot use a mixin class to provide implementation for an abstract base class that defines the interface.

In other words, in the inheritance tree, the class providing the common implementation must sit between the interface class and the concrete classes.


It's worth mentioning regarding the IFoo example that in a single project you can always just find/replace FooBase with IFoo or even FooBaseV2.

I feel like a lot of people blow this up like it's some huge problem when in reality this is a pretty trivial refactor.


Not if you're writing a library/framework that other people use though.

If all you do is consume other people's library/frameworks then yeah, nothing really matters, do whatever you want, refactor at will.


Yeah, that's kind of my point. I agree that api design is important if you're publishing a library, even for internal use - that's why I said "in a single project".

It seems to me that a lot of people obsess a bit too much about this stuff in places where it doesn't matter.


I think it is pretty clear that the reason inheritance is so prevalent is because it is a superficially good sounding idea, which again and again is being put in front of people.

I am sure that many, many people have had the experience of a professor telling them a nice-sounding story about purely hierarchical data, and went on thinking that this is the way data should be modeled.

Thankfully I believe that nowadays many people have understood what works and what doesn't work in OOP and we can actually try to get away from that model of data.


This is exactly what I've seen, and I think a lot of it came from Java, and C++ before it. Ironically, C++ is now a fantastic language to use while avoiding inheritance.

In the late 90s, 2000s and maybe even now, people see inheritance and think that it makes sense. They make the vehicle, the car, the sedan and then get stuck, when all they really needed was an x and y point and a velocity.


I don't know about other colleges, but when I teach my students inheritance, I tell them right away that it's a pretty bad idea. And that they will learn better ways in the next semester.


I can't speak for what has happened since I left university, but when I was a student there you got to hear about cars, which are also vehicles, dogs, which are also animals, and similar stuff. Which has "limited" applicability to how inheritance works in real software systems.


If.......... Why......... Everyone....

click click click click click click click click click click click click click click click click click click click click click bait


As a programmer with likely less experience than most of these commenters, the main question that always feels under addressed in these kinds of posts is that of code deduplication.

Most specifically, often when encountering a situation in which I have two slightly different classes/types that need to be polymorphic with each other, all the standard non-inheritance based approaches seem to require a lot of outright identical code implementing a shared interface, a bunch of boilerplate composition proxy methods that just point to methods in a composed class, or weird and obscure language features that never really feel like the "intended" approach.

Typescript especially seems to demand really awkward and obtuse implementations of basic mixins or abstract classes and the like, I always feel like I'm missing the more "intended" approach whenever I try to share code between similar classes.

Often discussions around this are awash with talk of traits and delegates and other cool features that seem to only exist in a handful of languages, and never the ones I happen to be required to use at that given moment.


Typescript is best when you don't use `class` or `this` at all, but instead return object literals of static closures. In other words, more like this:

    function makeCounter(initial = 0) {
        let count = initial;
        function get() { return count }
        function inc() { count += 1 }
        function dec() { count -= 1 }
        return {get, inc, dec}
    }

    // works just like a normal object
    const ctr1 = makeCounter();
    ctr1.inc()

    // but this works too. try doing this with a class instance.
    const {get, inc} = makeCounter()
    inc()
This is the typical style of most VueJS code (though it would replace count with a ref and probably call the function `useCounter`). TypeScript is smart enough that it can infer interfaces and even classes from such literals should you go that way, full support for substitutability checking and all.


It's always possible to 'deconstruct' objects with methods into just dumb structs with functions acting on them. This is the direction Julia takes for instance.

One advantage of this is that it exposes that 'inheriting' a method is really just applying the same base function to all classes that support a particular interface. You can override a method by specialising the function for a subinterface. This interface can just be a marker that says 'this struct has these fields, and semantically is an IMyInterface'.

Of course object oriented programming languages tend to encourage private and protected properties and such, which force all of this to take place inside the class. At first I thought that was the best way to avoid a mess, but it prevents you from doing this, possibly leading to more code duplication. And after some more experience with Python there's something to be said for Python's approach of just using naming conventions to point out when to be careful.


Kotlin has a few nice patterns here. It allows inheritance but only if you mark your class as open (it's closed by default). This prevents people inheriting from things that weren't designed for that. It also has extension functions, which allow you to add functions and properties to types without having to inherit from them. This is very nice for fixing things that come with Java libraries to be a bit more Kotlin friendly. Spring offers a lot of Kotlin extension functions for its Java internals out of the box. Another nice pattern is interface delegation, where you can implement an interface and delegate to another object that implements that interface:

    class Foo(private val myMap: Map<String,String>=mapOf()): Map<String,String> by myMap
This creates a Foo class that is a Map that delegates to the myMap property under the hood. So you side step a lot of the issues with inheritance; like exposing the internal state of myMap. But you can still override and extend the behavior. Replacing inheritance with delegation is built into the language.

The net result of this is that inheritance is rarely needed/used in Kotlin. It's there if you really need it but usually there are better ways.

Scala and other languages of course have similar/more powerful constructs. But Kotlin is a nice upgrade over Java's everything is non final by default (or generally defaulting to the wrong thing in almost any situation). Go has a nice pattern based on duck typing where instead of declaring interfaces, you can just treat something that implements all the right functions as if it implements a type. Rust similarly has some nice mechanisms here. All these are post-Java languages that de-emphasize inheritance.


One of the ways that I think Kotlin and Rust are objectively better than Java and C++ is in that they have saner defaults than their predecessors (like open/final and mut/const).

I've lost count of how many talks I've watched by Kate Gregory where she advocates for people tagging everything they can as const in their C++, but asking people to eat their veggies never works.


Sane defaults are the bomb. Defused bomb.


Same with Swift


IMHO inheritance works and only works for graphics related programming, e.g. GUI and games. Visual objects can be abstracted in the manner of inheritance.


Inheritance doesn't work well for games, that's why so many games take a component based approach like Unity, or full blown ECS.


The implementation of inheritance (more specifically, polymorphism and dynamic dispatch via vtables) is a problem in games, because it adds an extra layer of indirection, and screws with cache locality. But the semantics of inheritance (X ISA Y) still apply. In an ECS, the implementation is different (struct-of-arrays) but you can still think of an entity as "inheriting from" the various components it's built from.


That's not really accurate. Component based systems are designed specifically to address situations that traditional inheritance doesn't handle well.

Examples are usually something like having a base entity, a player that inherits from entity and an enemy that inherits from entity. Then you have a magic user and a barbarian inherit from enemy but now you also want your player to be able to use magic. Traditional OOP doesn't make it easy to share the implementation.

Even with composition and interfaces you still have problems with most traditional OOP languages when you want to do things like change the set of components of an entity dynamically at runtime (player gains or loses the ability to use magic during the game).

"Is a" is often not the relationship you want to model. A player and an NPC both have the "has a" relationship to an inventory, not the "is a" for example.


> you can still think of an entity as "inheriting from" the various components its built from.

If it's based on components, wouldn't that mean you would think of it as being... _composed_? I typically don't hear "it's made of many components" and think of inheritance


Well, it does work well for the engine part of game engines, and graphics related code in particular as the GP says - it's just gameplay code where OO falls a bit flat.


Not really, the data oriented design movement in games programming that eschews OOP is driven by performance concerns of traditional OOP code on areas like rendering.

As a graphics programmer for many years I can also say that OOP is not really a good fit design wise for modern rendering engines, it mostly just gets in the way.


I only really know of Bevy that uses ECS for the rendering backend, but iirc even that doesn't really use entities and components, but rather its resource system, such that rendering commands are stored in the ECS "world" but as globally unique resources, rather than entities and components. But, my knowledge doesn't extend to AAA engines, perhaps things are done more extensively there.


Unless it's a role-playing game.


I don't follow - role playing games are often most suited to ECS, because you want the kind of emergent gameplay that independent gameplay systems acting on component composition gives you - and the kind of flexibility it allows in composing weapons, spells, buffs/debuffs, and so on, vs a strictly hierarchical OOP approach.


This has more or less been my experience too. Things like UI toolkits or game engine primitives are often solved nicely with inheritance.

I've been doing a lot with LeafletJS lately, and it has some light inheritance around its drawing primitives (things like boxes, circles and lines you draw on top of maps). It works well.

For 3D engines you still need to be quite careful with where you apply it though. It's possible to get into a huge mess if you use it too much.


Especially in games, inheritance doesn't work all that great and was replaced by composition via components a long time ago, where each component implements one specific aspect or feature of a game object and can be (more or less) freely combined with other components.


The picolisp database convinced me that it can also work for databases, where it's used for data storage, querying, and as a GUI backend.


That's strange because I was about to comment the exact opposite. ECS is for games, inheritance is for almost everything else


For games, having your character as a type of a general class fits well. Having enemies be a type of a class with a base of abilities can fit well too. Weapons, etc. all model well. Gameplay. Scenes.


and it became popular at just about the same time as GUI programming, coincidence?


My take is that after something becomes sufficiently popular and mainstream, for a lot of people who want to be seen as innovative, the only way is to lash out against it. See all the articles on how hellishly bad agile, php or javascript is. A lot of criticism is valid of course, but that is irrelevant in the larger scheme of things - there is a reason they became what they are today.


There are a lot of things in the world today that are very bad and very popular; that's not a reason to stop hating them.


My impression is the opposite, especially from when I used to work as a full-time Java dev. I often asked myself:

Is there a place outside of GUI programming where inheritance is used in a non-habitual and useful way? I can't think of many.

More often than not you have a final class that you are supposed to use, and the petrified hierarchy above it is not of much use.


Perhaps having some abstract class with a real and a mock implementation inheriting from it, which can then be injected into your code for testing other components, is one example.
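
A minimal Java sketch of that, with hypothetical names (Clock, SessionTimer): the code under test depends only on the abstract type, and the test injects the fake.

    abstract class Clock {
        abstract long nowMillis();
    }

    class SystemClock extends Clock {
        @Override long nowMillis() { return System.currentTimeMillis(); }
    }

    // the "mock": deterministic, so tests are repeatable
    class FixedClock extends Clock {
        private final long fixed;
        FixedClock(long fixed) { this.fixed = fixed; }
        @Override long nowMillis() { return fixed; }
    }

    class SessionTimer {
        private final Clock clock;
        SessionTimer(Clock clock) { this.clock = clock; }

        boolean expired(long startedAt, long ttlMillis) {
            return clock.nowMillis() - startedAt > ttlMillis;
        }
    }
Tests construct new SessionTimer(new FixedClock(...)) while production uses new SystemClock(); an interface would do the same job unless the base class also carries shared implementation.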


I actually think with generics without type erasure and duck typing a language doesn't need inheritance. And when applications are built around passing data between services OOP tends to be less useful. (eggs are passed to the frying pan service and bread goes to the toaster service)


I don't think that inheritance is bad per se. AbstractWindowButtonManagerFactory-type inheritance is bad. I think the reason is that people are just not good at constructing hierarchical relationships between abstract concepts. And once you construct such an elaborate hierarchy, it becomes very hard to correct things after the fact. So you end up with a mess as people add more functionality on top of an already ill-conceived structure.


Hmmm. I have exactly zero occurrences of implementation inheritance in my code base.


Exactly. "Why does everyone use it?" is a false premise. I don't use it.


I only use inheritance for what the author calls ontological inheritance. For example, I find it useful to represent expressions and statements. I prefer composition if I'm trying to reuse data structures.


But ontological inheritance is the worst kind. Because your ontology is wrong. Similar things are kinda different. Different things are kinda similar. Your ontology is based on the linguistic remnants of taxonomic garbage from hundreds of years ago. There's no shelf.


Not the way I use it, which is very limited. If the similar things are kinda different I use something else, as well as design things to expose the differences where required.


OOP is easy to understand and explain and it makes sense initially, plus a lot of people specialize in languages/frameworks that enforce it, so it becomes easy to get trapped in the OOP world. It also usually comes with DDD (which is a solution to a problem you shouldn't have had in the first place), which is a way to limit the damage of OOP to contextual areas. I also think the blanket statement (OOP is bad) does not necessarily apply everywhere. A good mix of composability and a bit of inheritance where it makes sense is the best way imo.


I disagree. Starting by separating class and object has never been helpful when I've tried to teach computer programming, while functions seem to have been more intuitive to those people.

After a while you get to closures, which are pretty much objects without classes, and then factory functions that produce closures and there you have something like a class as well. If I were to design a course I'd probably follow this flow to get to 'OOP' in that sense.


But you go from the bottom up here, I don't like to describe it that way.

I prefer to say that we naturally classify objects around us using fuzzy boundaries like "a house", "a cat", which don't exactly mean anything: they are templates we use later to actually generate an actual house, or an actual cat (on a piece of paper as a drawing for instance).

The world being separated between our internal classes and the interactive instances of them, we can code that way too: we look for what we actually need in a concept, name it and define only what we need, then instantiate several concrete actors interacting together. Each actor is complete, well defined, with contracts for interactions.

You can then start building on top of this more complex systems, talking in English and defining rationally the world of your little model.

If you start talking about functions, you're basically aliasing a memory address for a bunch of code: you'll goto that function address, with some parameters at some other address, do a bunch of things and put the result in another address for the next function to process. You're building at best a suite of pipelines, which ends up being a little bit too technical and static for my taste.

Another way to defend modelling software with classes and objects, in my view, without trying to talk about functions, is the blank page effect: imagine arriving at random at some assignment. You understand the business problem vaguely, and now you need to code a solution. You have 6 months, it will generate a few million a year, and it will eventually require a team of 10 people to test, maintain, operate and debug. Do you start by defining functions generating functions with callback parameters in Python, or do you launch IntelliJ and do a Java class model? I'd be terrified modelling complex problems with just functional pipelines; I think it's just way more obvious to talk about objects at all times if you're going to do something that is not just processing inputs into well-defined outputs.


I prefer to start with an interactive programming environment, like a REPL, to keep the feedback loop tight and code short. If I needed to talk for ten minutes about templates and instances and how you need to have a little meeting with yourself and do modeling I'd have lost their interest right away. These words mean nothing to a newbie, unless they're some kind of philosophy nerd so I can hook into ancient greek ideas about Forms or something.

'Here is a way to do simple math, here is a way to glue text to text, now that has grown a bit, you can shorten it for repetitions by giving it a name like this', and so on.

And to me, starting out functionally is easy. Data is pretty much always a given, it's very rare in practice that I first need to model speculatively and then generate data ex nihilo unless I'm creating mocks. Usually there is a data source, a file, a network API, whatever, and then I start building against that. Small, simple things at first, like mapping all the input to an output interface, then add transformations and output for these, and so on.

In general I spend more time thinking about the shape of data I already have than architecting code or inventing models.


I think you identified the selling point of OOP: it's easy to reason about, and logically it makes sense to humans because we do think in terms of objects (the user, the order, the item, etc.) and responsibility segregation (User->login(), Order->fulfil(), etc). To your example of just launching IntelliJ and using Java (or Ruby or whatever) to build something: yeah, it works, it works fast, and it's easy to add new features, and it's also easy to end up with a >3k LOC User class in Python that no one can touch (true story).


Not even wrong.

Inheritance isn't necessarily bad.

If anything is common, someone will try to seem like a brilliant iconoclast by writing an essay against it. The particular essay in this case was from a Pythonista (not surprising, given Python's weird love/hate with objects):

https://solovyov.net/blog/2020/inheritance/

And not everyone uses inheritance.

Maybe you have another question, like "why would someone use inheritance?"


IDK, I recently did a big presentation on the topic in my company's Python community and there were tons of people that were genuinely surprised it's not great. And people in my department love throwing in random classes that inherit from everything in the universe for no reason at all... in Python. Like, not even Java or C#. And coming in as the lone newcomer, it's not going to be easy to convince everyone who has been there for years that the accepted common sense ain't great.


Some modern programming languages do not even have classes to inherit from. Instead they have concepts like traits and structs, which decouple structure and behavior.


My situation is different than a lot of others'--I work mostly with code-bases I 100% control--but I just don't use inheritance all that often, and if I do, it's never deeper than one or two levels. Now, I do most of my programming in Python and in JavaScript, and I do like using Python's ABCs to specify interfaces that I expect to be implemented, but that's mostly the end of it.


I remember the original reason for composition over inheritance to be very specific to Java: you can only inherit one class, but compose your class from multiple interfaces, each of which can have default implemented methods.
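
For example, a small sketch (the interface names are made up): behavior gets mixed in from several interfaces via default methods, with no base class involved.

    interface HasName {
        String name();
        default String greeting() { return "Hello, " + name(); }
    }

    interface Auditable {
        default String auditTag() { return getClass().getSimpleName() + "@" + Integer.toHexString(hashCode()); }
    }

    // picks up greeting() and auditTag() without extending anything
    class Customer implements HasName, Auditable {
        private final String name;
        Customer(String name) { this.name = name; }
        @Override public String name() { return name; }
    }
new Customer("Ada").greeting() and .auditTag() both work, and Customer still has its single superclass slot free.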

In python you've got polymorphism, mixins etc, so the reason for this suggestion doesn't really apply.

Django heavily uses inheritance for the controllers for example - and it works very well.


I don't really use mixins either, unless it's required by the libraries I'm using. Partly this is habit on my part (and I came to Python from Java). But there is a lot of functionality in python I tend to make only light use of. Like using decorators. I understand them, but they feel like magic, and imo, make code harder to understand if used too much. Same thing with inheritance and multiple inheritance.


It can be useful in the situation where you want to override functionality in a class without making the functions public.

If you're composing a class out of multiple chunks of functionality then all of the bits you want to call need to be public. If you're subclassing and overriding then the bits you're overriding can be internal, and not part of your public API.
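
A small Java sketch of that, with hypothetical names: the overridable hook stays protected, so it never becomes part of the public API.

    abstract class ReportGenerator {
        // the public surface stays small and final
        public final String generate() {
            return "== report ==\n" + body();
        }

        // the piece meant to be overridden is protected, not public
        protected abstract String body();
    }

    class SalesReport extends ReportGenerator {
        @Override protected String body() { return "sales figures go here"; }
    }
With composition, the injected collaborator's equivalent of body() would have to be callable from outside the class, i.e. public or at least package-visible.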


Depends what language you're using and what polymorphism features it has...

C - You don't use inheritance because it literally has none, but you can hand-roll dynamic dispatch (interface-style abstract base classes)

Rust - Also doesn't have it and people seem perfectly happy with generics, sum-types, and `dyn Trait`


I look mostly at JS and some Rust codebases and almost nobody uses inheritance!

I believe it doesn't make much sense: you can use different patterns to get the benefits without creating a complicated type hierarchy where state and functions are mixed across different files.

Wherever I would have used inheritance, I use the builder pattern instead.


I quite like inheritance personally, assuming that it's done reasonably well and isn't a confusing mess.


Inheritance is not so bad for native UI stuff where it makes sense for the class libraries.

But in work code I don't see it much. If it is used, it is lightweight. Often it's used incorrectly for splitting code into different files rather than for actual inheritance (the giveaway is only one derived class!)


I find that it helps most if we make algorithms the centerpiece of the design and then use inheritance to facilitate coding and make everything succinct. The days of unwieldy inheritance structures coming out of big design patterns are clearly over.


Nerds make things complicated, no matter what. It is their fundamental impulse that cannot be tamed, like the scorpion who must sting the frog that carries it across the river.


Many of the headaches associated with Rust come from an inability to reference from owned to owner. Affine types have much more impact on design than inheritance vs. composition.


Meyer also thought (for a while) that inheritance was more powerful than genericity. Later he admitted his "proof" of that was weak.


I think there's a tension in this discussion between the tinkerer, blogger and hobbyist side of the field, and anonymous 9-to-5 workhorse line-of-business programming.

The core reason everyone uses it seems to me to be essentially the same reason the former groups hate it: it's what everyone else does; it's an incredibly square and "day-job"-esque approach to one's tooling and architecture; it often reeks of bureaucracy, compartmentalization, and organizational superstructures unrelated to the technical work, dictated primarily by managerial fiefdoms and the need to coordinate organizationally; and it's a well-defined space with extremely predictable solutions and headaches, and therefore has very little excitement or novelty to offer.


Because it's not actually so bad ;)


Lots of design patterns have their uses, but can suffer when applied too overzealously.


Like almost anything in IT, if you are good at using a hammer, you treat everything like a nail.


if you're good at using a hammer, the nail will be hammered in perfectly and without faults.

The only problem is if you're bad at using the hammer, and you only know how to use a hammer.


We don't use inheritance anymore. In all the years building small and huge software solutions with small and larger teams, I've never experienced any real scaling, maintenance, or resilience benefit. Even for documentation it is more of a pain for newly onboarded devs than an advantage. Patterns, DRY, DI, IoC, MS, ES, IO are better to build with, with these scopes in mind.


One thing that bothers me is when people refer to languages like Go and Rust as "not object-oriented". Of course they are object oriented; Go and Rust programs consist of struct definitions to which you associate methods, giving rise to individually instantiatable objects with custom behavior, and you use these objects to model entities in your problem domain. This is object-orientation.

Where they differ from OO languages such as Java and Python is of course that they avoid traditional inheritance, offering more restricted inheritance-adjacent features (typeclasses/traits/interfaces and struct embedding).

Like many others I find Go disappointing in many regards. (Although Go's channels, goroutines, select, and implicit interfaces are beautiful). One disappointment is the way that methods are not indented or nested within the struct definition/interface that they are defined on. This is, IMO, a silly fig-leaf that is attempting to make the language look "not OO" in a superficial manner, but all it ends up doing is making it hard to see where your implementation of one interface ends and another starts.


Oh, darling, strap in because the Hacker News catwalk is serving us a spicy mix of opinions on inheritance in programming!

First up, we've got the crowd who treats CSS like it’s the ugly stepchild of inheritance, preaching the gospel of "composition over inheritance" as if it's the latest fashion trend.

Then there's the old guard, clutching their pearls and insisting that inheritance isn't the problem—it's just misunderstood, like a moody teenager.

Cue the functional programming aficionados, sashaying in with their "functions over classes" mantra, ready to throw shade at OOP's entire existence.

And let’s not forget the star of the show, the claim that inheritance is the VIP at the GUI and game development party, although some party poopers argue that the cool kids moved on to ECS systems ages ago.

Meanwhile, the language innovators are flaunting their Kotlin and Swift ensembles, dripping in modern features that promise to make inheritance feel so last season.

In the midst of this fashion war, there's a heated debate on whether we should be dressing our newbie programmers in OOP gowns or functional frocks from day one. And, honey, let’s not even get started on the industry’s trend chasers, who once hailed OOP as the next big thing, only to ghost it faster than you can say "blockchain."

In the end, it’s clear that in the world of programming, inheritance is either a timeless classic or a faux pas waiting to happen, depending on who you ask.


It's funny how easy it is to spot a ChatGPT reply


Do you think it is because it is too verbose? Long winded? A human wouldn't bother to write that much?


For me, it was taking the joke too far. The catwalk comment intro was actually really good, but then it just kept running with it the whole way through. That is not in and of itself a sin, but unless there's a good reason to do so, it dries quickly. You need something to keep it flowing, otherwise it just feels like you really needed to write 8 flamboyant lines mocking HN about inheritance with catwalk references stuck in for all of the lines. Feels forced, almost like you are trying to meet a quota. Which is what makes it feel AI-ish, to me at least.

Definitely one of the better ones though, very impressive. AI really is great at making some very impressive things. But since AI doesn't "get it", a lot of what it produces just feels off or misses the point.


A lot of it goes back to Java. Java has .class files, not .instance files.


I found multiple inheritance kind of fun when I was first exploring python.

Then, madness...


I work on a very large codebase. We use inheritance and composition. I totally agree that inheritance should be used carefully. However, there are times when composition requires a lot more code and downstream maintenance.

Let's say I have a base class A. Let's say I have 70 classes that inherit from A.

These classes must serialize/deserialize from disk.

My choice here is that I can add a new data member to A and then update the read/write method in one place, or I add a new data member to all the classes that are derived from A, meaning that I get to update 70 classes. Seriously? Who would think this is a good idea? You'll have to write 70 times as much code... there will be times when that's definitely a very bad idea.

Maybe don't use inheritance. Then give every class its own "Name" data member like std::string c_nName. Every class can have its own function to set the name, or we could use an unencapsulated free function and do things like in C. Except then the experts will say things like "well, now you're doing C with classes". Then you get a ticket that the user can enter bad names. You can then figure out some way to validate the names, right? That could be a free function, or maybe write some class called NameValidator. Except now the experts say you're writing C++ like Java. Too many classes, too few classes, too many objects, too few objects, too many free functions, too few free functions.

C++ expert of the month may say X, Y, Z, but look at Microsoft's APIs.

Inheritance is everywhere.

Look at any open source C++ project.

Inheritance is everywhere.

Does that mean that you should make something like:

    A
    --B : public A
    ----C : public B
    ------D : public C
    --------E : public D
    ----------F : public E

Or this:

    A  B
    \  /
     C

Not unless you have a damn good reason. But the point is that inheritance is a valid tool, and it can reduce bugs, code duplication, and maintenance burden.

Right tool for the job, yeah? I once read something in the C++ FAQ that said something like "Don't take advice from people who don't understand your business problems."


> Let's stay I have a base class A. Let's say I have 70 classes that inherit from A.

The alternative case is that you have 70 classes that CONTAIN A. You still only have to change A.


Yes, but in my example, in the 70 classes, all the ::Read and ::Write methods must still say:

    a.Read( ... )
    a.Write( ... )
And depending on the use case and design, they also need methods such as:

   A& GetA() { return a; }
   const A& GetA() const { return a; }


The answer is that you retain A as a pure interface and maybe you add a convenience base below A that adds the data member, then you inherit from the convenience base for your 70 leaf classes. You can also use CRTP for the convenience base, if it's appropriate. You can still avoid confusing interface and implementation inheritance.


When I learned programming as a teenager, I was using one of the C++ classics which, of course, taught OOP and Inheritance as a magic silver bullet.

When I then finished the book and its examples, I wanted to do my own exercises and went through about 1-2 very painful years trying to model my things using Inheritance and totally blamed myself for being too stupid to do software development.

Only then I started searching and finding essays and blog posts of people criticizing OOP and esp. Inheritance for exactly the things I was struggling with. This felt like a relief!

My question: Why is Inheritance still taught as a silver bullet? I'm seeing university courses using Inheritance both for domain and infrastructural code reuse at the same time. Esp. in Germany I see a lot of stupid 90s-style "programming" courses and, no shit, matching code bases. It's as if they were living in a gigantic bubble.


Use inheritance as composition with the parent?


It's not bad. People like to feel smug by saying OO is bad and hence inheritance is too, meanwhile using some construct in their FP that is really the same thing.


Can you explain how any construct in FP conflates subtyping and code sharing the way OO inheritance does?


You seem to be confusing different terms. You can do OO without inheritance.

Inheritance is the worst as you grapple with your types.

Just use the strategy pattern instead.


Nowadays I have rarely seen it if ever


Black and white extremist views.

Sometimes inheritance is extremely useful. Sometimes it's harmful. Sometimes it's not the best, but it's the most practical nonetheless.

Language concepts are tools. These people with extreme black and white views live under the delusion that there can be some "Perfect Language which will cause the program to never devolve or accrue technical debt (TM)".

Pure delusion.

Real life is an approximation where you do what is best in practice, not in some delusional daydream of perfection which breaks the moment you try and bring it to reality.


For a really neat implementation of OOP and inheritance, I would recommend checking out Racket's class system (not to be confused with generics). It mostly solves the multiple inheritance problem by mixins and also has a great interface system.


I find Racket's OO stuff to be awkward and cumbersome to use, and rarely a good option. One of these days I'm going to port a CLOS-inspired Scheme OO system like Guile's GOOPS to Racket...


I love the way people take their narrow range of experience and over generalize it to everything. Bonus points for having a blog or maybe a patreon to tell everyone how it really is.

We have a million flavors of the month but in the end it doesn't really matter. Pick a flavor and there will be teams that are successful and teams that are not. No silver bullet will make the non-successful teams magically turn into successful ones.

"I/We failed with <insert language feature>" never generalizes to "<insert language feature>" is bad.

Then the community constantly sits there and acts like a bunch of geniuses because some programming pattern has been discovered.. except it's 50 years old.


> "I/We failed with <insert language feature>" never generalizes to "<insert language feature>" is bad.

I thought the same for decades, then I met Groovy and Grails. Of course, it's not bad in an absolute sense (IMHO that doesn't make any sense), but when some problem can be found only at runtime, when any proper compiler would catch it at compile time, it's hard to argue that it's a good direction. Especially when TypeScript made it quite obvious what's possible with only simple type checking.


> TypeScript made it quite obvious what’s possible only with simple type checking.

Typescript's type system is anything but simple.


Ha ha, yes, happens all the time.

In this case, published in a book 30 years ago, at least:

https://news.ycombinator.com/item?id=40005520


Ahhh good 'ol XP/Gang of 4. Never ceases to be referenced. (for good reason)


Pretty sure it was in Effective Java and Design Patterns in Java 20+ years ago as well since Java is the language that gets picked on so frequently for this stuff.


He he. Java and patterns, Java and frameworks (mumble Spring mumble IoC mumble dependency injection), Java and factories and builders, mumble <buzzword> ...


Speak for yourself, I don't use it.


Boring answer: Same reason we do a lot of stupid things, haha.

“If C is so unsafe and C++ is so unwieldy why does anyone use them?”

Because that’s what they learned, or because that's how the codebase is structured when you get there. When all you have is a bike you’re going to ride that shit everywhere.

You rarely if ever get the chance to start from scratch at work (where most code is written) and your job is to follow the pre-set pattern most of the time unless it becomes completely untenable. Sure, inheritance is worse in pretty much every way, but if everyone you're hiring does it that way, your codebase is built that way, etc, you're going to make the best of it because that's your job.


The main reason I use C++ is because it has the most mature libraries.

I gave Rust a try, and there are lots of nice things about it. But many of its libraries are not very mature.


What libraries do you miss from C++ that you can’t find in rust? ML is the big piece for me - there’s no mature equivalent to CUDA or PyTorch.


Any kind of usable GUI library. Plus I am doing 3d graphics, and rust is not there yet.


All the company-specific libraries we have debugged over the past 30 years.



