Inheritance was invented as a performance hack (catern.com)
237 points by signa11 on April 30, 2021 | 252 comments



Inheritance is static composition. Everything we do statically is for two reasons:

1. Static invariants (not subject to runtime-defined conditions).

2. Performance (AOT compilers know more about the system and can elide more code and devirtualize more calls, etc.).
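The devirtualization point can be sketched concretely. A hypothetical example (the `Shape`/`Circle` names are mine): marking a class `final` guarantees no further overrides exist, so a compiler can turn a virtual call through that static type into a direct, inlinable call.

    #include <cassert>

    struct Shape {
        virtual double area() const = 0;
        virtual ~Shape() = default;
    };

    // 'final' tells the compiler no subclass can override area(), so a call
    // through a Circle& can be devirtualized into a direct call and inlined.
    struct Circle final : Shape {
        double r;
        explicit Circle(double r) : r(r) {}
        double area() const override { return 3.14159 * r * r; }
    };

    double area_of(const Circle& c) {
        return c.area();  // static type is final: no vtable lookup needed
    }

    int main() {
        assert(area_of(Circle{2.0}) == 3.14159 * 4.0);
    }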

I think the characterization of performance features as a "hack" is misleading. The article builds a bit of a strawman, dismissing a performance feature for having been motivated by performance, and then does a 180 in the Conclusion section, saying "it's not bad" to look for performance.

Did we forget inheritance vs. composition is also for performance? Honestly I've not seen such a strong characterization of inheritance as being purely semantic. If anything, today we see inheritance as the thing a junior dev reaches for first, in order to share code, because they don't know better.

Sure, there's the "Cat extends Animal" shtick that refuses to die. But everyone refers to this kind of talk of inheritance ironically these days.


> Honestly I've not seen such a strong characterization of inheritance as being purely semantic.

As articulated elsewhere in the discussion, classical inheritance has a great affinity for the "specialization" design pattern, which is everywhere. Classical inheritance is not just a performance hack, it is semantically compelling, as illustrated by the enduring popularity of "Cat Extends Animal"!

Furthermore, single inheritance has a really elegant canonical implementation: "extending" structs and vtables in subclasses by appending their member variables and new virtual method pointers.
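That canonical implementation can be spelled out by hand (a sketch with illustrative names; real compilers add details like RTTI): the base's fields and vtable slots come first, and a subclass appends its own after them.

    #include <cassert>

    // Hand-rolled single inheritance: a subclass's struct and vtable are the
    // base's layouts with new members and slots appended, so a pointer to a
    // Cat is usable wherever a pointer to an Animal is expected.
    struct AnimalVTable {
        const char* (*speak)(const void* self);
    };
    struct Animal {
        const AnimalVTable* vt;
        int age;
    };

    struct CatVTable {
        AnimalVTable base;  // inherited slots first; new virtual slots appended after
    };
    struct Cat {
        Animal base;        // inherited fields first
        int lives;          // new fields appended after
    };

    const char* cat_speak(const void*) { return "meow"; }
    const CatVTable cat_vt = { { cat_speak } };

    int main() {
        Cat c{ { &cat_vt.base, 3 }, 9 };
        Animal* a = &c.base;                 // the upcast is free: same address
        assert(a->vt->speak(a)[0] == 'm');   // dispatch through the shared slot
        assert((void*)a == (void*)&c);
    }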

Using composition may inoculate against certain brittleness problems in code sharing, but if "has-a" is cumbersome to code up, no amount of scolding is going to change mass behavior.

Emphasizing interface inheritance as an alternative to classical inheritance seems to be more effective at breaking people away from the problems of implementation inheritance, even though interface inheritance is still "is-a" and not "has-a". And "fat pointers" with two words, one for the dispatch table and one for the object, are an elegant canonical implementation for interface inheritance.
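The two-word fat pointer can also be hand-rolled (illustrative names; roughly how Rust trait objects and Go interfaces are represented): the object's own layout carries no vtable, and any type can be viewed through any interface by pairing it with the matching table.

    #include <cassert>

    struct SpeakVTable {
        const char* (*speak)(const void* self);
    };
    struct SpeakRef {
        const SpeakVTable* vt;   // word 1: dispatch table
        const void* obj;         // word 2: the object
    };

    struct Dog { int age; };     // no embedded vtable pointer
    const char* dog_speak(const void*) { return "woof"; }
    const SpeakVTable dog_speak_vt = { dog_speak };

    const char* greet(SpeakRef s) { return s.vt->speak(s.obj); }

    int main() {
        Dog d{4};
        SpeakRef ref{ &dog_speak_vt, &d };
        assert(greet(ref)[0] == 'w');
        assert(sizeof(SpeakRef) == 2 * sizeof(void*));  // two words on common ABIs
    }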

> Sure, there's the "Cat extends Animal" shtick that refuses to die. But everyone refers to this kind of talk of inheritance ironically these days.

This sort of "everyone knows" dissing of inheritance is why I tend to dislike HN discussions of it.

For what it's worth, the rust-by-example for traits uses the Animal "schtick" unironically:

https://doc.rust-lang.org/rust-by-example/trait.html


> Classical inheritance is not just a performance hack, it is semantically compelling

I think often it's compelling for misleading reasons. For example, is a square a rectangle? Mathematically, yes. But in mathematics, we don't mutate values (we would describe an entity's evolution as a series of values).

If you are allowed to mutate the dimensions of a rectangle object, then for a square to be a rectangle, it must set both dimensions when setting either, or otherwise cause an error if its dimensions get out of sync. If you can, say, get the area of a rectangle, Liskov's principle of behavioral subtyping suggests that such a square would break the expectations of a client of rectangles ("I changed the width but now I'm getting the wrong area!"), so a square is not really a rectangle. You may recover behavioral subtyping if you explicitly limit the kinds of reasonable inferences a client can make from a rectangle, but that may limit your use cases for actual rectangles.
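A minimal sketch of that broken expectation (my own illustrative code): Square preserves its invariant by setting both sides at once, which violates a client's assumption that set_width leaves the height alone.

    #include <cassert>

    struct Rectangle {
        virtual void set_width(int w)  { width = w; }
        virtual void set_height(int h) { height = h; }
        int area() const { return width * height; }
        virtual ~Rectangle() = default;
        int width = 0, height = 0;
    };

    struct Square : Rectangle {
        void set_width(int w) override  { width = height = w; }
        void set_height(int h) override { width = height = h; }
    };

    // A client reasoning only about Rectangle:
    int stretched_area(Rectangle& r) {
        r.set_height(2);
        r.set_width(5);    // the client assumes height is still 2
        return r.area();   // and so expects 10
    }

    int main() {
        Rectangle r;
        Square s;
        assert(stretched_area(r) == 10);
        assert(stretched_area(s) == 25);  // Square broke the client's expectation
    }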

I like this phrasing from one of the answers to this SO question [0]:

> The problem is that what is being described is really not a "type" but a cumulative emergent property.

> All you really have is a quadrilateral and that both "squareness" and "rectangleness" are just emergent artifacts derived from properties of the angles and sides.

Put differently, it's very tempting to treat "square" as a specialization of "rectangle", but that has very little to do with their intrinsic definitions and far more to do with what can be observed of them by the program in context.

[0] https://stackoverflow.com/a/1030559/159876


That's cleared up by understanding covariance and contravariance and the fact that mutability is an attribute.


I frequently hear people malign inheritance, and while it can obfuscate code in some circumstances, it can also produce code that is easily and clearly extendable. For example, a class with a static method that uses class properties to control behavior is cleaner than a function factory that takes a config object. Interface inheritance is also quite useful.


> it can also produce code that is easily and clearly extendable

It can also produce code that is not easily and clearly extendable, if the behavior you're trying to extend is buried a layer or two further down the inheritance hierarchy.

Unfortunately, due to the First Law of Kipple, the rate at which you run into this problem is proportional to the age of the code base. And so we grow frustrated with implementation inheritance. Other extensibility mechanisms probably have similar pitfalls, but they haven't been the dominant way of doing things for long enough to accumulate the same volume of clutter.

Don Knuth has mentioned in a few interviews that he isn't so hot on code reuse, and prefers code that's designed to be easy to edit over code that's designed to be easy to extend. I'm starting to see some wisdom in that idea. With the one, being able to keep things tidy is a primary goal. With the other, eventually tidying becomes a frightening enterprise, because you have to avoid upsetting the inheritance hierarchies that are precariously balanced on top of the code you're trying to tidy up.


I think the practice of not using a tool because people could potentially misuse it is misguided in this day and age. Linting and static code analysis are quite capable of enforcing good usage patterns.


My concern is not exactly that it could potentially be misused. It's more that it seems to set up forces that subtly push projects toward becoming resistant to change over time. And, while it's possible for individual programmers to exercise discipline in order to push back against these sorts of forces, at a larger scale the guiding principle seems to be, "water flows downhill." So we should seek to find ways of building things that generally set up forces guiding us toward good design in the long run, without having to drill people on large laundry lists of best practices. That approach only works for as long as the code is owned by a sort of benevolent dictator who is able to, by hook or by crook, keep all their teammates on (their version of) the righteous path. And that approach itself is unstable; it has a tendency to degrade rapidly whenever the leader decides to spend a week at the beach. If they should ever leave the company, it's likely to be lost forever.

I can't say that I know a software development idiom that reliably creates a more stable equilibrium point. But I don't think we'll ever find one unless we're willing to examine the failure modes of existing paradigms.


The problem is "developer" spans a gigantic range of capabilities. Someone who wrote Hello World in Chrome's Console is a developer, and someone writing kernel code professionally is also a developer.

It's hard to define a "good practice" for this wildly heterogenous group. What's good for a beginner (training wheels on a child's bike), is completely counter-productive to a pro (motorcycle sports driver).


I think that, thanks to Java opting to use "implements" for interfaces, people no longer associate "inheritance" with the thing we do when we write a fully abstract class (i.e. an interface) and then "inherit" from this abstraction to implement it. Interfaces are, of course, crucial.

Not sure I understood your example about the static class vs. function factory tbh though.


As a quick example...

  class ServiceWrapper:
      service = ...
      v1 = ...
      v2 = ...

      @staticmethod
      def my_job():
          # a static method reading class attributes that control behavior
          return ServiceWrapper.service.call(ServiceWrapper.v1, ServiceWrapper.v2)

vs.

  def create_service_wrapper(v1=..., v2=..., service=...):
      def f():
          return service.call(v1, v2)
      return f

The class can scale to multiple methods sharing parameters, but the semantics of the factory fall apart if you want to return more than one parameterized function.


That's not inheritance. That's polymorphism.

Java doesn't let you have inheritance without polymorphism, but it is possible, see "private inheritance" in C++.


But I don't think a "fully abstract class" is the same thing as an "interface", at least in Java? As far as I know, you can implement multiple "interfaces" but you can still only "extend" one abstract class, even if it is "fully abstract" in that it has no concrete member variables and all methods are abstract.


In Java it isn't, but this is specific to Java (and clones of Java like C#).

This is because Java has single-inheritance enforcement for classes.

C++ for example has multiple inheritance. So the way you do an interface is you just write an abstract class, then extend it to implement it.
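A sketch of that pattern (the interface and class names are mine): an "interface" in C++ is just a class with only pure virtual methods, and implementing several of them is ordinary multiple inheritance.

    #include <cassert>

    // Pure abstract classes playing the role of interfaces:
    struct Serializable {
        virtual ~Serializable() = default;
        virtual int byte_size() const = 0;
    };
    struct Printable {
        virtual ~Printable() = default;
        virtual const char* name() const = 0;
    };

    // "implements" both, via plain multiple inheritance:
    struct Config : Serializable, Printable {
        int byte_size() const override { return 42; }
        const char* name() const override { return "Config"; }
    };

    int main() {
        Config c;
        Serializable& s = c;   // usable through either interface
        Printable& p = c;
        assert(s.byte_size() == 42);
        assert(p.name()[0] == 'C');
    }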


In my experience, whether you use an abstract class or an interface class as your reference depends very much on the problem you are trying to solve - specifically, is there common code shared across all implementations?


A C++ pure abstract class is equivalent to a Java interface.


I'm sure you mixed that up.

Inheritance is the dynamic variant of composition.

You can dynamically extend objects by adding fields and methods to their class, but not composed objects, because they are statically closed. (Well, in the good languages. JavaScript and friends still allow extending objects, violating their types.)

You can also dynamically mess with the inheritance search path, or fall into the diamond trap. Not with static (or maybe call it lexical) composition.

Composition wastes a lot of space and disallows dynamic extensions, like plugin overrides, test mocking, ... but it is very good with proper structural typing, and it is a good performance hack, because it doesn't need to chase pointers at runtime, nor do the inheritance search and linearization dance.


This writeup is a bit unclear. It actually mentions two problems. The first is functions out-living stack-allocated arguments... this isn't a GC issue but a compiler issue that could be solved by escape analysis (or more powerful variants, like Rust's lifetime analysis). I guess that maybe they're implicitly talking about using a spaghetti stack instead of doing escape/lifetime analysis, and then needing the GC for the stack frame to support interior references. In any case, as they state the first problem, it's a compiler problem, not a GC problem. As they state in the article, inheritance doesn't solve this, and Simula just forbids by-name/by-reference arguments that are allocated in the caller's call frame.

The second problem they describe is linked lists. A Java-like linked list means an extra level of indirection, where the list element holds a reference to the held object. For both performance and space reasons, they wanted to remove that layer of indirection. That meant either having their garbage collector support interior references (references to any field inside the object) or else making a reference to a linked list element be the same as a reference to the held object. They chose the second option, by way of making the element and the held object one and the same, using inheritance. A Simula linked list has as much indirection as a C++ std::list<T>, one fewer indirection than a java LinkedList<T>.

They mention reference counting, but a mark-and-sweep, copying, or tricolor tracing collector would still have this same issue if it didn't support interior references. As sophisticated as Oracle's latest JVM is, I don't believe any of its several garbage collectors support interior references. Physically in the JVM's memory, LinkedList<T> really has an extra layer of indirection as compared to C++'s std::list<T>, and the JVM uses escape/lifetime analysis in order to stack-allocate some objects that would otherwise need to be heap-allocated.


> A Java-like linked list means an extra level of indirection

I guess you mean java.util.LinkedList, which has virtually no use case where it isn't outperformed (in both space and time) by another data structure.

Writing your own linked list with prev (+next) pointers ain't hard, but it's nothing like C++ templates. The removal of a layer of indirection does work [pretty well] in Java as well - like extending AtomicReference (or AtomicInteger) if you need a CAS plus some other data; of course it comes with false sharing.


Yes, I mean java.util.LinkedList. Obviously, one can do manual writing of Java (or even use m4 macros or hijack the C preprocessor... the preprocessor doesn't know anything about C beyond tokenization) for anything C++ templates can do. It's just tedious and potentially error-prone. Here's hoping Java eventually gets reified generics.


Obviously you can do

    public class Node<T> { Node<T> prev, next; /* stuff comes here ... */ }

Then extend it and have the overall code that deals with modifying the data structure, but it's overall ugly. Personally I have written enough data structures where linking nodes is useful. For example: a red/black tree + an insertion-order 'next' pointer makes for a decent implementation of a moving median. Yet that's not what almost any developer would do normally.


But isn't that just what you were suggesting in the GP post in order to get rid of a layer of indirection?


Perhaps, most of the time I have not created a general use data structure - just highly specialized ones. Extending would allow no references/indirection but it's rather ugly and it has to provide another function to instantiate (I'd consider reflection, i.e. Class.newInstance() not to be a good way to handle the case).


> can do manual writing of Java (or even use m4 macros or hijack the C preprocessor... the preprocessor doesn't know anything about C beyond tokenization) for anything C++ templates can do

Sounds broadly true, but I'm not sure it's precisely right. How could std::is_same be implemented?


> How could std::is_same be implemented?

You would perform manual static context-sensitive text expansion, just like the C++ template expander does. In this case, you end up just manually expanding it all the way out to a true or false by hand.

I'm just saying that the C++ template expander only performs static context-sensitive text replacements, ending in valid template-free C++ code. The template engine doesn't introduce any magic that can't be done more tediously in template-free C++ by hand, so you can do the same tricks in Java by hand.

Now, Java doesn't have non-primitive value types, so you'd have to manually flatten some classes in order to get the same layout without the class boundary forcing an indirection. Java also doesn't allow interior references while C++ absolutely does, but that's outside the C++ template expander / template sub-language.
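For the `std::is_same` case raised upthread, a sketch of both the template and its "hand expansion" (using a stand-in name, `my_is_same`, to avoid shadowing the standard one):

    // The two-case pattern match that the template engine automates:
    template <class A, class B> struct my_is_same       { static const bool value = false; };
    template <class T>          struct my_is_same<T, T> { static const bool value = true;  };

    // "Expanding by hand" for one fixed pair of types just means writing the result:
    const bool int_vs_long = false;   // what my_is_same<int, long>::value expands to

    static_assert(my_is_same<int, int>::value, "same type");
    static_assert(!my_is_same<int, long>::value, "distinct types, even if same width");

    int main() {}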


C++ templates cannot be implemented purely by text replacement as most of it is type driven.

It is true that you can do anything that templates do by hand, but that's true for any language construct (at the limit you could write asm).


> C++ templates cannot be implemented purely by text replacement as most of it is type driven.

The static types (and constant values in some cases) are the context. That's the "context-sensitive" part of "context-sensitive expansion".

> It is true that you can do anything that templates do by hand, but that's true for any language construct (at the limit you could write asm).

Yes. That's my point. I was replying to someone saying that they could remove a layer of indirection from a linked list by writing a custom data structure that had the next and previous references, and I was pointing out that they're essentially running C++ template expansion by hand. There's nothing surprising there.


> You would perform manual static context-sensitive text expansion, just like the C++ template expander does.

Sure, if you move all your code into your new language that transpiles to C. But that was my point: the lack of language integration means there's plenty it can't do that C++ templates can do.

An example from [0]:

    std::is_same_v<int, std::int64_t>
In C, I don't know that there's any way to determine whether int and int64_t are the same type.

[0] https://en.cppreference.com/w/cpp/types/is_same


> a compiler issue that could be solved by escape analysis

It could be solved today, not in the sixties when Simula was designed. It's fascinating how much scientific progress we take for granted, assuming that it has been that way forever.

> A Simula linked list has as much indirection as a C++ std::list<T>

Barring the same comment on the progress in generic types, intrusive lists are on a bit of a different scale. The difference between a T which can be part of an intrusive list and a List<T> is in how T gets into a list. For an intrusive list, the links are already in your T: if you have an element and want to add it to the list, it's a bunch of pointer assignments. For a non-intrusive list you have to allocate a new list node, and maybe even move your element there.
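A minimal sketch of the intrusive flavor (the `Task` type is illustrative): insertion is pure pointer surgery, with no node allocation.

    #include <cassert>

    // Intrusive list: the links live inside the element itself.
    struct Task {
        Task* prev = nullptr;
        Task* next = nullptr;   // links are part of the object
        int id;
        explicit Task(int id) : id(id) {}
    };

    struct TaskList {
        Task* head = nullptr;
        void push_front(Task* t) {   // no allocation, just pointer assignments
            t->next = head;
            t->prev = nullptr;
            if (head) head->prev = t;
            head = t;
        }
    };

    int main() {
        Task a{1}, b{2};
        TaskList list;
        list.push_front(&a);
        list.push_front(&b);
        assert(list.head->id == 2 && list.head->next->id == 1);
    }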


The original inspiration for the idea really has nothing whatever to do with its subsequent architectural role, nor with its value as a mechanism.

As a performance optimization, inheritance demonstrated value at a time when performance improvements were at least three orders of magnitude more important than they are today.

People here like to disparage OO and inheritance, but the distaste clearly is just a reaction to Java and its unfortunate lack of any other organizational mechanism, requiring abuse of the one feature for everything.

In C++, it is sometimes directly useful, with or without a vtable, independent of "OO"; and even occasionally is exactly the right thing, explicitly as an OO tool. As one tool in the toolbox, it is simple, understandable, and causes no trouble unless badly abused. It is far from the only tool, so is not often the best choice, but probably most programs of any substance have one or two places where it is better used than not.


Implementation inheritance is indeed problematic, no matter how it's used. The defining feature of implementation inheritance is that any code, relating to any class in the hierarchy, can rely on methods that may then be overridden in unpredictable ways further down in the hierarchy. If you don't need or expect this behavior, you can use composition and delegation instead - which come with a far simpler semantics and do a way better job of preserving modularity.


Implementation inheritance is absolutely no problem within a project or component that entirely controls both base and derived classes, where it amounts to, simply, a notational convenience. It is not, then, a "good OO design"; but there is nothing sacred about OO. Ultimately, any combination of well-specified mechanisms may be correctly used to achieve an elegant design, regardless of formal architectural conventions. The reason a feature was introduced into a language has no necessary connection to reasons for using it.

Implementation inheritance is a problem when crossing organizational boundaries, as reasonable changes upstream can impact correctness downstream.

Ultimately, there is no substitute for taste. Substituting fetishism produces unfortunate results.


IME the problem with inheritance in C++ is that it is so easy programmers reach for it instinctively, even when another tool might be a better choice in the long term. I definitely use inheritance in my c++ code, but as I've grown (and as the language has grown) I've found myself using it less and less. Perhaps that's the way it's supposed to be.


It is a problem with teaching. If you are taught to imagine yourself as an OO programmer using an OO language to produce OO designs, you will shoehorn in OO mechanisms even where they fit poorly. C++ has other organizational tools for these cases, so that there is little natural temptation to abuse OO for them.

People coming to C++ from impoverished languages like Java often fail to recognize bad habits they have internalized.


I'm a little confused. Inheritance, types and all that stuff are just language concepts right? Simula may have been one of the first to implement inheritance as a means to an end, but how could they have "invented" inheritance? I'm sure the idea of inheritance was already there in language proofs.

[I might be really really wrong here]


Theory usually comes after practice. People create something that works, then theorists formalize it into a well-defined concept.


I hold the view that programming patterns were invented as solutions to technical problems. It was then later that some theoretical "usefulness" or "elegance" was attached to them.

One should know the technical benefits and drawbacks of programming patterns before applying them. Just because some language or environment or a majority of programmers promotes a pattern doesn't mean that it's OK to apply it. Applying a pattern without knowing the technical reason for it is just cargo cult programming.


> I hold the view that programming patterns were invented as solutions to technical problems. It was then later that some theoretical "usefulness" or "elegance" was attached to them.

IIRC, there was impetus from Christopher Alexander's architectural "pattern language", and the desire to do something similar in software, which was a bit of an unguided mess at the time.


Simula 67 is (surprise) from 1967, Smalltalk-76 from 1976. Both supported objects and inheritance (http://progopedia.com/language/simula-67/, http://worrydream.com/refs/Ingalls%20-%20The%20Smalltalk-76%...)

“A pattern language” is from 1977; its application to programming from 1987 (http://c2.com/doc/oopsla87.html)

So, in the context of “Simula may have been one of the first to implement inheritance as a means to an end, but how could they have "invented" inheritance?”, I think this discussion is diverging a bit.

I would guess the idea of inheritance was already used in some assembly programs. It would start with people using structures that shared initial fields, and passing them to functions that didn’t care about the other fields (functions handling intrusive linked lists, for example). Passing a pointer to a function that _does_ know the exact type of the fields would be the next step.

The next step would be to allow heterogenous collections and use logic to discern between the various variants (many lisps do that by going through an “is it an atom?, is it a number? etc. chain, and likely did that years before Simula 67 appeared)

Chances are somebody also embedded a function pointer or two inside such objects, but I guess that would have come fairly late, as a function pointer in each object would have been expensive, memory-wise in the ‘60s and ‘70s.

Anyway, I think ‘inheritance’ indeed was invented/discovered, but gradually, and without explicitly naming it.


> which was a bit of an unguided mess at the time

Oh, come on. There was https://en.wikipedia.org/wiki/Structured_program_theorem and https://en.wikipedia.org/wiki/Modular_programming. There was a discussion about functional programming, for example famous John Backus article - https://dl.acm.org/doi/pdf/10.1145/359576.359579.


I was really thinking lower-level, specifically OOP, though yes I should've said it. OOP was going to save the world but no one knew how to design properly with it. GoF patterns were a set of off-the-shelf tactical designs intended to get you started in the right direction.


The other day I got into a heated debate with a C# developer on the merits (or lack thereof) of adding an "I" prefix to interfaces.

Turns out in C# there's no syntactic difference between implementing an interface and inheritance, so it makes sense in C# to explicitly state that something is an interface but, arguably, only there.


The argument against “I” is that the user (client code) of an instance shouldn’t have to care whether the type of the reference is an interface or a class. Concerns of the implementor shouldn’t determine the naming visible to the client code.

Of course, it’s an established C# convention (and inspired by the naming convention for COM interfaces) so one better sticks with it, but I think the convention was a bad choice for the reason stated above.


Yeah, IMV it's a similar problem to the convention of prefixing DB objects with T (table) or V (view), but not as bad, because refactoring between class and interface should be rare.


100% agreed. My day job is in front-end development and we unfortunately had this convention in a few projects.

Problem is, sometimes there are changes in how types are composed, and what used to be an interface can end up as a type alias and vice versa.


Many language features followed that route, yes. Languages themselves are typically first introduced, and then (maybe) formalized. Meanwhile, some other "patterns" or features, like higher-order functions or recursion, were originally introduced in the context of theoretical CS (e.g. they can be described in the lambda calculus), and it took some effort to implement them in practice.


> It was then later that some theoretical "usefulness" or "elegance" was attached

This Chicken v Egg pondering is interesting. I mean, surely the term "inheritance" was used in this instance because it already resembled something that had been established in theory?


Just like some artists create art? First they do then they explain.


idea > practice > formalization or idea > formalization > practice


I share your confusion to a degree. To me it always seemed that the concept emerged from the work on Semantic Networks [0] by Collins and Quillian.

What's interesting, looking at this now, is that these two different aspects of what we nowadays consider a fairly well-defined computer science concept both seem to have emerged around the same time.

Could it be C & Q were inspired by the Simula guys? Or, vice versa, maybe Simula adopted the term because the implementation "resembled" the semantic definition. Or perhaps they independently emerged from concepts that were popular at the time!

[0] https://en.wikipedia.org/wiki/Semantic_network


I would put OOP's solid theoretical foundations at the Liskov Substitution Principle invented in ~1988. This seems relatively late compared to OO's popularity. It was less than ~10 years between "structured programming" and the structured programming theorem (and lots of useful formal steps on the way), vs. 20+ for OOP and LSP (and at least at today's distance, LSP along with Meyer's contemporary substitution principles seems more like a sea change).


Off topic, but one of my CS professors was part of the original Simula group. He had long since moved on to more formalized approaches, i.e., denotational semantics. On more than one occasion he was heard to say "the only good object is a dead object."


Subtyping with inheritance is kinda complicated from a theoretical perspective, so the trade-off in Java-like languages seems to usually be to neuter the type system by restricting type inference and relying on nominal types specified by the programmer.

After spending way too much time in Java, and in OOP-style PHP and Python, I just struggle to see the advantage of inheritance. It usually takes a long time to look through the object hierarchy to figure out where the behavior is really coming from... and the loss of type inference outside of localized expression statements is devastating in comparison.


This comment may get too meta, but technically everything we do, ever, is a performance hack.


You're not wrong. Aren't we all just manifestations of the Principle of Least Action?


tl;dr:

> Simula created inheritance instead of using composition because it allowed their garbage collector to be simpler.

In C++ (and in other OO languages, I assume), inheritance is implemented as composition, and when the same class is inherited along two paths (the diamond), its members are duplicated unless the virtual keyword is used. In other words, with B : A and C : A, for D : B, C we get sizeof(D) = A + A + B's stuff + C's stuff + D's stuff, whereas with B : virtual A and C : virtual A we get sizeof(D) = A + B's stuff + C's stuff + D's stuff (plus some bookkeeping). This is also why declaring Parent parent = Child() on the stack "works": it just truncates the content of the child that was appended after Parent's members.
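A sketch of the duplicated vs. shared base in the diamond case (my own illustrative types):

    #include <cassert>

    struct A { long x; };

    // Non-virtual: D ends up with two distinct A subobjects.
    struct B : A {};
    struct C : A {};
    struct D : B, C {};

    // Virtual: VB and VC share a single A inside VD (at the cost of bookkeeping).
    struct VB : virtual A {};
    struct VC : virtual A {};
    struct VD : VB, VC {};

    int main() {
        D d;
        // d.x alone would be ambiguous; each path sees its own copy:
        assert(&static_cast<B&>(d).x != &static_cast<C&>(d).x);

        VD vd;
        assert(&static_cast<VB&>(vd).x == &static_cast<VC&>(vd).x);  // one shared A
    }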

Personally I like to use private inheritance as a composition shortcut, since for all intents and purposes it achieves the same goal with less boilerplate. Outside of that, I use inheritance only when it strictly follows the substitution principle, with the members of the base class being used just as much as the child's. Which happens very rarely, but it does happen. It's like an Allen key: you probably won't need it, except when you must disassemble your IKEA furniture when moving out.
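A sketch of the private-inheritance-as-composition idea (the std::string base is just for illustration): the derived class reuses the base's storage and members, but clients see none of the base's interface.

    #include <cassert>
    #include <string>

    // "Is-implemented-in-terms-of": CharStack reuses std::string internally,
    // but the string API is not part of CharStack's public interface.
    class CharStack : private std::string {
    public:
        void push(char c) { push_back(c); }
        char pop() { char c = back(); pop_back(); return c; }
        using std::string::empty;   // selectively re-expose one member
    };

    int main() {
        CharStack s;
        s.push('a');
        s.push('b');
        assert(s.pop() == 'b');
        assert(!s.empty());
        assert(s.pop() == 'a');
        assert(s.empty());
    }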


Doesn’t Parent parent = Child() work because Parent’s constructor is run by implicitly casting ref-to-Child to ref-to-Parent?


I don't think that's enough, because Child() still results in an object that is sizeof(Child). In order to store that object in a variable that is sizeof(Parent) it has to be rejigged somehow.

I think what TeeMassive is saying is that this rejigging is a simple truncation because the memory layout of (non virtual) inheritance in C++ is a concatenation of Parent and Child objects. I'm not sure whether this is implementation specific though.


That’s certainly enough, if you can call a constructor, you don’t have to think about how it works and that the classes are related.

The part about “rejigging” and simple truncation happens before the constructor call when you cast ref-to-Child to ref-to-Parent.


There is no casting of references at all in Parent parent = Child().


Sorry for being afk and replying once in a while :/ Parent probably has a copy constructor Parent(const Parent& p)

Parent p = Child(); calls it by implicitly casting Child& to const Parent&. That’s what happens.

I see that you must have meant this casting step, so indeed the constructor has no control over the rejigging.


I see what you mean, but let's go back to the original issue and why I think this implicit cast is insufficient as a definition of what happens here.

Say you have the obvious minimal implementations of

  struct Point ...
  struct Point3D : public Point ...
And then you do this:

  Point3D p3(1.1, 2.2, 3.3);
  Point p = p3;
On a 64 bit machine this results in sizeof(p) == 16 and sizeof(p3) == 24. So, at some point someone has to decide how to transform p3's 24 byte representation into p's 16 byte representation.

In C++, this happens simply by truncating p3, i.e. copying the leading 16 bytes of p3 into p and throwing away the trailing 8 bytes (I'm not sure whether or not this is part of the standard).

This is not the only imaginable way in which this could possibly happen though, which is why I'm saying that the whole process is not sufficiently defined by stating that a reference to child is cast to a reference to parent.
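A runnable sketch of the truncation (slicing via the base's copy constructor is in fact well-defined in standard C++, though the exact sizes below assume a typical 64-bit ABI; the aggregate initialization of the derived type needs C++17):

    #include <cassert>

    struct Point {
        double x, y;
    };
    struct Point3D : Point {
        double z;
    };

    int main() {
        Point3D p3{{1.1, 2.2}, 3.3};
        // Point's implicit copy constructor runs on the Point subobject of p3;
        // the z member is simply never copied (the "truncation").
        Point p = p3;
        assert(p.x == 1.1 && p.y == 2.2);
        assert(sizeof p3 == sizeof p + sizeof(double));  // 24 vs 16 on typical 64-bit
    }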


Reference counting and a last resort GC? If you had asked me which language that describes, I would have said python. I had no idea it owed that design to simula.


Seems unlikely to be true that inheritance was "invented" in Simula since it shows up all over the place outside of programming languages, for example in network protocol design.


I think discussions of OOP and inheritance often miss the influence of Knowledge Representation on OOP: I’m not entirely clear of the history myself, but I’ve gotten the impression that knowledge representation research (things like frame systems) was a semi-independent influence on the design of OO systems



How is it that c2 has remained such high quality?


It turned write-only before the Eternal September of wikispam set in


I am trying to start a wiki project and I have no idea what sort of world of stupidity I’m about to step into, do I?

I had thought that building the history system on top of a theory-of-patches system would be a nice stretch goal. But it seems to me that having an infinite undo function drastically changes the account creation situation. I only have to be passingly sure you’re not a bot, and I can decide after the fact if you’re approximating a decent human being or not.


I'm not really sure, I think a fork and merge-request system might be useful? Or requiring registration for all editors. The big issue is figuring out how to reduce the moderation burden to a reasonable amount.


You should consider event sourcing for all given information. Then, you can undo/redo as much as you wish.


Write-only is a great place for spam... no need to read it!


Oops, I’m going to leave it because it’s funny.



It didn't? There are a _lot_ of "no u" arguments on C2


> I’ve gotten the impression that knowledge representation research (things like frame systems) was a semi-independent influence on the design of OO systems.

Definitely! See [0]. Personally, I came across Frame-based knowledge representations in the mid-1980s. For reasons that were frankly insane, I was trying to code a frame-based knowledge representation in distinctly non-OO K&R C as part of a collaborative university research project. I moved into industry in 1988 and used OO for the first time (the Common Lisp Object System) and finally started with C++ in around 1993. Obviously this was my personal career trajectory but I encountered knowledge representation research before mainstream OO languages.

[0] https://en.wikipedia.org/wiki/Frame_(artificial_intelligence...


I think you might be looking for this https://en.wikipedia.org/wiki/Semantic_network

I believe it was Collins and Quillian who did the seminal work on the conceptual side, but what's interesting is it seems to emerge around the same time ... considering how inextricably bound the two are in computer science it's an interesting chicken/egg problem!


>discussions of OOP and inheritance often miss the influence of Knowledge Representation on OOP: I’m not entirely clear of the history myself,

Fyi... The "knowledge representation" aspect of OOP inheritance is emphasized by computer science professor Andrew P. Black. He wrote a long paper[1] about it and also has a video[2]. It's unfortunate that Black's alternative perspective (which he shared with Philip Wadler) is not discussed as often as "Object-Oriented Programming is Bad"[3] videos.

His paper is a long read (over an hour) that covers the intellectual history of OOP starting with Simula/Smalltalk. The following is an excerpt from https://www.sciencedirect.com/science/article/pii/S089054011...:

Most of us can grasp new ideas most easily if we are first introduced to one or two concrete instances, and are then shown the generalisation. To put this another way: people learn best from examples. So we might first solve a problem for 4, and then make the changes necessary for 4 to approach infinity. [...]

To illustrate the power of inheritance to make complex abstractions easier to understand, letʼs look at a case study from the functional programming literature. In Programming Erlang [19, Chapter 16], Armstrong introduces the OTP (Open Telecom Platform) generic server. [...]

To make sure that the message about the way that the OTP server works does sink in, Armstrong presents us with four little servers … each slightly different from the last. server1 runs some supplied code in a server, where it responds to remote requests; server2 makes each remote request an atomic transaction; server3 adds hot code swapping, and server4 provides both transactions and hot code swapping.

Each of these “four little servers” is self-contained: server4, for example, makes no reference to any of the preceding three servers.

Why does Armstrong describe the OTP server in this way, rather than just presenting server4, which is his destination? Because he views server4 as too complicated for the reader to understand in one go. Something as complex as server4 needs to be introduced step-by-step. However, his language, lacking inheritance (and higher-order functions), does not provide a way of capturing this stepwise development.

In an effort to understand the OTP server, I coded it up in Smalltalk. [...] First I translated server1 into Smalltalk; I called it BasicServer, and it had three methods and 21 lines of code. Then I needed to test my code, so I wrote a name server plug-in for BasicServer, set up unit tests, and made them pass. In the process, as Forsythe had predicted, I gained a much clearer understanding of how Armstrongʼs server1 worked. Thus equipped, I was able to implement TransactionServer by subclassing BasicServer, and HotSwapServer by subclassing TransactionServer, bringing me to something that was equivalent to Armstrongʼs server4 in two steps, each of which added just one new concern.

Once I was done, I discussed what I had learned with Phil Wadler. Wadler has been thinking deeply, and writing, about functional programming since the 1980s; amongst other influential articles he has authored “The Essence of Functional Programming” [21] and “Comprehending Monads” [22]. His first reaction was that the Erlang version was simpler because it could be described in straight-line code with no need for inheritance. I pointed out that I could refactor the Smalltalk version to remove the inheritance, simply by copying down all of the methods from the superclasses into HotSwapServer, but that doing so would be a bad idea. Why? Because the series of three classes, each building on its superclass, explained how HotSwapServer worked in much the same way that Armstrong explained it in Chapter 16 of Programming Erlang. This was an “ah-ha moment” for Phil.

To summarise: most people understand complex ideas incrementally, by starting with a simple concrete example, and then taking a series of generalisation steps. A program that uses inheritance can explain complex behaviour incrementally, by starting with a simple class or object, and then generalising it in a series of inheritance steps. The power of inheritance is that it enables us to organise our programs incrementally, that is, in a fashion that corresponds to the way that most people think.

[1] https://www.sciencedirect.com/science/article/pii/S089054011...

[2] https://www.youtube.com/watch?v=Rmg_trKnanU

[3] https://news.ycombinator.com/item?id=19407599


> discussions of OOP and inheritance often miss the influence of Knowledge Representation on OOP

SICP doesn't. :) Which is why it's such a great book, of course.


Inheritance in the OOP sense can be simply implemented in most languages without OOP. In Javascript:

  var o1 = { a: 1, b: 2, c: 3 }
  var o2 = { x: 1, y: 2, z: 3 }
  o1 = Object.assign(o1, o2)
  o1.z // 3
o1 has now inherited o2. I don't see much difference between this and classic untyped OOP.

Edit: copying functions over, not values, is what I’m getting at. Values used for simplicity of example


This work-around copies the values (in your example), or the references (if the props are functions or objects), which is just wasted cycles and memory bloat. It's harder for runtimes to optimize, because with that mixin `o1` changes its shape. It's harder for IDEs to infer the type of `o1`, which will hurt navigating and searching through your codebase with confidence.

Implementing OOP in JS with hacks like this is worse, in pretty much all ways, apart from the intellectual kick many devs get out of avoiding OOP at all costs, as opposed to just using built-in language features.


I agree copying values is not massively smart. (Used for simplicity of example)

But copying function pointers seems negligible. I can’t see this being avoided even in traditional oops.


The whole point of "traditional OOPS" is to avoid per-instance boilerplate code.


While it's true that it's easy to implement inheritance in dynamic languages, it's not quite this easy. The most important feature of inheritance is the function override support / virtual dispatch, so that o1.foo() and o2.foo() can do different things, but o2.foo() can also access o1.foo() (by using something like super.foo() in Java or BaseClass::foo() in C++). Ideally this would also be optimized so that each object doesn't have to carry a function pointer for each member function (this is why virtual function tables are generally used).


Override support, no?

  function f() {
    function g() {
      return 1
    }
    return { g }
  }

  function f1(f) {
    function g() {
      return f.g() + 1
    }
    return { g } 
  }

  f1(f())


Sure, but that's a whole bunch more boilerplate than just .assign(), which was my point (it's easy, but not quite as easy as just calling assign). Especially since f needs to be written a certain way to be overridable, while in the previous example o1 was just an ordinary object.

Edit: Not to mention that storing function pointers with each object is a huge waste of memory, versus storing one class pointer in each object, and having that class object store function pointers and parent class pointers. That way, you guarantee that each function pointer is stored exactly once, and each object has a single sizeof(pointer) of overhead (and even this can be improved with more complex implementations).


Seems to me to be very different from the OO concept of inheritance: You're copying values between variables, instances of a type. The OO idea is about the type definition; it affects all instances of a type without any per-instance boilerplate code at all.


Seems like a cool era to be working on computers. For lack of a better phrase, the world was your oyster.


Yes, I agree - you had much fewer pre-existing ideas to guide you, and really good new ideas were coming out all the time and you could pick between them - you could do all kinds of innovative things and they would really be novel.

Although today we have a much better grasp on formal semantics, which makes it easier to make up new ideas. But those new ideas aren't likely to ever displace the old ideas popularly, and it's hard to come up with them in the first place since old ideas live in your head...


The GoF should have added a part 2, "Prefer thinking over mindlessly repeating slogans without context or nuance"


After reading all the comments of many confused and curious, here is when inheritance is bad and why so _in the absence of any performance considerations_.

First, inheritance from an interface/trait is totally okay. The problem is class inheritance, meaning implementation inheritance.

There are two cases:

1) you inherit from a class and only add methods but don't overwrite anything. This is the good case, you can do that

2) you inherit from a class and overwrite one or more of its methods. This is the bad case, no matter how you look at it. And here is why.

Look at the following classical simple Stack class:

    class Stack(...) {
       def push(element) returns nothing = ...

       def pushAll(elementlist) returns nothing = ...
       
       def pop() returns element = ...
    }
Now imagine you want to keep a count of the number of elements, so you decide to extend the class:

    class CountingStack(...) extends Stack {
       override def push(element) returns nothing =
           count += 1; super.push(element)

       override def pushAll(elementlist) returns nothing =
           count += elementlist.size; super.pushAll(elementlist)
       
       override def pop() returns element =
           count -= 1; super.pop()
    }
Looks good no? Can you spot the problem?

If the Stack class implements pushAll as iterating over the elementlist and calling push for each element, then the CountingStack will count twice!

We could try to just not overwrite pushAll in CountingStack - but what if the Stack class is later changed to implement pushAll without calling push? Then CountingStack would now miss counts.

This problem is impossible to fix and it is a general problem - when overwriting a method, you can never be sure that semantics could break when the base class is changed.

So what is the solution here? Simple: use composition over inheritance.

    class CountingStack(underlyingStack) {
       def push(element) returns nothing =
           count += 1; underlyingStack.push(element)
     
       def pushAll(elementlist) returns nothing =
           count += elementlist.size; underlyingStack.pushAll(elementlist)
       
       def pop() returns element =
           count -= 1; underlyingStack.pop()
    }
Problem solved - no matter how Stack is implemented, our CountingStack will always be correct. The drawback (if you want to call it one) is that we need some common interface/trait that both Stack and CountingStack will implement.

I hope that sheds some additional light on why (implementation) inheritance is often considered a bad practice.


> This problem is impossible to fix and it is a general problem - when overwriting a method, you can never be sure that semantics could break when the base class is changed.

I'm not sure what's so special about overwriting methods here. There are many cases where semantics can break as a dependency changes. One example would be whether a callback is executed on the same thread (or event-loop tick) or in the background. A function might assume that the callback should be pure and thus in a later version switch where it's being executed. If your code depended on it being executed in a certain way, then it'll now be broken. The problem here is that there were certain constraints of the function which you weren't aware of (possibly because it wasn't documented).

A base class will have similar constraints that you, as a subclasser, must be aware of.

Your stack example is also a good example of where inheritance is actually useful:

    class CountingStack(...) extends Stack {
       override def push(element) returns nothing =
           count += 1; super.push(element)
       
       override def pop() returns element =
           count -= 1; super.pop(element)
    }
Assuming that the Stack class specifies that all modifications will go through the push and pop methods, this ensures that the count is maintained regardless of how many other utility methods exist on Stack (e.g. clear()). You don't have to extend the CountingStack with extra methods as the Stack class gains more methods.


> I'm not sure what's so special about overwriting methods here. There are many cases where semantics can break as a dependency changes.

The problem with implementation inheritance specifically is that there is no single responsibility for establishing this semantics. Every method call on overridable methods incurs a dispatch step involving the whole of an arbitrarily extensible inheritance hierarchy, and every method can be called by an unknown extent of user code. You just can't give a proper semantics to it without looking at the whole-program level, which is antithetical to modularity.


> There are many cases where semantics can break as a dependency changes. One example would be whether a callback is executed on the same thread (or event-loop tick) or in the background. A function might assume that the callback should be pure and thus in a later version switch where it's being executed.

Rust's Send and Sync traits solve this particular problem by putting the thread-send safety in the type signature. If the library changes to use a background thread and you have passed it a non-threadsafe closure then you'll get a compile error.


Very good point actually.

If we cannot assume synchronous execution, then semantics can break exactly as you said. That's a rather orthogonal problem from the inheritance one, and my suggested solution is to make use of pure functional programming (hence purity can be assumed unless the method signature indicates that an effect might happen). Here is an example of how a potentially async Stack might look:

    class Stack(...) {
       def push(element) returns IO[nothing] = ...
       
       def pop() returns IO[element] = ...
    }
The IO would describe an action that can be run at a later point, which means that when the method has been called and returned, nothing has happened until you call `run(io)`.

That way you will be immediately aware of the behavior and adapt your CountingStack as follows:

    class CountingStack(underlyingStack) extends Stack {
       override def push(element) returns IO[nothing] =
           underlyingStack.push(element).andThen(nothing => count += 1; return nothing)
       
       override def pop() returns IO[element] =
           underlyingStack.pop().andThen(element => count -= 1; return element)
    }
That ensures that order of events is being kept and also that failures (e.g. a concurrent thread dies in the middle) are handled.

> Assuming that the Stack class specifies that all modifications will go through the push and pop methods

This is a severe restriction on how the Stack can be implemented then and it bears the risk of someone violating this specification by accident (think about the famous equals/hashCode specification in Java).


You can push things into the type system only so far before you drop off a cost/benefit ratio cliff, unfortunately. Fundamentally, even in FP languages that expose side effects in the type system, you can still easily make APIs with undocumented semantics and in which users will break as the underlying component evolves. In fact Haskell is quite notorious for an ecosystem that seems to believe Haskell's type system obviates the need for good documentation, an illusion the Java world fortunately never labored under.

I think in recent years there's been an uptick in clever sounding attacks on common PL features like inheritance and exceptions, many of which look suspiciously like motivated reasoning. At the very least the arguments are extremely weak. This article ends by saying:

"Personally, for code reuse and extensibility, I prefer composition and modules."

But these are orthogonal. Languages like Kotlin have built-in support for inheritance, modules and composition. It's not an either/or approach, and there are lots of high quality, highly successful codebases that use inheritance extensively which would be pretty unimaginable without it. I use libraries that use inheritance every day and it's very rarely a problem: only in cases where someone made a bad API with it, and you can get bad APIs that rely on composition or badly modularised APIs too. I don't feel like one problem is more common than another.


There are lots of bad uses of implementation inheritance, but it's not all bad. One pattern I use a lot is the "just these 5 missing methods". The base class might be complicated and large, with a lot of logic driving the process, but it needs to have 5 specific functions that it calls. One way is to have that big base class have almost all the logic and then 5 abstract methods and expect a subclass to implement those 5 methods and not override anything else. While the base class could in theory accept 5 first-class functions, or an object with 5 methods, in some cases those 5 methods are intricately tied to the operation of the whole thing and it doesn't make sense to separate them. That's a perfectly safe and clean use case of implementation inheritance.


I think for the usecase you mention, there is a different solution that I personally prefer.

Optimally, if your language supports it, just define an interface with these 5 methods and then define extension methods that work on any type that implements the interface. The reason why this works is that all the other functions are usually helper functions / convenience functions and they only need the other 5 functions to work, so no need to access any inner/private properties.

That is by far the most lightweight solution, and it works even for 3rd-party libraries where you can't control the types.

If your language does not have extension methods, then you can still use composition just like in the example I gave and delegate to the base class. That is a bit more code to write (because you have to delegate to the non-5 methods if your language doesn't automate that for you - some do) but I find it cleaner than hoping that no one overwrites any non-5 method.

But yeah, essentially what you are saying here is to ask people to follow the rule I gave by convention - depending on that context that might work as well.


Yeah, I generally like composition better, but sometimes the coupling between the base class and those subclasses that provide the 5 methods is just too strong to ignore, and if you break out the methods into another interface, then you are struggling to find a place to put the shared logic (which you proposed to do with extension methods).

Your example with the counting stack made use of "super", which is always a red flag to me. "super" is such a bad smell to me, that I literally never use it. In fact, in Virgil, my language project (http://github.com/titzer/virgil), there is no super construct at all, nor static methods or interfaces for that matter. 15 years of writing it, 200k lines later, and I can say that personally, having delegates, first-class functions, partial application, and tuples go way further than more complex trait/interface/extension method madness.


Not sure if I understand you correctly here. With "the coupling ... is just too strong to ignore" you mean that if someone has one of the subclasses at hand, they should automatically have all the extension methods at hand not having to look for them somewhere?

> Your example with the counting stack made use of "super", which is always a red flag to me. "super" is such a bad smell to me, that I literally never use it.

I think that's a sign that you have developed a good intuition that it can lead to problems! :)


Re the shared logic, Java and C# have the concept of "interface default methods" which is what I'd use for your case, if I understand it right.


This pattern is also called the template method pattern, I believe:

https://refactoring.guru/design-patterns/template-method


Yep, that's it. Thanks for the reference!


This is only a problem if the methods on Stack are virtual. If they are non-virtual, inheritance works like your composition example: inheriting from Base basically creates a Base member variable, except missing methods can be automatically forwarded to the Base member. This saves you from having to explicitly wrap all the methods of Stack in CountingStack.

Virtual should be used sparingly.


Agreed. Of the OOP languages I use, only C++ makes methods non-virtual by default, and I think this is one of the few defaults in C++ that is correct. In Java terms, every method should be `final` except those that are explicitly intended as customization points.


> This problem is impossible to fix

People write working code using inheritance, it's not impossible, just requires extra care. One example:

    class CountingStack() extends Stack {
       override def push(element) returns nothing =
           count += 1; super.push(element)
       
       override def pop() returns element =
           count -= 1; super.pop()
    }
Another way is to use non-virtual methods in the base class where you don't want people to modify your code. In C++ it's even the default. So you should consciously think when you make a method virtual.

    class Stack() {
       def nonvirtual private push_internal(element) returns nothing = ...
       def push(element) returns nothing = push_internal(element)
       def pushAll(elementlist) returns nothing = ... // use push_internal if needed
       def pop() returns element = ...
    }
Anyway - I agree inheritance is bad, and I dislike OOP in general, just wanted to nitpick.

IMHO the main problem is encapsulation. People expect the code to work one way and don't check their assumptions because of the whole tower of abstractions and hiding what actually happens behind layers. And then instead of writing straightforward code to do what has to be done - you have to tiptoe around all possible implementations :)


Your first example will still break if the base class changes behavior, so I think my point holds.

I agree with the non-virtual methods though - they are pretty much what I would call conceptual composition.


Thank you, this is exactly it. The problem with inheritance is specifically overriding, and not anything else. Composition as it is being used currently has its own drawbacks (need to manually delegate every method, no clear model for instance construction, the need to duplicate fields across classes, etc.). I wish overriding was just removed from inheritance or that composition had better language support.


Why create virtual methods in a class if you have no intention of it being a parent class? And if you intend for it to be a parent class you can design it so that these things aren't an issue.


Could you give a code example of how you would design things so that they aren't an issue?


The same way you design interfaces. If you can write good interfaces then you can write good base classes, its all about the contract between methods. Inheritance gets a bad rap because most programmers used it with classes that were never written to have interface contracts and therefore they create unintended coupling. Intended coupling is a good thing though, so as long as you have the same discipline when creating base classes as you have when creating interfaces then inheritance isn't an issue.


The difference is that in one case, it is impossible to cause this error, and in the other case (with contracts between methods) we now have to rely on people understanding what they are doing. Looking at Java's hashCode & equals, I think history has shown that it is maybe not a good idea to rely on that too much.


Excellent example. Let me point out that Go's approach to inheritance ("embedding") doesn't suffer from the problem you illustrated. A method on the embedded type can't call a method on the type into which it is embedded. Go's approach to inheritance is a big productivity win, and it's easy to understand.


I am being downvoted and I would really like to hear from people why that is - e.g. if I said something that is not correct, please point it out so that I can learn from it.


In your view, would this critique also apply to a trait-based version in Rust which supplies a default implementation for `push_all`?

https://play.rust-lang.org/?version=stable&mode=debug&editio...

    trait Stack {
        fn push(&mut self, element: i32);
        fn push_all(&mut self, all: Vec<i32>) {
            for element in all {
                self.push(element);
            }
        }
        fn pop(&mut self) -> Option<i32>;
    }
As far as I know it's impossible to invoke `super` to get at the default version of `push_all` from an `impl` which overrides `push_all`.

This is still "implementation inheritance", because the implementing type inherits the default implementation of "push_all". But it seems less brittle than classical OOP implementation inheritance.

• No "super" invocations.

• Shallow inheritance hierarchies.

• No direct member variable access from the trait (interface) code.


Your last point is the relevant one here in this context. If the default "push_all" has no access to member variables (how could it - it's defined on the trait) and only calls public methods (such as push) then it is no more powerful than a free function or extension function.

Hence it is okay.

However, if you were to have another private function "private_push_all" and "push_all" could decide whether to call "push" or "private_push_all" then we would have the same situation again.


An example I often link to is http://okmij.org/ftp/Computation/Subtyping which uses a similarly simple example (bags and sets) to show that supposedly "internal" changes can still break subclasses.


Breaking the implementation of CountingStack by changing the implementation of Stack is the kind of stuff that unit tests should pick up. It's easy to write a test case for "CountingStack.pushAll() counts once per given element"


Tests are not bulletproof and subtle errors may go uncaught. If you can completely eliminate a source of errors, why not?


Seems like just a buggy implementation to me. You can make buggy implementations of any concept; if that invalidated the concept itself, there would be no valid concepts at all.


Pretty off topic comment, but I read the title as "inheriting money is a way avoid being a high performer", which happens to be true, although completely irrelevant to the actual article. ;)


I read it similarly as "inheriting money".

And it's true that inheritance IS a performance hack--not for the children, but for the parents. Inheritance incentivizes parents to accumulate wealth not just for one lifetime but for multiple lifetimes. (Granted it may paradoxically have the opposite effect on the children as you mention.)

Here's Milton Friedman on the matter: https://www.youtube.com/watch?v=km9OCw3f5w4


If you're "not able to spend it in a lifetime..."

Then you're selfish and no good for the money anyway.

One can only buy so many jets and sports teams, sure.

But, if society endows you with that sort of reward, it becomes your responsibility to move society forward with that blessing.

Anything less is exactly why we live in Trump world and not some sci-fi future we all imagined as kids.


Life is more than just self indulgences, I don't actually see a point in working or contributing to society at all if I can't build a foundation that will benefit my children. I'm certainly not working so I can drive a fancy car or whatever.

Honestly not sure I'd even bother getting out of bed most days if the future of my children was just in the hands of the state, what would be the point.


Sounds like you may be depressed. My mom would tell us of how, similarly, us kids would be the only thing that helped her get through periods of depression.

Life itself, even for those of us without children, is an incredible experience.


I just work myself to my limits but I do it for a reason, if that reason isn't there why not just be comfy instead?


What's wrong with someone wanting to provide for his offspring?


Ted Kaczynski was talking about it in his manifesto.

https://unabombermanifesto.com/#THE%20POWER%20PROCESS

It depends on how much you provide for your offspring according to this deranged terrorist. And I've read stories from billionaire kids not living "the good life" because they never have to fight/work for anything.


That's just bad parenting.

I think rich people tend to be people obsessed with work, giving access to money is easier than teaching their kids to earn stuff - it is a different problem.


Exactly. There are many rich people who have properly raised their children.


An atheistic/secular lifestyle tends to result in such outcomes.


Inheritance is a natural phenomenon.

If all my money were stolen when I die (let's say, 100% inheritance tax), I wouldn't have much incentive to grow my business more once my business makes more money than I can possibly spend in a lifetime.

Sure, some people may still do it because they love working or because they love providing for their society.

The main reason I want to accumulate money is so I can offer them a job instead of them going through the corporate bullcrap I had to do when needing some starting capital.

Bigger businesses are capable of bigger bets (think Elon Musk) which benefit everyone, removing an economic incentive doesn't sound wise for innovation and growth.


> Inheritance is a natural phenomenon.

Biological inheritance is; property inheritance, however, is not.

> If all my money were stolen when I die (let's say, 100% inheritance tax), I wouldn't have much incentive to grow my business more once my business makes more money than I can possibly spend in a lifetime.

Even if one agrees that that is true, that’s not an argument that inheritance is natural, that’s an argument that it is a feature of society chosen to incentivise people to accumulate wealth surplus to their personal needs.


Property inheritance seems like it is also a natural phenomenon. For example, animals can "inherit" territory.

https://ideas.ted.com/humans-arent-the-only-ones-that-help-o...


>wouldn't have much incentive to grow my business more once my business makes more money than I can possibly spend in a lifetime.

>going through the corporate bullcrap I had to do when needing some starting capital.

The corporate bullcrap you had to go through is a direct consequence of someone growing their business beyond what they could spend in a lifetime.


Your business doesn't operate in a vacuum. It is licensed and supported by society. The same society that benefits from inheritance redistribution.


By that logic, why let the business owner make any money at all? It's really society's business after all.


The business owner won't exist if that person doesn't have the potential to make some money. There is a surprisingly large amount of possibilities between 0% and 100% inheritances tax rate.


No need to make a dichotomy out of a spectrum. Business owners pay some tax and receive some profit - we need only adjust the relative percentages.


Correct- it should go to the workers.


It's also an extension of basic property rights. An owner of property may will it to whomever he pleases in the event of his passing. It's no different from simply gifting property while alive.


Most discussions of inheritance tax (perhaps particularly discussions where US Americans are involved?) seem to start from this perspective, "the deceased person is being taxed". I find that a little weird. All these weird objections go away like a puff of smoke if you see inheritance tax the other way around: It's a tax on the recipient of the inheritance. Why shouldn't inheritance (or gift!) income be taxed, when any other income can be?


Especially when you consider inheriting money is the ultimate in unearned income.


You imply an interesting question: Do the dead have all the same rights as the living? I'm leaning towards no. So I think it's a little different from gifting something while alive, for starters, a gift while alive is typically at personal cost unless you no longer have any use for the item. Once dead you have no use for any items, as you're definitionally out of the "able to use things" population.


Personally, most gifts I've given or received have been items which I didn't need and not at any great cost. I still gave them to a person of my choosing. So when dead and in need of nothing, one's wishes to give to a particular person, charity, or even hypothetically anyone should be respected.

If John owns an item, he can say what happens to it. So if he says to a notarised person 'when I die X becomes property of my son James' then that's simply respecting his wishes over his property.

Is it much different if John chose to give something away moments from death, or if he asks someone else to arrange the giving on his behalf after death?

We respect the wishes of the dead because it's part of the continuance of life and civilisation (which is itself fundamentally the storing and passing down of value, i.e. inheritance). The same reason we'd honour someone who died for freedom, or honour the funeral wishes of a family member. Their wishes are worth much to everyone who knew them. The same should apply to personal property.

I think I'm stabbing in the right direction. A better answer might come from a better read or more philosophical mind than mine.


Also, there are probably numerous ways to avoid inheritance tax, so a 100% inheritance tax wouldn't make much sense anyway (even if I agree ideologically with such a tax).


I thought the same and tbh was quite excited to read about that. Less interested in OOP.


I read the title as "new hypothesis about evolutionary theory and gene inheritance".

This is why I'm not a biologist (or OOP fan). :)


Omg I can't believe I didn't see it this way first, this is quite some title!


Exactly how I read the headline on the front page too. In fact, I clicked through to this page mainly out of curiosity as to which meaning was the intended.


Heh. Have we all been trained to expect increasingly medium-style lifestyle content on HN ? This title was a nice Rorschach test.


> "inheriting money is a way avoid being a high performer", which happens to be true

By what metric? Or do you mean "a way to avoid having to be a high performer in order to achieve good results"? Isn't the whole beef with wealth inequality that given otherwise equal luck and faculties, a person who starts with more capital will gain more capital?


The two aren’t mutually exclusive. The children of the very wealthy often (usually?) share their parents’ economic advantage, thus perpetuating the inequality that originated due to economic factors. But that doesn’t mean they’re driven or that they’ll develop the same qualities that drove their parents to be successful.


> But that doesn’t mean they’re driven or that they’ll develop the same qualities that drove their parents to be successful.

That's the point: a leisurely trust-fund baby will likely be more "successful" than a hardworking and "driven" poor person.


Well yeah, that's why I asked.


How to improve your TC with this one trick (other relatives hate this)


Huh, I was thinking along the lines of inheriting genes.


I first thought it was about genes and evolution...


I was thinking the same thing.


Monetary inheritance is likely bad for the species from an evolutionary standpoint.

For example, if a "manipulative anti-social" gene was increased in prevalence because it maximized net worth.

The problem here is a common issue in game theory, optimization without regard for process. Your inheritance doesn't care how it was acquired.

Sure this could be offset for positive genes, but that's making one big assumption.

The assumption that capitalism is the only reward function for our species. That is, what other forms of inheritance reproductively compete with money?


In my experience, 90% of the time people use inheritance but they really only cared about composition, and their language simply does not have any convenient facility to compose types and re-export their methods.

With a good type system that includes traits, you almost never need virtual dispatching, as any Rust developer may tell.


Recently experimented a bit with Rust and I found the reverse to be true. You cannot compose types in Rust. You compose behaviors not types. Very important distinction as I found out the hard way.

Take the following example I have found on the net: https://play.rust-lang.org/?version=stable&mode=debug&editio...

In that example, both the bicycle and the car have the property `speed`. Imagine you have multiple types now that need to have the `speed` property. You would need to copy-paste the same code for each new type in order for you to be type safe.

Apparently it is called *Monomorphization*: https://cglab.ca/~abeinges/blah/rust-reuse-and-recycle/#mono...

From the article:

* But if I want a single queue to be able to handle different tasks, then it's not clear how that could be done with monomorphization alone. That's why it's called "mono"morphization. It's all about taking abstract implementations and creating instances that do one thing. *

Which was exactly what I was experimenting with: A single queue worker that can handle different cases. Honestly, it made Rust almost not worth it for me. Sadly, I was too deep to turn back so I wound up doing the whole thing in Rust. I have tons of copy-paste code. It is ugly and it is bothering me.


You actually can have a single queue that handles the different cases. You want dynamic dispatch for that, the syntax looks like this (quick addition to the playground sample): https://play.rust-lang.org/?version=stable&mode=debug&editio...


... which does put both cars and bicycles in the same queue, but doesn't eliminate the copy-paste for each new type completely; 'car' and 'bicycle' still wind up with separate 'get_speed' impls, which are textually identical aside from the type names.


Sure, traits talk about functions, not properties/ strict fields, so you need to provide trivial getters if you want to abstract over properties.

True, but not that interesting? Sure we could have some Ruby :get_attrs magic or whatever.


Usually the way people solve this in Rust is with macros.
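For instance, here's a minimal sketch of that macro approach (the `Speed` trait and `impl_speed!` names are illustrative, not from the linked playground): a declarative macro stamps out the otherwise copy-pasted impl for each type with a `speed` field.

```rust
// Hypothetical sketch: a macro that generates the repetitive
// `Speed` impl for every type that carries a `speed` field.
trait Speed {
    fn get_speed(&self) -> f32;
}

macro_rules! impl_speed {
    ($($t:ty),*) => {
        $(
            impl Speed for $t {
                fn get_speed(&self) -> f32 {
                    self.speed
                }
            }
        )*
    };
}

struct Car { speed: f32 }
struct Bicycle { speed: f32 }

// One line replaces N textually identical impl blocks.
impl_speed!(Car, Bicycle);

fn main() {
    let car = Car { speed: 120.0 };
    let bike = Bicycle { speed: 25.0 };
    println!("{} {}", car.get_speed(), bike.get_speed());
}
```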


I've heard "monomorphisation" to refer to something a compiler does, but not something a programmer does. I think this is just "repetition"!

The need for something to solve the problems inheritance solves has been known in Rust for a long time. Mostly it's been motivated by the need to implement the HTML DOM, which is fundamentally an inheritance hierarchy, in Servo. There's a longstanding RFC about it:

https://github.com/rust-lang/rfcs/issues/349

Inheritance is one possible solution. There are others.


I do hope they adopt one solution or the other. Rust for me seems incomplete because of this.

Another issue I have with it is perfectly described in this article: https://theta.eu.org/2021/03/08/async-rust-2.html


For reference I have over a decade of JavaScript experience in industry and my async Rust rewrite of a large JS project was *more* concise than the heavily refactored and polished NodeJS version (a language I consider more concise than most). If you are having to copy and paste excessively in Rust that is an issue, but it is not necessarily intrinsic to the language.

For what it's worth, traits largely prevented copy and paste, and where traits fail there are macros. The classic inheritance example you link to is a tiny percentage of my code and an orders-of-magnitude smaller time sink when compared to the code maintenance problems I faced in other languages.


Monomorphization is not a user action of copy-pasting; it's something the compiler does with parametric code. In Rust, it means that when you write:

    fn id<T>(t: T) -> T { t }

    fn main() {
       id(String::from("foo"));
       id(1_usize);
    }

The Rust compiler will generate a separate copy of `id` for String and for usize, so there will be two versions of the function, each specialized to its type, in your binary. That process is called monomorphization.

> Which was exactly what I was experimenting with: A single queue worker that can handle different cases. Honestly, it made Rust almost not worth it for me. Sadly, I was too deep to turn back so I wound up doing the whole thing in Rust. I have tons of copy-paste code. It is ugly and it is bothering me.

You can easily use generics, there is no reason to copy paste code like this
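As a hedged sketch of what the "single queue, many cases" part can look like (all names here are illustrative, not the parent's actual code):

```rust
// Illustrative sketch: one queue handling different task types
// via trait objects rather than copy-pasted per-type workers.
trait Task {
    fn run(&self) -> String;
}

struct Email;
impl Task for Email {
    fn run(&self) -> String {
        "sent email".to_string()
    }
}

struct Report;
impl Task for Report {
    fn run(&self) -> String {
        "built report".to_string()
    }
}

fn main() {
    // Box<dyn Task> erases the concrete type, so one Vec can hold
    // both kinds of task; calls go through a vtable at runtime.
    let queue: Vec<Box<dyn Task>> = vec![Box::new(Email), Box::new(Report)];
    for task in &queue {
        println!("{}", task.run());
    }
}
```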


Would that copy-paste code ever amount to more than some getter methods, like the get_speed() example you provided? If the speed field had some special behaviour, couldn't you wrap it in its own Speed type, with its own methods, and use that from the enclosing types?
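Something like this sketch of the suggestion, assuming a hypothetical `Speed` newtype (names are mine, not from the thread's playground example):

```rust
// Composition sketch: the behaviour lives on a small `Speed`
// newtype, and each vehicle embeds it ("has-a") instead of
// duplicating the logic per type.
struct Speed(f32);

impl Speed {
    fn kmh(&self) -> f32 { self.0 }
    fn accelerate(&mut self, delta: f32) { self.0 += delta; }
}

struct Car { speed: Speed }
struct Bicycle { speed: Speed }

fn main() {
    let mut car = Car { speed: Speed(100.0) };
    car.speed.accelerate(20.0); // shared behaviour, no copy-paste
    println!("{}", car.speed.kmh());

    let bike = Bicycle { speed: Speed(25.0) };
    println!("{}", bike.speed.kmh());
}
```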


I think the solution you've proposed is probably how you'd do it - but isn't inheritance more elegant in this case (i.e. using a language which supports inheritance if this is important to you)


No, inheritance is not an elegant way to mix in shared data and behavior. It can seem like it is in a simple case where there is only one set of data and behavior you want to mix in, but it scales very poorly. Nothing is fundamentally a single kind of thing, and multiple inheritance is a mess. It is more elegant to mix the behavior in through delegation, because it scales to however many things you want to mix in. Carrying on the synthetic example in this thread, you can mix in Speed and Pedals into Bicycle but Speed and Cylinders into Car.


Maybe not on every situation, but I do agree that inheritance is more elegant in a lot of situations, in the sense that it gets some behaviour "out of the way".


Sorry, I'm not getting it -- could you please explain further?

> You cannot compose types in Rust. You compose behaviors not types.

But I thought in OOP behaviour is type? (Or, IOW, type includes behaviour.) To me, judging only from this, it feels like Rust isn't quite OO... Is it perhaps just not quite finished yet?

Another aspect: that whole "Rust has something called 'monomorphization', which leads to lots of copy-pasting" reinforces that impression. Is this the same problem that C++ tries to overcome with its "select which inherited implementation to use" operator, and other languages by allowing only single inheritance?


Beyond some small changes I would make to use new-types all over the place, I personally like to rely on Deref impls to mimic inheritance (although not everyone agrees this is a good idea to do too often): https://play.rust-lang.org/?version=stable&mode=debug&editio...

As you can see I also used a macro by example there to remove some of the duplicated code that you would otherwise have.
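In case the playground link rots, here is a minimal standalone sketch of the `Deref` pattern (my own illustrative names, not the linked code):

```rust
use std::ops::Deref;

// Contentious but common trick: method calls on `Car` that don't
// resolve on `Car` itself fall through to the embedded `Vehicle`
// via auto-deref, mimicking single inheritance.
struct Vehicle { speed: f32 }

impl Vehicle {
    fn get_speed(&self) -> f32 { self.speed }
}

struct Car { base: Vehicle }

impl Deref for Car {
    type Target = Vehicle;
    fn deref(&self) -> &Vehicle { &self.base }
}

fn main() {
    let car = Car { base: Vehicle { speed: 120.0 } };
    // `get_speed` is defined on Vehicle, but deref coercion lets
    // us call it directly on Car, much like an inherited method.
    println!("{}", car.get_speed());
}
```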


You can use a macro instead of copy and pasting.

You can also do this:

  struct Vehicle
  {
    speed: f32,
    kind: VehicleType // `type` is a reserved word in Rust
  }

  enum VehicleType
  {
    Car(...),
    Bicycle(...)
  }
Or this (although this is the least common):

  struct Vehicle
  {
    data: VehicleData,
    kind: Box<dyn SpecificVehicle> // `type` is reserved
  }

  struct VehicleData
  {
    speed: f32,
  }

  trait SpecificVehicle
  {
    fn quack(&self, data: &VehicleData);
  }

  impl SpecificVehicle for Car {...}
  impl SpecificVehicle for Bicycle {...}


That's also been my experience. I've also seen programmers use composition and delegation solely to avoid inheritance (in a language that makes it natural), and the result is often less maintainable than a design that just uses inheritance.


Traits are virtual dispatching, albeit the dispatch table is not bundled with the object. The meaningful distinction is between implementation inheritance vs. inheritance of interfaces, abstract classes or traits.


Trait objects (via `dyn`) are virtual dispatching, but traits themselves are basically agnostic to static vs. virtual dispatch. If you use a trait method through a statically known type, the method is invoked statically -- no runtime lookup.
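A small sketch of that contrast (illustrative names), with the same trait used both ways:

```rust
trait Greet {
    fn greet(&self) -> String;
}

struct English;
impl Greet for English {
    fn greet(&self) -> String { "hello".to_string() }
}

// Monomorphized: the compiler emits a copy per concrete T and
// resolves the call statically -- no runtime vtable lookup.
fn greet_static<T: Greet>(g: &T) -> String {
    g.greet()
}

// Trait object: the call goes through a vtable at runtime.
fn greet_dynamic(g: &dyn Greet) -> String {
    g.greet()
}

fn main() {
    let e = English;
    println!("{}", greet_static(&e));
    println!("{}", greet_dynamic(&e));
}
```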


Very true. I'm *checks cloc* 14,000 lines into a personal project and I have yet to feel any need to use `dyn` (virtual dispatching for non-Rust folks).


"I've designed this project in Go following Go naturally imposed design patterns and found that I did not need inheritance.", said the Go programmer.

"Well, mmm, duh?", thought programmers of other languages.

"I've designed this project in Rust, following Rust-imposed design principles, and found monomorphism sufficient", said the Rust programmer.

"Well, mmm, duh?", thought programmers of other languages.

"I've designed this project in Python, using Python-style design and I've found that I did not need strictly typed functions", said the Python programmer.

"Well, mmm, duh?", thought programmers of other languages.

"I've designed this project in language X, which impose Y and as such I've found that you actually don't need Z", said the X programmer.

"Well, we got the picture already", said the HN reader.


Absolutely, but the interesting bits are in the metrics. How many lines of code does it take in each language? How many distinct concepts are required for the solution? To what degree is the mastery of each concept required?


I spent way too long trying to figure out how giving your kids your lifetime's earnings would be a performance hack beyond, you know, giving them a head start, before realizing what the headline was talking about.


"Jane/John Doe, eat your carrots and finish your homework before 9pm, or who knows what might happen to your interest in the family trust" is a conversation that has probably happened somewhere in history.

On a more serious note, thinly veiled threats by patriarchs to their children working in the family business regarding their eventual share in the business is something that probably happens routinely around the world, especially in areas like Europe, East Asia, South Asia, and South East Asia, with many multi-generational privately held family businesses.


People care tons about their kids -> Working hard and creating value results in a larger inheritance for their kids -> Exploit biological desires to protect offspring to get yourself to be more productive?


"Working hard and creating value" and "biological desires" are multiple inheritance, which is outside the scope of this article.


Is this the diamond problem?


You just invented society!


It's a hack for the kids, not the parent. Since "performance" is measured (in large part) in money, getting a bunch for free when your parent croaks is a super hack.


I figured cause it was on HN that there was no way that was the article


Fine. But obviously it caught on for other reasons, like code reuse and an intuitive mental model.

The HN appetite for posts ragging on inheritance will never be sated.


"Please don't sneer, including at the rest of the community."

https://news.ycombinator.com/newsguidelines.html


I take your critique, Dan, and in the future I'll try to find different ways to make similar points.

I'm a bit deflated though, that now that this subthread has been demoted and dropped to the bottom, the usual "everybody knows inheritance sucks" comments are accumulating up at the top. I really enjoyed having a discussion for once that tried to suss out why inheritance, despite its flaws, was historically so semantically compelling.


We downweight generic subthreads when we see them stuck at the top of a page. I've downweighted that one now. Obviously they're a problem (a big problem - maybe the biggest - because the most generic tangent from any topic is also the one that people are most familiar with, and familiarity breeds both repetition and upvotes). But the solution is not to have a generic tangent which is also a flamewar! However, you already agreed on that point and I don't mean to pile on.


dang, amazed at your parsing abilities at HN comment scale. I checked your profile to make sure you are not a machine, and read that passage you quoted, which assures me you are a human being with exceptional taste. Beautiful.

For others who may be interested, poked around and found that it is a quote from The Street of Crocodiles and Other Stories, by Bruno Schulz.

https://www.goodreads.com/book/show/1576188.The_Street_of_Cr...


My roots are in OO but for the life of me I could never figure out why people found inheritance intuitive beyond some automatic delegation (like Go’s struct embedding). I certainly don’t miss the guesswork of trying to figure out which class’s virtual method is being executed out of the dozen in the hierarchy (especially when one virtual method is calling other virtual methods).


Because some people understand "is a" better than "has a".


That's just a restatement of the original problem. Inheritance just is "is a" and composition just is "has a", so if a person doesn't understand why someone would find inheritance more intuitive than composition, they wouldn't understand why a person finds "is a" more intuitive than "has a".


"Catching on" doesn't mean "is the right way to do things". Code reuse is in fact a prime example of a misuse of inheritance


Correctly using a language feature often means using it for more than one reason. Extending superclass code can be well advised in case it also makes sense semantically. Now, I have seen a "mixin" class that just served to provide one relatively independent method, and I turned it into a free function. Much better. In other cases, an extended class works on the whole state of the object. In that case it usually makes sense.


> Code reuse is in fact a prime example of a misuse of inheritance

Does that depend on how inheritance works in a particular language? For example, in Python it’s widely believed that inheritance is simply “a tool for code reuse” [0].

[0] https://youtu.be/EiOglTERPEo?t=190


It's the main reason why inheritance became so popular and so useful.

Specialization remains a very common design pattern that is incredibly useful and trivially and intuitively solved with inheritance. No other programming concept (HKT, ad hoc polymorphism, functional programming, etc...) comes close to its elegance.


> Specialization remains a very common design pattern that is incredibly useful and trivially and intuitively solved with inheritance.

Beautifully put. That matches up with my own experience learning classical OOP, even if I'm now comfortable with other models.


Code-reuse via "Implementation Inheritance" is completely unnecessary for specialisation or polymorphism.

When someone says that "inheritance is bad for code reuse" they're not talking about interfaces, or using inheritance for polymorphism. They're strictly talking about sharing code using implementation inheritance, which is the thing that has been widely criticised for more than 30 years now.

One can argue that even the "Template Method Pattern" doesn't fall into "implementation inheritance", since the implementation lives in the subclass.

If you read the posts, discussion is way more nuanced than "inheritance bad vs inheritance good".


Reusing implementation is the point.

Here is more specifically what I meant with my comment about specialization above. There is a class with four methods, three of which are exactly what you need but the fourth one, you need to modify.

Solving this with inheritance is trivial (extend and override).

Solving this with any other paradigm is... much harder and requires a lot more boilerplate.


In functional programming you'd just create a new function that takes a value of the same type as those other 3 methods.

But anyway, creating a new method without changing any of the methods of the super class I think it's generally ok. The problems arise from modifying methods that the super class already implemented.


But code that uses the old function wouldn't magically start invoking your new function instead, something that inheritance and polymorphism give you for free.


Nothing will magically start calling your new function. If you are defining a new function that didn't exist before, then you'll have to actively call it somewhere. What you're describing instead is overriding an inherited function. However, that is full of pitfalls; I would not call that "for free" by any means. There are examples of the problems in this very thread. Anyway, that's distinct from polymorphism, which is present in functional programming.

You may be missing the point that's being made, though. No one is arguing against interfaces, but overriding concrete methods from a concrete class. Those need to be well thought out as extension points for you to have any chance of having stable software. Not quite for free.


But that's what polymorphism does.

Somewhere deep in the code is calling a.foo(), but when you pass a subclass of A that overrides foo(), then this code "magically" calls that new implementation.

This is where specialization shines and no other paradigm allows this so elegantly and so simply.


That is not exclusive to OO and it wasn't invented by OO either. However, the common pitfall of changing the implementation of a method that was not designed to be changed I would argue to be more common in OO than in other paradigms. That's a common pitfall, though. Not a place where OO shines. Though, to be fair, that's a problem in Java, Python and other popular OO languages, but not inherently a problem of the paradigm per se. C++, for instance, avoids that common issue by not making methods virtual by default.


Why is code reuse your prime example for bad inheritance? What should people do instead?


Interfaces with default implementations are better than classes for a number of reasons, including the diamond problem, and that as systems grow large they almost never fit cleanly into an inheritance tree. Fat pointers with two words, one for an interface vtable and one for the object, work really well.
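The two-word claim is easy to check concretely in Rust (my own illustrative trait name):

```rust
use std::mem::size_of;

// A reference to a trait object is a "fat pointer": one word for
// the data, one for the vtable. A plain reference is one word.
trait Shape {
    fn area(&self) -> f64;
}

fn main() {
    let word = size_of::<usize>();
    assert_eq!(size_of::<&u32>(), word);           // thin pointer
    assert_eq!(size_of::<&dyn Shape>(), 2 * word); // fat pointer
    println!("thin = {} bytes, fat = {} bytes",
             size_of::<&u32>(), size_of::<&dyn Shape>());
}
```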


In your view, how is "interfaces with default implementations" different from "inheritance for code reuse"? At a minimum it looks like a virtual function with a default implementation in the base class. If a derived class doesn't override it, isn't that code reuse?


> In your view, how is "interfaces with default implementations" different from "inheritance for code reuse"?

The latter is a more general classification?

A type which inherits a default method implementation from an interface is an example of "inheritance for code reuse".

A type which inherits a method implementation from a parent class under classical OOP is also an example of "inheritance for code reuse".

However, I argue that "interface inheritance with default implementations" is superior to "classical OOP" because it avoids tight coupling with memory layout, problems with implementing multiple inheritance in classical OOP, etc.


Maybe he meant something similar to Go interfaces, but they are abstract and coupled loosely with implementations after the fact.

That or abstract class/pure interface. There's no need for default implementations introducing assumptions in code.


I believe that both Go and Rust use two-word fat pointers for interface types. I was actually thinking of Rust traits first and foremost; in Rust, traits can supply default method implementations which invoke other methods defined in the same trait.

While I take your point about introducing assumptions, I'm reluctant to give up the convenience of default implementations; they seem to present fewer problems than classically inherited methods because shallow hierarchies are more common with interfaces than with classical inheritance, and because default interface methods cannot directly access member variables because they do not know about object layout — unlike methods inherited from a parent class under classical inheritance.
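To make that last point concrete, a small sketch (illustrative names): a default method can only reach the object through other trait methods, never through fields.

```rust
// The default body below knows nothing about struct layout; it
// can only call other methods declared on the same trait.
trait Vehicle {
    fn speed(&self) -> f32; // required: each implementor provides this

    // Default implementation, inherited for free unless overridden.
    fn describe(&self) -> String {
        format!("moving at {} km/h", self.speed())
    }
}

struct Car { speed: f32 }

impl Vehicle for Car {
    fn speed(&self) -> f32 { self.speed }
}

fn main() {
    let car = Car { speed: 120.0 };
    println!("{}", car.describe()); // uses the trait's default body
}
```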


"Interfaces with default implementations" are a thing in Java and in some other languages, but most people don't know about them:

https://docs.oracle.com/javase/tutorial/java/IandI/defaultme...


I knew that Java added "default methods" to its interfaces a few years ago, but wasn't sure whether Java uses "fat pointers" like Rust and Go.


Because interfaces don’t confer a relationship between implementations, only that they have some common method signatures.


> Because interfaces don’t confer a relationship between implementations

Having the interface implement functions does exactly that, so your point only applies for interfaces without implementations.


> Interfaces with default implementations are better than classes for a number of reasons

it's literally the same thing in practice

> including the diamond problem,

the diamond problem has only ever been a "problem" in OOP textbooks; in years and years of working on OO systems I have never seen it be a problem in practice


Do you know if the mechanism described by Stroustrup in 1989 for supporting multiple inheritance is actually used by modern C++ compilers?

https://www.usenix.org/legacy/publications/compsystems/1989/...

The plan seems so complex that it makes complete sense to me why languages would avoid multiple class inheritance (where each class is afforded direct access to member variables).

In contrast:

• With single inheritance, you don't have to resolve different member variable layouts because there is only one.

• With interface inheritance (with or without default methods), interface methods don't get to access struct members directly and know nothing about object memory layout.


> Interfaces with default implementations are better than classes for a number of reasons, including the diamond problem [...]

"The diamond problem" only occurs with multiple inheritance. At a (wild-ass) guess, only a minority of "traditional" (=inheritance-based) OOP languages have that; certainly not all of them. So as far as "the diamond problem" is concerned, interfaces with default implementations are no better than single-inheritance classes.


In OO languages, inheritance gets the most bang for the buck when used for polymorphism.

All code reuse can be expressed with composition and delegation, bringing more flexibility, testability, cohesion, decoupling, etc, etc, etc.


Inheritance is just a tool, one mostly to avoid. It tightly couples different class implementations, so it will make later refactors error-prone and tend to scatter logic. It also misunderstands OO, which is really about messaging between objects.

Using composition or laying out data differently may yield designs with more desirable properties. There's no one answer and it depends.


How is it a misuse? And what are the alternatives?

To be clear, I do agree with your assertion, however I'm having a hard time actually putting "how/why" into words.


It’s a misuse because (if you really are only using it for code reuse) it’s creating an essential relationship between different objects that is actually incidental.

A list containing two different subclasses of your avoiding-copy-paste base class is now most likely a logical error.


> A list containing two different subclasses of your avoiding-copy-paste base class is now most likely a logical error.

So don't allow those? Supporting inheritance doesn't mean you have to support heterogeneous lists.


You could just as well argue that there should never be any code reuse at all: code reuse always couples objects, and code reuse should never be done if the two different parts don't have the same purpose. So the same rules for all code reuse also apply to inheritance; if you follow them, there are no problems using inheritance. If you have a set of data kinds which all need to implement the same fields with the same purposes, then having a list of that parent class doesn't lead to bugs.


I don’t think it’s true that reuse means tight coupling. With composition you can swap out the composed object at runtime and as far as compile time coupling is concerned you can embed an interface to decouple the outer object from the inner object.


All code reuse creates coupling on some level. Creating more abstractions in between reduces coupling but creates code bloat. There is a trade off.

For example, if you call the same function from many locations, those locations are now coupled, since if you change the function you change all of those locations' behaviour. Many times that is desirable, in which case it is good coupling. The exact same rule applies to inheritance.


Can you elaborate a bit? How is there coupling between a thing which uses an interface and a thing which implements the interface if the two don't know about each other? Especially if the interface is implicit à la Go interfaces or structural subtyping?


Every class implementing the interface needs to follow the same interface contract, that is coupling. If you want to change the interface contract for one of them you have to go and change it for all of them or you have buggy interfaces. Every time you create an interface you create a contract for it, if you have the same discipline with inheritance then inheritance isn't an issue.

The only problem with inheritance is that people can use it for classes that weren't written to be base classes, and therefore a ton of bad programmers use it for code reuse without thinking about the contract it is supposed to implement at all. With that in mind, allowing inheritance only for abstract classes isn't an issue at all.


how is that different from a list containing 2 implementations of an interface?


There is no state to track. If there is, 99% of the time composition has worked out way better for me in the long run, since I don't have this really strong coupling between two objects.


So you don't have a need for code reuse, then, since you don't have many types with the same base structure. Just say that instead of saying that it is bad.


There are often more straightforward ways to reuse code than to use inheritance. ~Two~ three advantages of composition for code reuse:

1. Composition is often more explicit than inheritance. Inheritance is overly magic.

2. Composition avoids incidental coupling of methods. With inheritance this is unavoidable.

3. Composition is more difficult to misuse. Both composition and inheritance have their purposes, but in the wild I’ve seen inheritance misapplied much more often than composition.


yup, totally agree. imho, composition is the right (only ?) way to code reuse.


What benefit does inheritance provide over importing files containing functions? Ruby and loads of other languages let you import files and those functions work like they were part of the class itself.


Many benefits (e.g. attaching behavior to instances rather than modules, though there are other ways to accomplish the same goal), many drawbacks (fragility, yoyoing, etc.). C could be described as "files containing functions", and C++ was originally "C with classes", so it's not like "files with functions" is a revolutionary idea. This is all extremely well-traveled ground.

I'm happy these days using languages like Rust and Go that avoid classical inheritance, so I'm not here to argue in favor of inheritance. I'm just irritated by the pile-on.


I’d also add that live systems like Smalltalk avoid a lot of the fragility and dynamism of inheritance by the way the browsers show all the currently active extensions to the class, and all the current subclasses in a fairly compact view. It’s a lot easier to find and modify affected child classes in this way than it is with “code in files” systems.

Also, the way CLOS does inheritance avoids a lot of the drawbacks too by decoupling classes and inheritance from behavior.


Those are two different features; Delphi / Object Pascal, for example, supports both.

As for what benefit: your example with Ruby can be reversed, and I could ask you the same question.

Anyway, a large part of programming is an art, and different people have different approaches. If you don't like the concept, don't use it. Just don't assume that what you think is the "right way".


Is this ragging on inheritance? I don't see much in the way of value judgements mentioned. The closest is an assertion that doing things for performance is not bad, which seems more like a way of heading off comments like yours than anything else.

It's a bit of interesting history. I don't know why it would get a "Fine. But unrelated stuff" response.


I suspect that many will read the title as derogatory, even if the post is not a firebreather. Also, from the conclusion:

> Personally, for code reuse and extensibility, I prefer composition and modules.


Every once in a while one concept goes out of fashion and another becomes the current fetish. Often the "new" one is actually old, sometimes masquerading under a new name. And then you have an army of prophets who are here to show you "the right way". Rinse and repeat. Personally I read and try various things and use whatever I think is appropriate for a given situation. I don't really give a fuck about the official point of view on the subject at the moment. That changes, of course, when doing a job for a client who insists on things being done in a particular way. Luckily I mostly design and build complete products, so clients mostly care about the end result and don't interfere in the internal plumbing.


The best practices I've read mostly say that inheritance should be limited to one non-interface parent, i.e. only one parent with an invariant. For everything else, composition. That mostly lines up with my experience, though I mostly deal with problem spaces that aren't terribly OO, like DSP.


[flagged]


This article discusses object/class inheritance in programming languages, not wealth inheritance between people.


[flagged]


In case you aren't joking, inheritance in the context of computer programming (for simplicity's sake) is just a metaphor that's supposed to make computer programming easier and has nothing to do with economics.


life is not comfortable for those who get things given to them.


I was once one of the "lucky 10,000"[0] about this: https://www.snopes.com/fact-check/program-management/

[0] https://xkcd.com/1053/


If it has a Snopes article debunking it, it's the inverse of the lucky 10,000 since "everyone knows" the wrong fact



