
> you cannot use a square everywhere you can use a rectangle (for example, you can’t give it a different width and height)

Can someone come up with a better example here? Intuitively, I would say, "Yes, if you ask me for any rectangle, and you reject a square, you are wrong." If you say you can use any rectangle to do your thing, you should absolutely be able to also use a square.

Why am I not convinced with the given example? Because fundamentally, I don't think "set the width and height" counts as something you can do with a rectangle. A rectangle has a width and height, and you can't just will it to have a different width and height and expect it to obey you through some force of nature. What you can do is construct a new rectangle and destroy the old one in the process.

In other words, if you expect to change the shape of the item, you should not permanently identify the item by its shape. The given example is a bit like saying "The superclass Animal has a method becomeCat which turns any animal into a cat." and then feigning surprise when your code breaks for any non-feline animal.
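
To make the failure concrete, here is a minimal Java sketch of the setter-based design being criticized (the names are mine, not the article's):

    class Rectangle {
        protected double width, height;
        void setWidth(double w)  { width = w; }
        void setHeight(double h) { height = h; }
        double area() { return width * height; }
    }

    class Square extends Rectangle {
        // Preserving the square invariant forces both dimensions to move together...
        @Override void setWidth(double w)  { width = w; height = w; }
        @Override void setHeight(double h) { width = h; height = h; }
    }

    // ...so code written against Rectangle silently breaks for a Square:
    // r.setWidth(2); r.setHeight(3); assert r.area() == 6;  // fails when r is a Square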




The square vs rectangle example is a great one for showing how to not use types at all, IMO.

> As a type, this relationship is reversed: you can use a rectangle everywhere you can use a square (by having a rectangle with the same width and height), but you cannot use a square everywhere you can use a rectangle (for example, you can’t give it a different width and height).

Which is a good thing (yes, I'm a proponent of strong typing, like in Rust). If you want that square to be used in places where only a rectangle can be, you should either be explicitly required to cast it to a rectangle type or have a language that can define and handle contravariance for a consumer method, where it would be OK to accept a square as a rectangle.

EDIT: As this is getting upvoted and I said it slightly wrong: "where it would be OK to accept a rectangle where a square (or its supertype, the rectangle) is acceptable." Goes some way to prove that strong typing is the better solution, though, I guess. :-)
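
To illustrate the first option, explicit conversion instead of subtyping, here is a small Java sketch (names are hypothetical):

    record Rectangle(double width, double height) {}

    record Square(double side) {
        // Explicit, one-way conversion instead of an implicit subtype relationship:
        Rectangle toRectangle() { return new Rectangle(side, side); }
    }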


> and you can't just will it to have a different width and height and expect it to obey you through some force of nature

I think you have let functional programming and immutable data-structures bias your world view.

The real world is mutable and the wonder of the digital mutable world is that it is pretty much just will-alone that can set attributes as you describe. It is immutability that is a trendy but artificial layer on top of this reality.


> The real world is mutable

Only if you ignore time. It's not possible to change the state of the world at some previous instant (as far as our understanding of physics is concerned). The mutable world model is an artificial construction that aligns with our human perception.


Something that changes over time is mutable. Time exists and we can’t escape it, even with time-indexed immutable data structures that implement explicit mutability. The only question is: does the thing change internally (an object), or is it replaced with a new one (a value)?


I think it's not at all evident that reality is not a value replaced with a new one all the time.


Even if that were the case, it wouldn’t be useful, since we experience time with continuity anyway. Bob at time t is still Bob at t+1 even if his state (like position) has changed. If Bob were a value, then he would be another person at t+1; we would have to add a persistent ID to the Bob values so we could see them as the same object.
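
That ID-plus-value encoding is straightforward to sketch (hypothetical Java, names are mine):

    record Position(double x, double y) {}

    // Bob at t and t+1 are two distinct PersonState values
    // that share the same persistent id.
    record PersonState(java.util.UUID id, Position position) {}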


> Bob at time t is still Bob at t+1

I guess we're getting more into philosophical issues now. If I leave an ice cube on my counter, at exactly what time is it no longer an ice cube?


Mutability is mutability; it also applies to ontology, even if most OO languages don’t model dynamic ontology with inheritance (unlike, say, Self or Cecil).

Your ice cube was never just an ice cube in the first place; it was just some water that happened to be frozen as a cube... once heat was applied to the water, its state changed so that it eventually could no longer be classified as an ice cube.


> Your ice cube was never just an ice cube in the first place

That's my point. This idea that there are objects with mutable state is a myth. Even at the smallest scale, what we call elementary particles are abstractions. There is nothing except the state of the universe at a given instant in time.


Again, we are unable to work or perceive at that level. So the abstraction of state is incredibly useful to us non-sub-atomic beings.


Agreed, but if we're accepting an abstraction instead of reality, then arguing for mutability because you think it's reality doesn't make much sense now, does it? You're saying it's okay to abstract things away from molecules, but it's not okay to abstract things away from mutation. Why?

(Note that I'm not even persuaded yet that mutation is reality. I'm just saying that even if mutation were reality, it doesn't follow that mutation is the best abstraction with which to model reality.)


> Bob at time t is still Bob at t+1 even if his state (like position) has changed.

Again, I don't think this is at all evident.

What if we remove a limb? Give him a dose of LSD? Give him a brain tumor? Replace most of his cells as happens to each of us every few years? What change is large enough that it's easier to represent with:

    bob = getNext(bob);

...rather than:

    bob.foo = bar;

Is it even harder to represent change one way rather than the other?

Indeed, socially, the concept of identity is a leaky enough abstraction to cause problems; for example if we are criticized for actions that we took years ago, it's easy to get defensive even if we are a different enough person now that we would never take that action now.

Bringing this back to inheritance: if it would make sense to model Bob as an instance of Child one day and as an instance of Octogenarian another day, why not create a new instance of Bob?

You want to argue this from the standpoint of "this is how things are" but I think that there are multiple ways to model Bob and which is appropriate actually depends more on the needs of the system than the true nature of Bob.


Bob’s state might change, but he is still Bob. That would also go for the Ship of Theseus. However you represent it, change is still change; it is still mutability.

We can argue about whether identity is useful in real life, but most people cling to names and identity; it isn't a controversial subject.

Dynamic inheritance is extremely useful in a programming language, though only a few have it. It is a very OO concept, as are the languages that have explored the concept (e.g. Self, Cecil, among others, even Javascript has this aspect, though not focused enough to use very well).


"Everything changes and nothing remains still; you cannot step twice into the same stream"


Ironically, the latter part of what you quoted shows the solution to representing change in an immutable system: create a different stream object instead of mutating the same stream object.

(Streams in this case not being streams in an I/O sense, just an example of a class).


Meh. Once you mutate an object it is not the same object it once was either; you've only maintained a stable memory reference and alias, without the overhead of making a new one.


Yes, that's exactly what I meant. Sorry for not being clearer.


If you're modeling the world, you can do so immutably. If you're interacting with the world (i.e., I/O), then you cannot.


No. Mutation essentially breaks subtyping. The infamous ArrayStoreException in Java is the story of how references (which should be invariant) are treated as covariant and therefore causes runtime exceptions.
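
For anyone who hasn't run into it, the hole is easy to reproduce in plain Java:

    class Demo {
        public static void main(String[] args) {
            Object[] objs = new String[1];  // legal: Java treats String[] as a subtype of Object[]
            objs[0] = Integer.valueOf(42);  // compiles, but throws ArrayStoreException at runtime
        }
    }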


If your array type (the only reified generic type in Java) is going to be parameterized by a single type, then you're right that it needs to be invariant.

One alternative would be to keep track of two types for an array: the most general type that can be stored into it and the most general type that might come back out of it. Any type cast that makes the types storable more strict or the type expected coming out less strict should be allowed. If you throw in a bottom type in your type system, then you get immutable arrays for free.
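
Roughly, in Java-like notation (this is a sketch of the idea, not real Java variance):

    // S = most general type that may be stored; L = most general type a load returns.
    interface BiArray<S extends L, L> {
        void set(int i, S value);  // stores are checked against S
        L get(int i);              // loads are only promised to be L
    }
    // Safe "casts" shrink S (stricter stores) or widen L (weaker reads).
    // Instantiating S with a bottom type makes set() uncallable: an immutable array for free.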

Of course, such a type system would probably cause a revolt among the majority of Java programmers for being too complex.


Which is only because Java doesn't understand contravariance. In Scala, that problem no longer exists.


No. When references or arrays of references are treated as contravariant, you would have the opposite problem of ArrayReadException. You can now store things into arrays safely but you can no longer extract things from arrays safely.
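
Java's wildcards demonstrate the read half of this trade-off (using List, since Java arrays can't be given a contravariant view):

    import java.util.*;

    class Sink {
        static void demo() {
            List<? super String> sink = new ArrayList<Object>();
            sink.add("hello");       // storing a String through the contravariant view is safe
            Object o = sink.get(0);  // but reads only come back as Object, not String
        }
    }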


Consumers are contravariant in Scala, but producers are covariant.

EDIT: Good reference: https://www.atlassian.com/blog/software-teams/covariance-and...


This is not wrong, but it ignores that in terms of reliable software architectures, it’s very often still better to model change using strictly immutable data types. Often, not always.

As usual the truth is somewhere in the middle, which is why I don’t like the periodically recurring discussions why OO is bad, FP is good or vice versa. I think the article splits up the issue quite nicely, the conclusion I get from it is that the answer to the question whether to use inheritance or not is ‘it depends’.


The real world is mutable in some sense, but our thinking, even in the most trivial exercises, works by detaching _objects_ from their physical nature and building immutable _ideas_. Thinking is a struggle trying to find what is necessary in a contingent world, trying to find essence in a world of form. That is why values and immutability are so useful for modeling, architecture, and building resilient software.

I talked more about this in this MeetingC++'17 talk: https://www.youtube.com/watch?v=NMol_5-2owo


In the real world, you stretch a square and it may become a rectangle. So in OO, even the type of an object should be mutable if you want to model the real world closely.


Is the real world "subtypeable", in your opinion?


Sure. I want a vehicle to go to work. A car, preferably, but a motorcycle or bus will do too in a pinch.


I don't know about this. Is a teleporter a vehicle? What about a vehicle which doesn't have wheels, in a world built around assuming that vehicles have wheels?


>I don't know about this. Is a teleporter a vehicle?

Depends on your definition of vehicle. For many purposes (e.g. going somewhere) it could be, especially if it also existed.

Subtyping just means having a taxonomy of things, where we can substitute things lower on the taxonomy (more concrete, more specialized, and so on) for things higher up when wanting less abstraction (and vice versa).

And this is pretty much the case with the world -- Plato's "ideas", scientific taxonomies (e.g. of animals and plants), or RDF and semantic taxonomies, and so on are all based on the same concept.

(That doesn't mean they're perfect descriptions of the world, like our class taxonomies in OOP don't lead to perfect descriptions of most problem domains either).

>What about a vehicle which doesn't have wheels, in a world built around assuming that vehicles have wheels?

That's a problem of definition of what should belong in the class of things we call "vehicle" (and you could have the exact same questions when doing OOP).

As such, it's not an argument about whether the real world has or doesn't have subtyping.


Will a bicycle? A plane? A boat?


Most of them. But that's not the point. Not all subtypes are applicable to any task in OOP either.


Then I'm not quite sure what's the point of them. Surely instead of using a type you're allowed to use any of its subtypes.


Values should be immutable even in OO languages. A point’s x and y coordinates can’t be mutated, because then it would be a different point, just like a number’s value can’t be mutated. The question, then, is whether a rectangle is a value or an object. I would say it’s not an object, and I would stack allocate it like any other value, recomputing it if a rect property ever changed.
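
In Java terms, a sketch of point-as-value (a record, so the fields are final):

    record Point(double x, double y) {
        // "Mutation" produces a different value rather than changing this one:
        Point withX(double newX) { return new Point(newX, y); }
    }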

The real world has both values and objects; any ideology that doesn’t acknowledge both is just fooling itself.


> if you ask me for any rectangle, and you reject a square, you are wrong

No, no: "If you ask me for any object that can have independent width and height, and you reject a Square, you are..." right, of course. It depends on the properties that define what a Rectangle and a Square are in your code.


This matches my experience: ontological inheritance gets in the way. What we actually want is traits. What can something do, not what it is.


This seems like structural subtyping? Are these synonyms?


I think traits (as I know them) are still nominal subtyping. E.g.: a Rust struct implementing a next() method does not implement the Iterator trait even if they are structurally equivalent. You have to explicitly implement the trait.
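
Plain Java interfaces behave the same way, for comparison: a structural match isn't enough, you have to declare the relationship.

    interface Counter { int next(); }

    class Clicker {                    // structurally identical to Counter...
        private int n = 0;
        public int next() { return ++n; }
    }

    // ...but not nominally related, so this does not compile:
    // Counter c = new Clicker();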


More like interfaces. A lot of OOP languages just did it in a limited way, where the interfaces for a class are closed. Meanwhile, in Haskell or Rust, you can define your interfaces and then provide implementations of them for existing types like String. Or the Scala workaround, which basically implicitly converts to wrapper classes that implement the interface. With the goal being to extend existing types with new shared behavior, without the limitations of full inheritance.


The only subtyping in Rust is lifetimes; other types have no subtyping.

Traits are “ad hoc polymorphism”.


Which is another way of saying that the domain model is what determines the properties of polygons that actually matter. You may have two classes, Square and Rectangle, and they may be mutually incompatible because you need to represent the shape of a hole and the shape of a plug. In that case your square isn't compatible with a rectangular shape even though it's a type of rectangle.

I think the author is saying that sometimes people are unclear about the domain model, and OO doesn't really give you any tools to reason about the model that you don't have in imperative languages, so it's not really an improvement. Fancy type systems are not a substitute for understanding the thing you're reasoning about.


> Fancy type systems are not a substitute for understanding the thing you're reasoning about.

And now you got me wondering about all the times I wrote a Haskell type, then some buggy code, mindlessly changed the code until the compiler stopped complaining, and it worked flawlessly.

But I guess your comment was about that "writing the types" stage. There were a few times I mindlessly wrote those too, based on where they would be used (I could have the compiler infer them for me, but I'm not used to it). I can remember a few times I did both on the same functions: mindlessly wrote the types and the implementation.


> No, no: "If you ask me for any object that can have independent width and height, and you reject a Square, you are..." right, of course. It depends on the properties that define what a Rectangle and a Square are in your code.

I think you nailed an important distinction here, but maybe used the wrong words in doing so. The correct definition of a rectangle, with this interpretation, is any object that can change its width and height independently.

I'm just not sure that this is such a sensible definition of a rectangle.

In my mind, the word "rectangle" is a description of how things are in this instant, not a description of in which ways something can change in the future. The latter is a valid concept and something we need a word for, but I'm not sure "rectangle" is such a good candidate.

This also opens up for a solution to the problem:

- In the first sense, rectangle > square. A rectangle has four right angles, and squares are rectangles where both side lengths happen to be the same.

- In the second sense, square > rectangle. Squares are things which can be scaled, and rectangles are squares which can scale each axis independently.
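
The second ordering is easy to express as interfaces rather than concrete shapes (a sketch; the names are mine):

    interface UniformlyScalable {
        void scale(double factor);         // squares (and rectangles) support this
    }

    interface Stretchable extends UniformlyScalable {
        void scale(double fx, double fy);  // only rectangles support independent axes
    }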


I don't think the "that can change" characteristic is important here.

You can declare that "for all values of width and height, w does not have to equal h" to be valid. Mutability is irrelevant.


But a rectangle as traditionally defined can have equal dimensions. Consider the rectangle whose width slowly grows to be longer than the height. Surely it does not skip a value because it happens to equal the height?


"does not have to be equal" != "can't be equal".

The set of all possible rectangles have w and h that can be different (or the same) values.


Yes, the problem is that mutable and immutable objects have different methods and therefore fit into different inheritance trees.

If you construct an immutable rectangle and pass in the same width and height, you have a square. (An object implementing the same API could be implemented by a subclass whose constructor just takes a width.)

If you construct a mutable rectangle with the same width and height, you have a rectangle whose shape is temporarily square until a setter is called. The "square" invariant doesn't hold for the object's entire lifetime, so it's not a square. Instead you could write an isSquare method and sometimes it would return true, depending on the object's state.

The inheritance relationships you expect from thinking about math only hold for immutable objects.
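
For instance, a sketch of the immutable version, where the subtyping is sound:

    class Rect {
        final double w, h;
        Rect(double w, double h) { this.w = w; this.h = h; }
    }

    class Square extends Rect {
        Square(double side) { super(side, side); }  // same API, constructor takes one length
    }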


I actually don't think the problem is one of mutability/immutability – it's just that a mutable setting very clearly exposes the problem.

I think the core of the problem is that the object model has permanently and irreversibly associated an identity to an object based on attributes that are actually malleable. Someone else made a good connection to pastry dough, which can be shaped even more freely. We would never hold up a clump of pastry dough, however it is shaped, and proclaim "Down at the core, this is fundamentally a rectangle," because we know the rectangleness is just a temporary description of its spatial attributes, which may change the very instant we accidentally drop the dough and it hits the floor.

Similarly, if "being a rectangle" in our model means "being able to change the width and height independently", then the square should never have been allowed to be a subtype of the rectangle.

I guess this touches more and more closely on what the submitted article discusses, and my initial confusion stemmed from the fact that I had never before heard the "can change width and height independently" definition of a rectangle, because from my maths background a rectangle is just a description of something that happens to be, not of something that changes itself.


Yes, and that's directly related to immutability. Immutable objects are timeless; their initial state is their final state. Any property that holds for their initial state can be encoded in the type and placed somewhere above it in the inheritance hierarchy.

You can think of a mutable object as a collection of immutable and mutable properties, where only the former can be moved into the type and become part of the type hierarchy.

This distinction isn't really about inheritance; it's more about what a type is in the presence of mutability. The type represents the invariants.


Very well put.


I think I get what you're saying here. On some level, an object method is a function which takes an object of a particular type and returns a new object of the same type.

So if you have a function like

    Rectangle GetRectangleWithWidth(Rectangle r, double w) { ... }

which is equivalent to a setter method like

    void Rectangle::SetWidth(double w) { ... }

you can do:

    Rectangle r, r2;
    r2 = GetRectangleWithWidth(r, 2.0);

But in that case doing something like

    Square s, s2;
    s2 = GetRectangleWithWidth(s, 2.0);

makes no sense, and it's pretty clear why: the return value is wrong. You can use a subclass as an argument to a function, but you can't return a superclass and just assign it to a subclass. And if you have some function

    void Operate(Rectangle& r) { r = GetRectangleWithWidth(r, 2.0); }

then passing in a square should be a type error and not compile. (Although, as far as I can tell, C++ would actually accept it: a Square& binds to a Rectangle&, and the assignment slices, overwriting only the Rectangle part of the Square.)


>Why am I not convinced with the given example? Because fundamentally, I don't think "set the width and height" counts as something you can do with a rectangle. A rectangle has a width and height, and you can't just will it to have a different width and height and expect it to obey you through some force of nature.

Sure you can. You can dictate its width and height at construction time (e.g. the table maker who makes an X by Y board), or at any time later (e.g. the same table maker who cuts a 3X by X board into a 2X by X board).

Even more so with other materials, where we can just bend them and squash them into the dimensions we want again and again (e.g. pastry dough).


The carpenter has certainly altered the dimensions of the board, but in so doing has he not destroyed the old rectangle and created a new one?


No. Even less so with the pastry dough squashed into new dimensions.

Or when a citizen turns into a soldier -- an army camp is like a soldier factory/constructor. Hmm, how's that for a mixed metaphor?


I disagree. If you remold a square piece of pastry dough into a circle you haven't "circled the square", you have destroyed the square entirely!

I don't quite get the army metaphor, but I don't think the relationship of "citizen" and "soldier" is analogous to that of "rectangle" and "square" or even "rectangle" and "rectangle with different dimensions".


>I disagree. If you remold a square piece of pastry dough into a circle you haven't "circled the square", you have destroyed the square entirely!

Well, the physical thing is all still there, just in a different shape.

You have destroyed only an abstract quality, its squareness, not the thing.

So you haven't "destroyed it" entirely, any more than changing the width of an object to something equal to or different from its height is "destroying" the object. It just sets a value for a field of the same struct in memory.

Whereas your argument was "A rectangle has a width and height, and you can't just will it to have a different width and height and expect it to obey you through some force of nature".

If by the rectangle you mean the abstract quality, then you're right, you can't.

But we're talking about an object in computer memory that's an instance of a square or a rectangle, or about some physical thing that is one or the other shape.

And if you mean those, then yes, you can very much will that object to a different width and height.

Which is one of the distinctions TFA makes: the abstract quality can't change, but the thing that embodies that quality can change qualities (and, with them, attributes, like width and height).


> You have destroyed only an abstract quality, its squareness, not the thing.

Here we are in agreement.

> But we're talking about an object in computer memory that's an instance of a square or a rectangle, or about some physical thing that is one or the other shape.

Everything in a computer program is an abstraction, and of course you can arrange these abstractions however you like. You could create an instance of a Square that inherits from Paper and has dimensions of "A8". I'm merely arguing that when building an object whose shape may change, using a PastryDough class is probably preferable to using one called Square.


Agreed this example is deeply flawed! As you point out, there is a difference between what you can do with an object versus how you can create one. And "you could use SOME rectangles anywhere you could use a square" is not even a claim about type relationships... It would have to be "you can use ALL rectangles everywhere you could use ANY square" to reverse the type relationship.

In reality, abstract and usage typing are highly aligned. The incompatible sibling is code inheritance. Knuth put it more simply: "Don't inherit to reuse code, inherit to be reused."


> Yes, if you ask me for any rectangle, and you reject a square, you are wrong

Intuitively it makes sense, but practically... What if your function wants to set the width and the height to different values?

This is valid for a rectangle, not for a square.

From a mathematical perspective, a square might be a rectangle. From a software standpoint, it's not.

The reason for the discrepancy between mathematics and programming?

Mutability.


To fix it in the real world, you make a factory abstraction which you feed with initial data, and then the factory knows whether it is a rectangle or a square that is to be produced. So the developer should not be concerned with cats or dogs; the developer should just be able to call the Feed() method.
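
A minimal sketch of such a factory in Java (all names hypothetical):

    interface Shape { double area(); }

    record Rectangle(double w, double h) implements Shape {
        public double area() { return w * h; }
    }

    record Square(double side) implements Shape {
        public double area() { return side * side; }
    }

    class ShapeFactory {
        // The factory, not the caller, decides which concrete type is produced:
        static Shape make(double w, double h) {
            return (w == h) ? new Square(w) : new Rectangle(w, h);
        }
    }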

But we all know how this ended in Java.



