
Yes! The thing that ends up breaking in the squares and rectangles examples is mutability. Remember that math things are immutable by default. This thing is a square, and by definition also a rectangle, and since its properties and identity are immutable, that will always be true.

However, programming takes those immutable concepts and tends to make them mutable. So now we have a rectangle, and we can change its identity by, say, scaling its width. And it's obvious that scaling just the width of a square will make it no longer a square. So now a square cannot support the same operations that a rectangle can. So now a square is no longer Liskov-substitutable for a rectangle.
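A minimal Rust sketch of that breakage (the trait and names are illustrative, not from any real library): a caller written against mutable-rectangle semantics gets a different answer when handed a Square that preserves its own invariant.

```rust
// Hypothetical mutable-rectangle interface (names are illustrative).
trait MutableRect {
    fn set_width(&mut self, w: u32);
    fn set_height(&mut self, h: u32);
    fn width(&self) -> u32;
    fn height(&self) -> u32;
}

struct Rectangle { width: u32, height: u32 }

impl MutableRect for Rectangle {
    fn set_width(&mut self, w: u32) { self.width = w; }
    fn set_height(&mut self, h: u32) { self.height = h; }
    fn width(&self) -> u32 { self.width }
    fn height(&self) -> u32 { self.height }
}

// A Square that keeps its invariant must couple the two setters...
struct Square { side: u32 }

impl MutableRect for Square {
    fn set_width(&mut self, w: u32) { self.side = w; }  // also changes height!
    fn set_height(&mut self, h: u32) { self.side = h; } // also changes width!
    fn width(&self) -> u32 { self.side }
    fn height(&self) -> u32 { self.side }
}

// ...so a caller relying on rectangle semantics is silently broken:
fn stretch(r: &mut dyn MutableRect) -> u32 {
    r.set_width(4);
    r.set_height(5);
    r.width() * r.height() // a rectangle caller expects 20
}

fn main() {
    let mut r = Rectangle { width: 1, height: 1 };
    let mut s = Square { side: 1 };
    println!("{}", stretch(&mut r)); // 20
    println!("{}", stretch(&mut s)); // 25: Square was not substitutable
}
```

The violation only shows up at the call site, which is exactly what makes it nasty: both impls type-check fine in isolation.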




Perfect! This is the covariant/contravariant/invariant distinction, right?

"Read-only data types (sources) can be covariant; write-only data types (sinks) can be contravariant. Mutable data types which act as both sources and sinks should be invariant." from https://en.wikipedia.org/wiki/Covariance_and_contravariance_...


Absolutely, and that's another excellent way to explain the problem and come to the same conclusion. The mathematical relationship between rectangles and squares is covariant, because squares have more specific constraints than rectangles. But when you make rectangles mutable, those mutations are contravariant, because they only guarantee to preserve less specific constraints. So you can't make squares from mutable rectangles!
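To illustrate the "sources can be covariant" half in Rust (again a sketch with made-up names): if the interface is read-only, a Square substitutes for a Rectangle without any risk, because no operation can violate its invariant.

```rust
// Read-only "source" view of a rectangle: safe for Square to implement.
trait RectSource {
    fn width(&self) -> u32;
    fn height(&self) -> u32;
    fn area(&self) -> u32 { self.width() * self.height() }
}

struct Rectangle { width: u32, height: u32 }
struct Square { side: u32 }

impl RectSource for Rectangle {
    fn width(&self) -> u32 { self.width }
    fn height(&self) -> u32 { self.height }
}

impl RectSource for Square {
    fn width(&self) -> u32 { self.side }
    fn height(&self) -> u32 { self.side }
}

// Nothing here can write to the shape, so no law a Square obeys can break.
fn describe(r: &dyn RectSource) -> u32 {
    r.area()
}

fn main() {
    assert_eq!(describe(&Rectangle { width: 2, height: 3 }), 6);
    assert_eq!(describe(&Square { side: 3 }), 9);
}
```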


Excellent point! I think it's fair to say that substitutability can only be satisfied (in general) for a fixed set of operations. This is why I believe Haskell's (and by extension Rust's) approach to bounded ad-hoc polymorphism is the best that I've seen in a production language. Going beyond Haskell, Rust also allows you to choose static dispatch, making bounded polymorphism zero cost. Dynamic dispatch is still available, when desired or needed (but it's almost never needed).
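For anyone unfamiliar with the static/dynamic dispatch choice being referenced, here's the standard Rust shape of it (using the real `std::fmt::Display` trait):

```rust
use std::fmt::Display;

// Static dispatch: the compiler monomorphizes a copy per concrete type,
// so the trait bound costs nothing at runtime.
fn show_static<T: Display>(x: T) -> String {
    format!("{x}")
}

// Dynamic dispatch: one compiled function, resolved through a vtable.
fn show_dyn(x: &dyn Display) -> String {
    format!("{x}")
}

fn main() {
    assert_eq!(show_static(42), "42");
    assert_eq!(show_dyn(&42), "42");
}
```

Same bounded polymorphism either way; you just pick where the dispatch cost lives.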


I'm a huge fan of the Rust and Haskell systems for exactly this reason. I sometimes play with the idea of a language that has that kind of type system, but with a more forgiving environment than Rust or Haskell. Something like C#, but swap out the type system to feel more like Rust.


> The thing that ends up breaking in the squares and rectangles examples is mutability.

Mutability isn't sufficient, though. In dynamic languages, a mutated object can retract its squareness (Smalltalk `become:`; JavaScript `__proto__` assignment; Python object self-mutation). And predicate types (it's a Square iff its sides are the same length) handle mutation even more gracefully.

So I'd rephrase that as: "if a specific implementation can mutate in ways that invalidate the laws expected of it, and you can't adjust those expected laws to match, your implementation might not match expectations". "Might not", because relaxed or pruned expectations may still be locally sufficient.


Good point. I would expect the immutable version of scaling a side to return a new rectangle, thus allowing a scaled square to also return a rectangle. I hadn't thought about mechanisms that would allow a type to basically upcast itself in a mutable operation. Of course, for typing purposes, the signature looks the same: this operation yields a rectangle. It's really just a matter of where the return value lives.
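That immutable version is easy to sketch in Rust (illustrative types, not from the earlier examples): scaling consumes the value and returns a fresh Rectangle, so a scaled Square "upcasts" simply by virtue of its return type.

```rust
#[derive(Debug, PartialEq)]
struct Rectangle { width: f64, height: f64 }

struct Square { side: f64 }

impl Rectangle {
    // Consumes self, returns a new value: no invariant can be broken in place.
    fn scale_width(self, factor: f64) -> Rectangle {
        Rectangle { width: self.width * factor, height: self.height }
    }
}

impl Square {
    // A width-scaled square simply *is* a rectangle; the signature says so.
    fn scale_width(self, factor: f64) -> Rectangle {
        Rectangle { width: self.side * factor, height: self.side }
    }
}

fn main() {
    let r = Square { side: 2.0 }.scale_width(3.0);
    assert_eq!(r, Rectangle { width: 6.0, height: 2.0 });
}
```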


> for typing purposes, the signature looks the same: this operation yields a rectangle. It's really just a matter of where the return value lives.

Well, it yields a type union of square and rectangle. A runtime, whether the original language appears mutable or not, can play games with that, like resolving types lazily. If, for example, some object is only ever drawn, and draw happens to dispatch on, say, a predicate type DistantObjectThatLooksLikeADot, independently of squareness, then there's no need to resolve whether it was square. No one will ever know, or ever have to pay the cost of finding out. It has, say, a MightBeSquareMightBeRectangleHaventHadToCareYet dictionary. :) Which can become important as types get more expressive, and expensive to prove/test.


Sum types! I don't get to use that kind of stuff very much, so I tend not to think about them off the top of my head :)

How do languages with sum types handle scenarios where the type is A+B, but B is a subtype of A? So you're really guaranteed to have an A, but you may or may not have a B? Do they allow transparent reference as type A, or must you disambiguate first?

That is, given a function that takes an A, can you pass it a type A+B, given that B is a subtype of A?
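It depends on the language. In languages with true union types plus subtyping (TypeScript, Scala 3), `A | B` with `B <: A` just simplifies to `A`, so yes, you can pass it transparently. Rust-style sum types are different: the variants are disjoint tags, not subtypes, so you must disambiguate with a match first. A sketch with hypothetical types:

```rust
struct Rectangle { width: u32, height: u32 }
struct Square { side: u32 }

// Rust-style sum type: Rect and Sq are disjoint variants, not subtypes.
enum Shape {
    Rect(Rectangle),
    Sq(Square),
}

fn rect_area(r: &Rectangle) -> u32 {
    r.width * r.height
}

impl Shape {
    // To hand the value to a function expecting a Rectangle, you must
    // pattern-match first, converting the Square variant explicitly.
    fn as_rectangle(&self) -> Rectangle {
        match self {
            Shape::Rect(r) => Rectangle { width: r.width, height: r.height },
            Shape::Sq(s) => Rectangle { width: s.side, height: s.side },
        }
    }
}

fn main() {
    let s = Shape::Sq(Square { side: 3 });
    assert_eq!(rect_area(&s.as_rectangle()), 9);
}
```

The usual Rust idiom for "you're guaranteed an A, maybe also a B" is exactly this kind of accessor (or a `From`/`Into` conversion) rather than transparent substitution.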



