Goodbye, shitty "Car extends Vehicle" object-orientation tutorial (canonical.org)
232 points by sebkomianos on Aug 22, 2011 | hide | past | favorite | 128 comments



Heh, I agree somewhat that the Duck > Animal example is a bit annoying... but this:

  Here’s an example that I think would be better to use instead: the
  `Visible` hierarchy in [Pygmusic][], which is a kind of software drum
  machine.  A `Timer` is a horizontal strip on the screen with a stripe
  racing across it. A `NumericHalo` is a spreading ripple on the screen
  that fades. A `Sound` is a thing on the screen that makes a sound when
  a `Timer`’s stripe hits it. A `Trash` is a thing that deletes `Sound`s
  when you drop them on it. They all inherit from `Visible`, which
  represents things that can be drawn on the screen and perhaps respond
  to mouse clicks or drop events, but they do different things in those
  three cases. In addition, the `Trash` and `Sound`s are subclasses of
  `ImageDisplay`, because the way they handle being drawn is simply to
  display a static image, so that code is factored into a superclass.
Is by far worse. The goal of the Person or Duck example is to pick a trivial case to explain a concept. Of course you're not building SimCity, and of course things will be totally different in real apps.

Also, it's true that inheritance is often the wrong solution, and it could arguably be said that it creates more problems than it solves. However, before understanding all the pitfalls and the why/why not of inheritance, you first need to understand what inheritance is!

So, instead of shooting someone who gives an example with a Duck to explain OO, let's just say that the tutor should say beforehand: "Let's take a trivial example to understand the concepts of OO. Later on, we'll see why this is not always the preferred solution."


I agree that the proposed replacement is terrible.

I'd suggest files are a great example for people with a programming background, because you can quickly show problems (e.g. streams considered as a subclass of files) that entail the kinds of tradeoffs and decisions the author rightly points out are shortcomings of the duck example. (Most of the shortcomings listed are in fact more a case of the author trying to show off than problems with the example at all.)

But, you know what, duck, bird, and penguin are actually just fine. As the author himself (I assume it's a him) points out, penguins can't fly. And let's say that over here we're doing vehicle, car, plane, and helicopter. How can we avoid implementing flying twice? And is it a good idea to treat flying like a plane, a helicopter, and a bird (but not a penguin) as the same thing? Oh, and the way penguins swim is amazingly analogous to flying; how can we leverage that?

So, bye bye shitty drum machine example. I'm going back to duck and bird.


As someone who is only a couple years past my first encounter with car/vehicle, duck/bird examples I can say that they did nothing useful for me. I think the reason they were so useless is not because they were trivially simple, but because they emphasize the wrong type of focus when dealing with inheritance.

Presenting inheritance in terms of duck/bird emphasizes thinking in terms of similarities between nouns, which is misleading. I would have much rather seen some contrived example which does something other than print "quack".

This, however, is a criticism of most tutorials that I've read. Most would benefit from having more substantive examples and letting the syntax and concepts fall out of them.


On the one hand, using examples which do nothing more than print "quack" is a bit of a failure of imagination on the teacher's part. But then, _needing_ an example to do something other than print "quack" is a failure of imagination on the student's part.

You had, I assume, seen functions before.

If I were teaching such material I would keep the content of the functions themselves as simple as possible to allow the student to concentrate on the novel material. Once I had established that the student understood the principle we could then examine or at least discuss more "real world" cases.

If we go back to the original article for a moment, I can far more easily see an immediately accessible problem-set emerging from bird / duck / penguin (e.g. programming the behavior of animals in a virtual world) than from a particular drum machine whose operation might make no sense to many people. Are ducks a "subclass" of bird? How do we handle the fact that penguins and ducks can swim but penguins can't fly? Is it useful to consider the penguin a "bird" at all? Perhaps we shouldn't have a "bird" class but an "animal" class. Should we do multiple inheritance? Mixins? Templates? Perhaps we'd be better off with an "animal" class and allow some animals to fly and some to swim? Are complex classes like this bad? How could all these different approaches to OO design be applied?

Now explain to me how a drum machine illustrates these problems in a way everyone can immediately appreciate.


>> Now explain to me how a drum machine illustrates these problems in a way everyone can immediately appreciate.

A drum machine may not be the best example (as the author mentioned) but it is a concrete example. You argue that an animal based example illustrates all these different possibilities in an object oriented approach.

>> Are ducks a "subclass" of bird? How do we handle the fact that penguins and ducks can swim but penguins can't fly? Is it useful to consider the penguin a "bird" at all? Perhaps we shouldn't have a "bird" class but an "animal" class. Should we do multiple inheritance? Mixins? Templates? Perhaps we'd be better off with an "animal" class and allow some animals to fly and some to swim? Are complex classes like this bad? How could all these different approaches to OO design be applied?

The problem is that this is completely divorced from anything that any sane person learning programming would want to do.

Sure, printing "quack" is interesting for the first page of a tutorial, but I would expect that the usefulness of the examples in a tutorial would scale with their difficulty. If you're using multiple inheritance, mixins, and templates, you should be using code that either requires these techniques or benefits from them.

Imagine you're a novice programmer. You've used a computer your whole life, and do awesome things with it every day. You want to learn how to make it do cool things. In and of themselves templates, mixins, and inheritance are not cool things. They are only cool if you know what they can help you do. A novice programmer will have no idea what they can help him do.

So, for a tutorial aimed at teaching someone their 12th programming language, use birds/ducks. But if you're aiming at teaching a novice, and you use a birds/ducks example you have almost certainly failed to keep your student engaged. Instead, go with something that doesn't cover all the bases, but starts to show how much work can be accomplished with some simple OO structures.


> The problem is that this is completely divorced from anything that any sane person learning programming would want to do.

It sounds to me a lot like a typical game-programming kind of design question. Games are pretty popular. Can you think of a more generally popular problem domain?

Certainly, one can imagine very similar examples (car / vehicle / truck etc.) which would bear directly on very popular game analogies.

> So, for a tutorial aimed at teaching someone their 12th programming language, use birds/ducks.

I think your point here is well-taken. If the student knows about conventional programming, introducing OO coding is a different task than otherwise. But the whole 'print "quack"' point is pretty silly -- of course you'll initially teach about inheritance using trivial functionality that doesn't distract from the teaching point. I've seen OO tutorials that talk about point, square, and circle classes that never actually draw a point, square, or circle. Is printing "width = 100" more profound than "quack"?

In the end, you teach a topic by focusing on that topic, not adding a bunch of distracting detail. The distracting detail may be useful for motivation, but you need to get it out of the way to actually explain the point.

I might finally add that teaching a specific person you're talking to is a different task than writing a book. When you're teaching a specific person you can find out what they're interested in and use that as both motivation and example. But, if you're writing a book you want to find analogies that are as universally accessible and appealing as possible.


Probably if you have a trivial piece of code, you don't need inheritance to help you structure it. But you're right that we need a smaller example than the entire class hierarchy of Pygmusic. What do you think would be the smallest sensible example?

If your objective is only to explain what inheritance is rather than why someone would use it, you could probably write two classes and three methods:

    class A:
        def z(self):
            print "z " + self.y()
        def y(self):
            return "Ay"

    class B(A):
        def y(self):
            return "By"

    A().z()  # prints "z Ay"
    B().z()  # prints "z By"
But I really think that most people will find a program easier to understand if it's not completely abstract like that — if it reflects some kind of intentional process where a programmer was trying to achieve something. But in order to explain inheritance in such a context, you need a program where inheritance makes things better instead of worse. What do you think is the simplest possible piece of code that would do that? I am thinking of a very-much-cut-down version of Pygmusic with only two drawable objects.
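A very-much-cut-down version along those lines might look something like this (a sketch only; the class and method names are loosely modeled on the Pygmusic hierarchy quoted above, not taken from its actual code):

```python
class Visible:
    """Base class: anything that can be drawn on the screen."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def draw(self):
        # Each subclass draws itself differently.
        raise NotImplementedError

class Timer(Visible):
    def draw(self):
        return "strip at (%d, %d) with a racing stripe" % (self.x, self.y)

class NumericHalo(Visible):
    def draw(self):
        return "fading ripple at (%d, %d)" % (self.x, self.y)

# The payoff: the main loop draws everything without knowing concrete types.
scene = [Timer(0, 0), NumericHalo(40, 8)]
for thing in scene:
    print(thing.draw())
```

Here inheritance earns its keep only because the main loop treats both objects uniformly; with a single drawable class it would be pointless.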


Device drivers. You have a base class, Device, which supports, at minimum, read() and write(). Then you subclass to block and character devices, and so on from there. It's a real-world example (Unix does it, although in C) and it's simple enough to explain the concepts.
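Sketched in Python with illustrative names (not the actual Unix interfaces), the idea might look like:

```python
class Device:
    """Base class: every device supports at minimum read() and write()."""
    def read(self, n):
        raise NotImplementedError
    def write(self, data):
        raise NotImplementedError

class CharDevice(Device):
    """A character device streams bytes; here backed by an in-memory buffer."""
    def __init__(self):
        self._buf = b""
    def write(self, data):
        self._buf += data
        return len(data)
    def read(self, n):
        out, self._buf = self._buf[:n], self._buf[n:]
        return out

# Code written against Device works with any subclass:
def copy(src, dst, chunk=4):
    while True:
        data = src.read(chunk)
        if not data:
            break
        dst.write(data)

a, b = CharDevice(), CharDevice()
a.write(b"hello world")
copy(a, b)
print(b.read(100))  # b'hello world'
```

The `copy` function is where the inheritance pays off: it never needs to know which kind of device it's moving bytes between.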


But delegation is a better solution than inheritance for this, i.e. the printer driver class takes a character device instance as a parameter. Delegate the character device's read and write methods to the printer driver's class, make the printer driver do the same interfaces, and done. Now you have something that composes better than inheritance and substitutes anywhere you needed a character device.
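A rough sketch of that composition approach (class names are hypothetical):

```python
class CharDevice:
    """Some concrete character device with read/write."""
    def __init__(self):
        self._buf = b""
    def write(self, data):
        self._buf += data
    def read(self, n):
        out, self._buf = self._buf[:n], self._buf[n:]
        return out

class PrinterDriver:
    """Wraps a character device instead of inheriting from it."""
    def __init__(self, dev):
        self._dev = dev  # delegation: has-a, not is-a

    # Forward the character-device interface to the wrapped instance,
    # so a PrinterDriver substitutes anywhere a CharDevice was needed.
    def write(self, data):
        self._dev.write(data)
    def read(self, n):
        return self._dev.read(n)

    def print_page(self, text):
        # Printer-specific behaviour layered on top.
        self.write(b"\x1b[PAGE]" + text)

printer = PrinterDriver(CharDevice())
printer.print_page(b"hello")
print(printer.read(64))
```

Because the device is passed in as a parameter, the same PrinterDriver composes with any object exposing read/write, which inheritance wouldn't let you swap at runtime.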


irc bots. Or chat bots in general since kids are not familiar with irc nowadays.

It's an amazingly useful teaching tool, since it's real enough to understand and fake enough that you don't care about taking the bits apart and replacing them. Also, it has a lot of points where you can go and improve the design (extract interfaces, do away with inheritance and use delegation, then do away with objects, then end up with a Lisp interpreter, etc.).


> You can’t add code to ducks.

You do if you are creating a piece of software to track ducks.

I don't get the impression that the person who wrote this ever had to seriously teach software development to software developers. Simplistic metaphors are used so frequently for this kind of thing because they prevent the learner from having to ascend more than one conceptual hurdle at a time. The alternative he's proposing (learning inheritance in terms of some obscure drum machine software) would require the learner to spend half their time digesting how this drum machine works, then the rest of their time (assuming they're still awake) figuring out how that relates to inheritance. It isn't helpful.


True, but I still don't think this is a good analogy. When you are learning inheritance you don't know what "x extends y" even means. Using a non-code example obscures what you are talking about.

I'd use an example like this: HPDriver extends PrinterDriver. CanonDriver extends PrinterDriver. EpsonDriver extends PrinterDriver. See, you can make 3 different drivers and share common printer code. That makes sense to me if I'm just learning inheritance. Duck extends animal does not.


I'd object to your example: your example seems to me to be closer to that of instance :: class than subclass :: superclass. PrinterDriver defines a contract; each of HPDriver, CanonDriver etc. will need to fulfill the contract. But I don't see them usefully adding behaviour. The OS would have to already know about HPDriver in order to use its extra functionality - then what's the point in using inheritance at all?


In a lot of OO languages instances rarely have different behaviour to each other.

You are also forgetting that it's possible to have more than one printer using the same driver (i.e. multiple instances of a class of printers). This is not uncommon when you consider network printers.

> The OS would have to already know about HPDriver in order to use its extra functionality

And that isn't true of any other instance in which inheritance is used?


A common pattern is a superclass with "abstract" methods that subclasses then implement. So the `sendCommand(cmd)` function is implemented in HPDriver and CanonDriver and higher level methods in PrinterDriver can use that piece even though PrinterDriver itself has no implementation of that method.
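That pattern might be sketched like this (illustrative names; `send_command` is the hook the subclasses fill in):

```python
class PrinterDriver:
    """Superclass owns the high-level flow; it calls an 'abstract' hook."""
    def print_document(self, doc):
        self.send_command("INIT")
        for line in doc:
            self.send_command("PRINT " + line)
        self.send_command("EJECT")

    def send_command(self, cmd):
        # No implementation here: each subclass must provide one.
        raise NotImplementedError

class HPDriver(PrinterDriver):
    def __init__(self):
        self.log = []
    def send_command(self, cmd):
        self.log.append("HP-PCL: " + cmd)

class CanonDriver(PrinterDriver):
    def __init__(self):
        self.log = []
    def send_command(self, cmd):
        self.log.append("CANON: " + cmd)

hp = HPDriver()
hp.print_document(["page one"])
print(hp.log)  # ['HP-PCL: INIT', 'HP-PCL: PRINT page one', 'HP-PCL: EJECT']
```

`print_document` lives once in the superclass and works for every vendor, which is the sharing the grandparent comment was asking for.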


I don't understand how a driver implementation is an instance. I can imagine the HPDriver class having completely different code than the CanonDriver class.

For a realistic version of this example, consider JDBC driver implementations.


Perhaps they are scanner & printer drivers: the implementation exposes a ScannerDriver interface and a PrinterDriver interface. They subclass a DeviceDriver that implements the low-level I/O. There are other generic and vendor-specific interfaces that could be used here, as well as perhaps some kind of more specific class on top of DeviceDriver like PCLDriver or PostScriptDriver.

class HPDriver extends DeviceDriver implements ScannerDriver, PrinterDriver


Different printer models could have different features (color/B&W, selecting output tray, “print” an outgoing fax, etc.), and you could make each PrinterDriver subclass responsible for generating the appropriate UI code for frobbing those features.


You can explain is-a using examples that actually make sense to model using inheritance, like a set of game objects that have an interface like:

    class GameObject {
     public:
      // Called every frame to update the object's logical state.
      virtual void Update(GameState* state) = 0;

      // Called to draw the object.
      virtual void Draw(GraphicsContext* context) = 0;
    };

    class EnemySpaceship : public GameObject { /* ... */ };
    class SpaceDebris : public GameObject { /* ... */ };
The point is to move the discussion into the practical reasons for introducing inheritance, rather than to make it a pointless ontological exercise of whether a Square is-a Rectangle or vice-versa.


Most of the benefits of OO programming come from abstracting the machine / concerns (eg. MVC) rather than modeling the domain. (eg. Cars / Ducks / etc).

I know it's just an example but it raises several important questions:

Why does the GameObject need to know how to render itself?

Why does the GameObject update the GameState which the GameObject is ostensibly also a part of?

Does updating the GameState directly affect the ability to separate concerns as to whether the GameState being modified is local vs. remote?

When you start modeling domains it leads very easily to situations where you have hardcoded permissions for the Manager class instead of bothering to implement a permissions system. Or you have a CEO class that's a singleton. (Hopefully, your code doesn't have to work at RIM)

The code savings from

  class DeathStar2 : DeathStar {
    public override void FatalFlaw() {}
  }
is going to pale in comparison to

  class DeathStar2 : Object {
    acts_as_travelling_salesman
    has_many :laser_turrets, :max => 1024
    has_many :tie_fighters, :max => 2048
  }
when applied over many, many systems and objects. Modeling domains is to modeling concerns as algebra is to calculus. Most things in life, despite their appearances, are not hierarchical and thus do not fit well when modelled explicitly using a class hierarchy. Showing people how to model a domain is easy but virtually useless; showing people how to model concerns is hard but is where the big payoffs are.


Modeling a domain right is deceptively hard, and to me is one of the primary skills of our profession. Teaching what OO and inheritance mean is not the same thing as modeling a domain.


The 'CEO singleton' - very funny. :)

However, I'm all for domain modeling in the beginning, for a beginner. I learned OO the hard way - all by myself, poring over books. I agree with the OP that a vehicle or duck was not a good model. Having worked with databases and CRMs in the past (before they were "CRMs"), I always dealt with people, organizations, calls, notes, etc. So when I came across an infamous car example, I was always wondering "this has no bearing on anything I would do - why not talk about real objects like people?".

Even before hitting any kind of patterns, it's crucial (in my mind) to understand datatypes, methods, scoping, inheritance, etc. Three simple classes like Person, Address, Organization can go a long way.

In my mind, there's no way a beginner is going to understand patterns first. They have to be shown the basics and they have to be handheld when entering the OO way of thinking. Once those first few steps are taken, it paves the way for more complex examples and patterns.


Code savings isn't a reasonable goal of OOP. Information-hiding (in which you take an arbitrarily-complicated method and abstract it behind a class) is.


When you give people inheritance all they see is hierarchies and when you give them information hiding systems all they see is access control.

One should not be able to find trade secrets by grepping for "private", nor reproduce a company's org chart from a class inheritance graph.


Do you have an example of modeling concerns vs. modeling domains?


Let's say you're modeling a system with Customers and Salesmen, both of which are Persons. Should you put that travelling salesman algorithm into the Salesman class or the Person class? That's what I mean by modeling domains.

By modeling concerns I mean that you should create a RouteFinder class and an adapter that extracts the coordinates from Salesman, Persons, and Customers. Maybe there is a convenience method on the Salesman class that makes it easy for him to travel to his customers but most of the work (the concern) is modelled separately from the domain.

The primary concern of the program is routing, not Salesmen and Customers. It's like how rails/ASP.NET MVC/django concerns itself with making websites, not modeling domains. If you focus on the concerns the domain becomes an implementation detail. (eg. For the person class the coordinate can be found at Person.Address.Latitude)
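A minimal sketch of that separation (all names hypothetical, and with greedy nearest-first ordering standing in for a real routing algorithm):

```python
class Person:
    def __init__(self, address):
        self.address = address  # the address carries the coordinates

class Customer(Person):
    pass

class Salesman(Person):
    pass

def coordinates(person):
    # Adapter: extracts the concern-relevant data from a domain object.
    return (person.address["lat"], person.address["lng"])

class RouteFinder:
    """Models the concern (routing); knows nothing about the domain classes."""
    def order_by_distance(self, start, stops):
        def dist2(a, b):
            return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
        origin = coordinates(start)
        return sorted(stops, key=lambda p: dist2(origin, coordinates(p)))

s = Salesman({"lat": 0, "lng": 0})
near = Customer({"lat": 1, "lng": 1})
far = Customer({"lat": 5, "lng": 5})
route = RouteFinder().order_by_distance(s, [far, near])
print(route[0] is near)  # True
```

The domain classes stay dumb; RouteFinder could route anything the adapter can extract coordinates from, which is the point being made about concerns.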


This is really not a good example as games have (read should have) entity systems. See e.g. these two links for an introduction. http://t-machine.org/index.php/2007/09/03/entity-systems-are... http://www.purplepwny.com/blog/?p=215

There should be no Draw() in your game object (read: entity) base class. Not every entity is rendered. You should instead have a Renderable component. And if it happens that your special snowflake has a Renderable component, then you can render it on screen. E.g. AI pathing nodes would not usually have one, though you could add one when debugging them.

In your example you end up with Movable extending GameObject, Camera extending GameObject and then you have no way of combining these two behaviours.
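The component approach the links describe could be sketched as (heavily simplified; real entity systems usually keep components in per-system arrays rather than on the entity):

```python
class Entity:
    """An entity is just a bag of components, not a node in a hierarchy."""
    def __init__(self, **components):
        self.components = components

class Renderable:
    def __init__(self, sprite):
        self.sprite = sprite

class Movable:
    def __init__(self, vx, vy):
        self.vx, self.vy = vx, vy

# A camera can move without being drawable; debris can be both at once,
# which a Movable-extends-GameObject hierarchy can't express.
camera = Entity(movable=Movable(1, 0))
debris = Entity(movable=Movable(0, -1), renderable=Renderable("rock.png"))

def render_system(entities):
    # Systems pick out only the entities that carry the relevant component.
    return [e.components["renderable"].sprite
            for e in entities if "renderable" in e.components]

print(render_system([camera, debris]))  # ['rock.png']
```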


Actually, this makes more sense as a trait or interface than as a superclass, especially as you're not even implementing any behavior.

It's the same way for squares and rectangles. Square does the Rectangle and Quadrilateral interface, Rectangle does the Rectangle and Quadrilateral interfaces, and it doesn't matter. If you want a subtype/supertype relationship, then you have to diverge from what people intuitively think about classes. Liskov's Substitution Principle says that subtypes must be completely substitutable anywhere a supertype is used, which means a rectangle isa square because a square does setX, getX, and getY, whereas a rectangle does setX, setY, getX, and getY. (This doesn't make much sense because people use inheritance as "copy and paste this crap from the superclass into my subclass" rather than to setup substitutable subtype/supertype relationships.)

Interfaces avoid that problem because there is no supertype or subtype, only equal types that agree to have the same interfaces. Then it doesn't matter if a square isa rectangle or not.


My nitpick about EnemySpaceship: whether an entity is enemy or not should not be part of its type but part of its state and/or logic.

I know it's "just an example" but to me this thread is about whether arguably incorrect examples are bad practice or not.


Is "Spaceship extends GameObject" really that different than "Duck extends Animal"?


When people say "Duck extends Animal" is bad they typically mean that it's trying to model real-world relationships rather than computational relationships. The example is meant to demonstrate that inheritance should be based on what inheritance means for your program, to make your codebase "nicer", and not to simply categorise your objects.

"Spaceship extends GameObject" exists mostly to facilitate polymorphism and code reuse - virtual function resolution, probably, along with all of the non-virtual data and behaviour associated with every GameObject. A "GameObject" isn't a real thing, it's just a convenience.

Out of context, "Duck extends Animal" could obviously do the same things for the same reasons, but that's not really the point of the example. It's implied that the relationship was used because real ducks are real animals, not because Duck inheriting from Animal is a good idea.


"It's implied that the relationship was used because real ducks are real animals, not because Duck inheriting from Animal is a good idea."

I just do not understand this objection. If you wrote Sim Farm, it's quite possible you'd want Duck to inherit from Bird and Cow and Pig to inherit from Mammal, and both Bird and Mammal to inherit from Animal.

In the course of caring for your farm, you have to fix tractors, grow crops, and care for animals.

All Animals must be fed and will starve if they don't. Why would you code "needs food" on every individual animal class? Birds can get avian flu, and Mammals can get rabies; the pigs can even get rabies from the dogs. Why would you code "can get rabies" on Dog and Pig?
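The factoring being argued for might be sketched like this (a toy version of the hypothetical Sim Farm, not any real codebase):

```python
class Animal:
    def __init__(self):
        self.fed = False
        self.diseases = set()
    def feed(self):
        self.fed = True  # written once, shared by every animal

class Bird(Animal):
    def catch_avian_flu(self):
        self.diseases.add("avian flu")

class Mammal(Animal):
    def catch_rabies(self):
        self.diseases.add("rabies")

class Duck(Bird): pass
class Pig(Mammal): pass
class Dog(Mammal): pass

pig = Pig()
pig.catch_rabies()   # inherited from Mammal, not re-coded on Pig
pig.feed()           # inherited from Animal
print(pig.diseases)  # {'rabies'}
```

"Needs food" lives on Animal and "can get rabies" on Mammal, exactly as the comment suggests.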

Maybe you don't always want to use Sim Farm as your example for teaching OO, but a lot of working code does model real-world items. Animals and cars and such are no worse than files as objects to model, and have the advantage that non-programmers already know something about them. Hence they can focus on the OO concepts and not on the domain logic.


I'll buy that.

But don't try to convince any OO purists that inheritance "exists mostly to facilitate polymorphism and code reuse". :-)


Yes, because drawing a bunch of game objects on a screen is something a real program might actually want to do, and therefore opens up opportunities to talk about real-world design tradeoffs.

Making a program print "quack" when you call the "speak()" method is something that only happens in textbooks, and since there's no reason to actually do that there's no easy way to discuss the alternatives and why polymorphism is a win.


So what you're saying is that Farmville is the ultimate result of reading OO textbooks, and that if they had spent more time on a realistic model such as spaceships and gameobjects that a game such as Knights of the Old Republic could be produced by Zynga?


That has nothing to do with what haberman is saying. Most people know games, and a simple game model can be expressed in inheritance terms in such a way that's illustrative and less contrived than conventional tutorial metaphors. And I agree with that.


>rather than to make it a pointless ontological exercise of whether a Square is-a Rectangle or vice-versa

it isn't pointless. It is the key point of the design : http://www.objectmentor.com/resources/articles/lsp.pdf


It is pointless if you make it an ontological exercise. If you make it a question of substitutability (as your link does), then I agree that you're doing it the right way.

The question shouldn't be: "is a Square a special kind of Rectangle, in a pure/platonic/logical sense?" The question should be "can a Square be treated as if it were a Rectangle." The answer to the second question could depend on the program and what it wants to do! The first is a rathole that accomplishes nothing.


It is a key point in design AND it usually feels like a completely pointless and annoying waste of time.

Humans have such a marvelous natural facility for attaching a meaning to a sentence that we don't feel like loopholes and contradictions within language are meaningful. Ask the average person "who shaves the barber?" in a description of Russell's paradox and they'll give you a "what was the problem?" look rather than any answer at all. See http://en.wikipedia.org/wiki/Barber_paradox

So squares versus rectangles doesn't seem like even a slightly important problem until you have a variation of it staring out at you from your debugger.

And the other side of this is... since teasing out these loopholes is really hard and so many exist in potentia, it might really be just as well to leave things confused and correct them as they come up. If you have a square class that's not a subclass of a rectangle, and your rectangle class has an isSquare method and a toSquare method, well, your code will be screwy but you'll have saved thousands of dollars in training fees...


Please do tell of the occasion you found squares versus rects etc. staring at you out of the debugger. Because the solution to this problem depends on the behaviour you're trying to get, rather than some absolute solution, so philosophical argument about it always seemed pointless to me.

If the type is mutable, then per-instance information shouldn't be part of the class; have an isSquare calculated property or whatever. If the types are immutable, then it's OK to embed it in the class hierarchy. If you must have mutable types but you still want to embed it in a polymorphic hierarchy (most usually because of dynamic dispatch reasons, rather than if-casing logic on isSquare), then add a layer of indirection: have SquareBehaviour vs RectBehaviour, as needed. Whatever your solution demands, there's a way of doing it. What the solution doesn't need - nor even cares about - is the philosophical argument.
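For the mutable case, the "calculated property" option mentioned here might look like (hypothetical minimal class):

```python
class Rectangle:
    def __init__(self, w, h):
        self.w, self.h = w, h

    @property
    def is_square(self):
        # Squareness is derived per-instance state, not a subclass:
        # mutation can change it, which a Square subclass couldn't survive.
        return self.w == self.h

r = Rectangle(3, 3)
print(r.is_square)  # True
r.w = 5
print(r.is_square)  # False
```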


*Whatever your solution demands, there's a way of doing it. What the solution doesn't need - nor even cares about - is the philosophical argument.*

I think the disagreement here is that you seem to believe philosophy is engaged in some activity different from the kind of "how should X type relate to Y supertype and Z characteristic" questions that are the meat of object-oriented design.

But really, what is being done here is ontology. Ontology isn't really fancier than this, and the work here isn't more clear-cut than what philosophers try to muddle out.

Ordinary philosophy has been a kind of society-wide clarification of definitions, just as a design is an organization-wide clarification of definitions. Take a look at the actual text of The Critique of Pure Reason at some point. While it might have been written with various arguments in mind, most of the actual text is a long, long discussion of what objects belong in what category - i.e., nothing more "airy-fairy" than most design discussions.

Ordinary philosophy gets less attention than the elaborate debates around the "edges" of definitions. But this also happens with design discussions.


I agree with what you say, oddly enough. Perhaps my beef is actually with all the amateur philosophers who insist on a unique, canonical ontology, rather than the fact of the matter, that there are always multiple ontologies to choose from. So they argue about things like square and rectangle re subtyping, but there is more than one way to validly slice the pie, so the argument is pointless.


This is in fact more or less the example I suggest using in the article, except that the objects don't have a separate update method.


I agree without caveats.

"Car extends vehicle" and "duck extends bird" are great examples for the type of lesson they're trying to teach. I'm surprised people have time in their lives to worry about this crap.

edit: Also, what's up with the title for this submission?


If it teaches people to think of object-oriented programming as world-modeling, it is not a great lesson. Polymorphism is an abstraction technique whose goal is substitutability. "Car extends Vehicle" is useful if other kinds of Vehicles can be substituted for cars.

If you teach someone that there is value in making Car extend Vehicle just because that expresses a real-world relationship, you are teaching exactly the wrong lesson and your students will create overly complex inheritance hierarchies.


> Polymorphism is an abstraction technique whose goal is substitutability

If you said that to students in lesson #1 of an OOP course no one will understand what you're talking about.

Sure maybe the Duck extends Animal example isn't a realistic one, but you've got to give students a chance to get their heads around the very basics first, and they can at least understand it by using simple real-life things.


>If you teach someone that there is value in making Car extend Vehicle just because that expresses a real-world relationship, you are teaching exactly the wrong lesson and your students will create overly complex inheritance hierarchies.

Only if your students are idiot drones. I think the majority of us went through these same lessons and came out just fine. Stupid is as stupid does (why do I keep saying that so much?). People who are good programmers can take a simplistic lesson like that and grasp the overall concepts, while people destined for terrible careers simply won't. No point bringing down the rest of us with them.


Yes, you're right, it's only a great lesson if it's taught properly. I mean, really? Doesn't that seem like hair-splitting to you?

In my opinion, dickering about this is the equivalent of, "Yeah, those two horizontal parallel lines are a great symbol for equality as far as it goes, but really students need to understand the difference between equivalence and implication!" It's an equals sign.

Same deal here. It's an analogy. The fact that analogies must be taught properly to be useful is a tautology that doesn't even bear mentioning, yet here we are.


I think the point is more that the analogy is so misleading, that it hurts the student's understanding.

A similar bad example would be using familiar round objects to teach equivalence: there are 10 apples, and 10 oranges, so apples == oranges.

Cars don't extend vehicles, and ducks don't extend birds. Just because we can organize these physical things into conceptual hierarchies based on their functionality doesn't mean we can use them to understand polymorphism.


There is a direct mapping between our natural construction of ontologies (we are pattern-detecting creatures, we naturally see commonalities across the birds) and why object orientation works. Cars are a subset of vehicles in most peoples' minds; and Buttons are a subset of Controls in most UI programmers' minds. And the topologies of these ontologies are similar enough, by design, that you can indeed teach valuable lessons using vehicles and birds.

As to polymorphism, that comes for free from understanding language. Most people will understand a "No vehicles allowed" sign to prohibit cars but not people. What else is that but an understanding of polymorphism in evaluating a predicate?


Object-orientation does not work because cars are a subset of vehicles. Please do not teach anyone that.

Object-orientation works because in some programs, cars can be treated as if they were a vehicle, without knowing what kind of vehicle it is. This is the principle of substitutability, and it's the actual reason that inheritance hierarchies work. But in other programs, cars and other vehicles may not have much in common and should not be part of an inheritance hierarchy!

The real-world ontology is not a reason in and of itself to create an inheritance relationship.
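To make that concrete, here's a minimal Java sketch (all names here are invented for illustration, not from the article): tollFor() works with any Vehicle without knowing which kind it has. That substitutability, not the real-world taxonomy, is what justifies the hierarchy; the Ferry shows where aping the ontology stops paying off.

```java
interface Vehicle {
    int axleCount();
}

class Car implements Vehicle {
    public int axleCount() { return 2; }
}

// A ferry is a "vehicle" in the real-world ontology, but the shared
// interface buys us nothing useful here -- a hint the hierarchy is wrong.
class Ferry implements Vehicle {
    public int axleCount() { return 0; }
}

class TollBooth {
    // Substitutability: this code never needs to know which Vehicle it got.
    static int tollFor(Vehicle v) {
        return 5 * v.axleCount();
    }
}
```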


I think you missed my point. The real-world ontology doesn't exist; it's only in our heads. The mechanisms in our heads for these ontologies are what OO uses and why it works for humans.

The point is the commonality of mechanism, not the commonality of any specific ontology. And when you're teaching, that's enough for one lesson.


No, it's not misleading. It is insufficient to capture the totality of what is meant by object-oriented programming and polymorphism. But it is not misleading. It is simply a very narrowly applicable -- but very useful, when leveraged properly -- analogy.


The thing is, people are constantly misunderstanding this point (and attempting to mislead others about it). For example: http://news.ycombinator.com/item?id=2914868


Polymorphism is an abstraction technique whose goal is substitutability. "Car extends Vehicle" is useful if other kinds of Vehicles can be substituted for cars.

"That's great", thinks the student, "I'll keep that in mind on the off chance I ever need to write code for an automatic carwash..."

Honestly I think these examples are fine as far as they go, but they should be immediately followed up with nontrivial examples demonstrating how you might actually use this in programming problems which don't fall into the category "simulation of real-world objects". Otherwise the student can easily dismiss the whole idea as esoteric.


I worry about it because I learned inheritance from using it. I wish I had learned from a teacher. Instead I made hack work-arounds for something that I didn't realize was a common feature of OO languages. Until one day I did some poking and said "oh, THAT'S what extends and implements is for!"


His point is that you can't add code to ducks... full stop. If you want to talk about "software [that] tracks ducks", then do so, but by talking about physical ducks one further encourages the ancient fallacy that object orientation is about physical modeling and should be all about physical objects and their physical relationships. What was said was what was meant; you can not add code to ducks. They shouldn't be in the tutorial at all, just as Chevrolets shouldn't be.


You can't add code to ducks any more than you can add code to sockets. Let's assume for a moment the hypothetical student in this scenario is bright enough to tell the difference between literally describing a thing, and expressing an abstraction of it.

So basically then, the thing we're all supposedly getting in a kerfuffle about is that the example program in a hypothetical textbook that introduces the concept of inheritance is a cheap duck simulator. Ducks are a corny example, so I think it's fair to dismiss it as bland. But talking about physical objects is a great way to introduce inheritance to a novice because inheritance is fundamentally about hierarchical abstraction (code simplicity and reuse, incidentally, are side benefits enjoyed by all methods of abstraction). And what easier way to talk about hierarchical abstraction than the most accessible mental model already possessed by anyone who made it out of elementary school – the animal kingdom. The easiest pedagogical metaphors to grasp for students without special knowledge are those that draw similarities to what they already know.

OO is about a lot of other things too. But concepts such as Dependency Injection, which was the meat of the grandfather rant, are IMHO more an outgrowth of dealing with the limitations of OO design than a topic so fundamental to objects and classes that they need to be discussed when you're still talking about fundamentals like these. Are they good things to know about? Definitely. Will a student's mind be warped by basic inheritance example that imitates a taxonomy he already understands? As long as the whole book isn't about duck modeling, probably not, and even then I'm not sure you couldn't teach <IQuackable> FactoryFactories with a little imagination.


If someone is actually writing about a bird-flock simulator or something, I think Duck is a perfectly fine example. My beef is with Duck being presented in the abstract, disembodied.

> concepts such as Dependency Injection, which was the meat of the grandfather rant, are IMHO more an outgrowth of dealing with the limitations of OO design...

I don't think so. The inflexibility you can loosen with DI exists in all kinds of non-OO and even non-imperative programs as well. In a sense, the whole point of object-orientation is that it gives you a handle on that kind of thing: the thing you depend on is an object reified at run-time, which you can arrange to have passed in to you instead of extracted from a global namespace, and which can be replaced with some other object with different behavior, not an address you are jumping to that's hard-coded into your compiled jump instruction as an immediate argument.


Interesting. I suppose when I think of DI I usually think of Java. Do you know of a good example of the DI pattern being used in functional languages?


map and reduce. DI is so prevalent in functional programming that it's pretty much taken for granted.


Well, take this Clojure tutorial, for instance, comparing a simple DI implementation in Java with a single function in Clojure. The latter bears almost no resemblance to the former, other than achieving the desired effect. Is it fair to say that the user of a closure is implementing the Dependency Injection pattern, or is it a positive side effect of the fundamental properties of first class functions and lexical scope?

http://vimeo.com/10368175


IMHO it's largely a side effect of first class functions. In Java in order to pass a function you need to pass an object that has the function, or an interface with the function, so it leads to a lot of line noise and thus people don't do it much. DI doesn't look impressive in a functional language because it's so well supported, but if you look at something like a global accumulator in a functional language you'll need a monad because you need state.

eg in Java it's easier to write:

  int accum = 0;
  for(int i : collection){
    accum +=i;
  }
rather than

  collection.reduce(new IReduce<Integer>() {
    public int reduce(int accum, int i) {
      return accum + i;
    }
  });
I'm a bit rusty on Java so the syntax may be off but you get the idea, whereas in a functional language (F#) that code is reduced to:

  collection
  |> Seq.reduce (+)
Or in an imperative language that supports function passing (C#)

  collection.Aggregate((a, b) => a + b)


>you can not add code to ducks //

[I'm in a bit over my head but] Sure you can. This is just the sort of thing that happens in such tutorials - "now let's make all our ducks say moo if they're speaking but on land at the time".

When I learnt OO we used frogs, in Smalltalk, as our analogy. I don't think I ever once thought that the "frog" was in some way limited to things frogs could do, only that it was representing a series of things with a particular relationship wherein those things could have different characteristics and behaviours.


So domain modeling has no place in software? The burden of proof for such a claim is a bit higher than "you can't add code to ducks".

That ducks are physical is irrelevant in any case. You can't add code to bank accounts or insurance policies or leap years either.


"You can't add code to payrolls or insurance policies or bank accounts either."

No... you can't. Exactly. OO isn't about real-world-objects, it's still about code-objects, and mixing the two up causes serious category errors.

Physical modeling is the wrong way to do it, it's the wrong way to teach it, it's the wrong way to conceptualize it, it should not be used in tutorials.

Domain modeling != physical modeling, and conflating the two is exactly the problem being addressed. You know you're doing physical modeling when you have a need for an iterator class, but you don't think you can use it, because what on earth is an "iterator"? You can't hold one of those in your hand.

The map is not the territory!


I don't get what you're saying. What does it matter whether software is modeling a physical system or, say, a business one? Either way you're drawing on concepts from some domain that exists prior to the software you're building. Either way you're trying to find representations of those concepts suitable for computing what you need to compute. The art of domain modeling is making those representations intelligible in domain terms. This has nothing to do with physicality. Of Evans' canonical examples in Domain Driven Design, one has to do with paint-mixing and another with accounting. The techniques are no different.


"The art of domain modeling is making those representations intelligible in domain terms. This has nothing to do with physicality."

Yes, exactly. I do love how often people cite back my own points at me as if they are disagreeing.

I think perhaps people have forgotten the origins of OO. A lot of people were taught that the right way to do OO is to match the physical model of the domain they are trying to program for. It's great that you and so many other people have either thoroughly internalized how untrue that is, or were never taught that model in the first place. But I for one was, as were a great deal of the rest of my generation who learned about OO in the 90s, and it's actually somewhat important that we finish stomping the idea out that physical mapping is at all an important aspect of OO.

And the best place to stomp it out of is in the Standard OO Tutorial (TM).

If you and all the excited downmodders never learned that bad idea in the first place, great! Count yourselves amongst the lucky people who probably also never had to be broken of things like line numbering, or two-letter variable names. (If you think I'm joking, I only wish I could show you some of the first-C++-class homework I graded back in 2000. I'm not joking.) In the meantime, this is a real thing that is still floating around in real curricula today, and you may join me in boggling at this fact, but it doesn't change its truth.

It actually stuns me that some of the employees we've hired who graduated as recently as last year appear to have received the exact same OO education that I did in 1999, which was already pretty creaky then.

The map is not the territory.


"I do love how often people cite back my own points at me as if they are disagreeing."

For what it's worth, when I read that I found it unpleasant in more ways than one and it killed my interest in further discussion. Perhaps that was for the best, as I didn't seem to be making any progress in understanding you.


A lot of OO was, in fact, influenced by the task of writing programs for simulations. It's conceivable someone might actually want to develop animal/bird/duck classes (e.g., SimAnimals). http://en.wikipedia.org/wiki/Simula


Yes, I know. That's where the idea came from. Nevertheless, it's a terrible introduction to OO. It promotes very wrong conceptualizations and horrible code.

And odds are, even if that is the type of code you're writing, you still shouldn't be trying to have a one-to-one mapping between physical things and classes and/or instances. It just isn't a very good way to work.

Just because it was the first thing that was done with OO doesn't mean that it was actually a good idea. (Actually, the whole idea that the first person to implement a technology or methodology is forever the one and only true authority and the one and only true definition is a bizarre one anyhow; of all the people who implement a particular methodology, isn't it a bit much to expect the very first person to get every detail correct? You can see this in a lot of purist debates in our discipline.)


There are lots of ways to structure programs, and talk about how programs are structured. Some are good for some programs, others are good for others.

Knowing when to use which metamodel is an art that is only learned after long experience.

A lot of this discussion about the inherent superiority of spaceships to ducks seems to reflect the familiarity bias of the participants as much as anything else. There are certainly interesting points being made here, but maybe game experience isn't as ubiquitous as some seem to think.

On the other hand, maybe game developers could learn a thing or two about ducks. In Minecraft, for instance, they cluck like chickens.


Simula67, the version most identify as Simula proper, actually evolved concurrently with Smalltalk.

Rick DeNatale's memoir has a great article debunking some early OO myths -- http://talklikeaduck.denhaven2.com/2006/07/29/about-me


Personally I'd add the code to "TrackedObject" rather than "Duck" just in case we ever wanted to track geese or cormorants.

I don't get the impression that the person who wrote this ever had to seriously teach software development to software developers. Simplistic metaphors are used so frequently for this kind of thing because it prevents the learner from having to ascend more than one conceptual hurdle at a time.

Perhaps, though I think his real point is that often folks teaching software development will only put out that first hurdle and forget about the rest. Teach 'em that "ducks are animals" and think they understand inheritance, when in fact these simple physical analogies are a bad way of understanding how and why you might use inheritance in the real world.

I think it's a phenomenon quite common across the whole OO pedagogy, in fact. If your only mental examples of "objects" are ducks and bicycles then you're going to get pretty confused when you escape simple example land and start trying to understand the difference between an NSView and an NSViewController.


I agree, but he gets to a much simpler, teachable example near the end, which is representing shapes. Mathematically, a circle is an ellipse, and an ellipse is a polygon. It's much cleaner than the biology or work-place inspired taxonomy. A simple drawing program can feel less contrived, but, then again, so can simple games that use an object hierarchy unrelated to polygons.

Fundamentally, though, I do agree with your point. You don't use completely realistic examples to teach concepts for the fist time. All of the incidental material that is necessary to grok the example clouds the new concepts. Sometimes this means the examples will be pedagogical toys, and that's okay. The benefit of the already known taxonomies is that the student has less to learn. The new concept is more obvious. If the worry is that the example is too much like a toy, then address that in the second example. In general, I think people put too much emphasis on first exposure to concepts. Education is iteration. If it doesn't take the first time, there are more tries coming up.


> I agree, but he gets to a much simpler, teachable example near the end, which is representing shapes. Mathematically, a circle is an ellipse, and an ellipse is a polygon.

Well, the circle-ellipse problem complicates that example a little bit: http://en.wikipedia.org/wiki/Circle-ellipse_problem.


The circle-ellipse problem almost never arises in real software, in my experience. It's not in the same league as the Fragile Base Class problem, or the Diamond Inheritance Problem, or the Schema Upgrade Problem, or the Ravioli Code Problem, or the problem of poor performance caused by too many layers of abstraction, or the Big Ball Of Mud Anti-Pattern. It's essentially a thought experiment.

What does it teach us? Well, one lesson is that you can't design good inheritance hierarchies in your program simply by aping ontological relationships in a Platonic Ideal World, which is the main point of my article. Another lesson is that sometimes you'll discover that your program is inadvertently violating LSP because of some nonobvious interaction, and that this is a bug, and you need to fix it. A third lesson is that mutability is tricky, in particular with respect to the Liskov subtyping relation; this same problem comes up in non-OO contexts as well, if you have mutability and any kind of subtyping relation. (OCaml polymorphic variants, for example, create a subtyping relation all by themselves, even without OCaml's OO features, and that's enough to create the problem.)


I disagree with the arguments in general and find that modeling real world, common items is much more illuminating from a teaching/learning point of view. It would be a ridiculously long post to explain why, but here is an example picking one of the author's bullet points: "Penguins don’t implement the fly method that can be found in birds."

Should bird even have a fly method? Only if all birds fly. And the application will have penguins, so the answer is "no". This doesn't make the Duck extends Bird example a bad thing though. It points out that if you do shallow domain analysis, you will end up with a poor OO design where you shoehorn things like "fly" into penguins for example.

If some birds fly and some don't, some swim and some don't, some burrow and some don't, etc. you might implement this with "mix in" interfaces like "Flyer", "Swimmer", "Burrower", etc. This is a good lesson in how one can distribute different behaviors across a class hierarchy without polluting the common base interface of the hierarchy (i.e. without forcing all birds to fly). However, this burns the behavior into the concrete classes, prevents things like modeling the fact that baby (or dead) birds cannot fly/swim/burrow, and does not allow for sharing different fly/swim/burrow implementations across bird implementations (in languages like Java anyway). In other words, it shows why inheritance can be a good choice, and also a bad choice.

A common solution to this would be to use two hierarchies - one is the bird hierarchy that all birds implement (with behaviors like "preen feathers"). The other is the behavior hierarchy that all behaviors implement. Birds "contain" a list of behaviors. Code that operates on birds knows the bird interface has methods that make sense for all birds. Code that operates on behaviors like fly/swim/burrow would have behaviors passed in, or perhaps ask the animal in question "do you know how to fly?" and if yes, ask for its fly behavior. There are many variations on this, but the point is that there is a separation of concerns: bird concerns and behavior concerns. This helps teach the trade-off between inheritance and containment (e.g. behaviors not burned in through inheritance can be added/removed over time, for example, when a baby bird learns to fly).
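A rough Java sketch of that two-hierarchy arrangement (all names here -- Bird, Behavior, Flying, and so on -- are invented for illustration). Because behaviors are contained rather than inherited, a bird can gain or lose them at runtime, which inheritance can't express:

```java
import java.util.HashMap;
import java.util.Map;

// The behavior hierarchy: one interface all behaviors implement.
interface Behavior {
    String perform();
}

class Flying implements Behavior {
    public String perform() { return "flap flap"; }
}

class Swimming implements Behavior {
    public String perform() { return "paddle"; }
}

// The bird hierarchy: common bird concerns live here; behaviors are
// contained, so they can be added or removed over time (a chick that
// learns to fly just gains a behavior).
class Bird {
    private final Map<String, Behavior> behaviors = new HashMap<>();

    void learn(String name, Behavior b) { behaviors.put(name, b); }
    void forget(String name) { behaviors.remove(name); }

    boolean canDo(String name) { return behaviors.containsKey(name); }

    String doBehavior(String name) {
        Behavior b = behaviors.get(name);
        if (b == null) throw new IllegalStateException("can't " + name);
        return b.perform();
    }

    // Methods that make sense for all birds stay on Bird itself.
    String preenFeathers() { return "preening"; }
}
```

A penguin is then just a Bird that never learned "fly", with no shoehorned fly method anywhere.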

The point though is that this kind of example (Duck extends Bird) helps one understand how to model real world items so they match the domain, to learn about separation of concerns, learn the pros/cons of inheritance/containment, design patterns, and perhaps most importantly, do so using concepts most people will more readily understand (cars or birds or people).


Indeed.

The example with Birds, Ducks, and Penguins is much more colorful and fun than the example with GUI widgets. It sticks. Students will remember ducks & penguins, but fall asleep before you even finish describing that boring GUI system.

Plus, nothing prevents you from using the same fun example to illustrate the second point. E.g. you can show that while a penguin is a bird in the real world, in your application Penguin might actually inherit from Pinniped, because it behaves like one (and it's "behaves-like", not "is-a", that we care about).


If a CS student falls asleep when you talk about GUI systems, they're not in the right class.

I didn't need fluffy birds and colorful metaphors to get into this passion of mine.

The audience doesn't deserve to be patronized like that.


Way back when I was first learning OOP, the typical "car extends vehicle" examples caused me not to see the practical point of OOP for a long time. I still resist using object-oriented approaches except in cases where the code demands it. (A feature, not a bug?) And, these days, if I do use OOP in places in the code, it's for encapsulation -- which is only a style consideration -- so the "car extends vehicle" example still doesn't apply.

I think a simple example from GUI programming would do much better. How about window classes? I did something like this recently:

    class Window {
        public method show()
        public method title(newTitle)
        public method close()
    }
Easy enough. What if I want a modal window though? Almost everything in it would be the same as a regular window, except I might need to do some things slightly different for show()ing and close()ing the window. Aha! A perfect example of an appropriate time to create a subclass:

    public class ModalWindow extends Window {
        public method show()
        public method close()
    }
Programming tends to attract practical individuals. I think teaching "car extends vehicle" then is usually going to cause one of two results: either a student that then uses that approach all the time, even in cases where it's not warranted, because they don't understand when it's appropriate and when it's not; or a pragmatic student that resists using it because the given examples don't apply to anything that they're actually going to have any chance of working on.


I don't think so. In a windowing library you would do better to have the IWindow interface and have both Window and ModalWindow implement IWindow. Decorator then is appropriate so that ModalWindow can wrap Window and add the desired functionality. I would not subclass Window in this case. This leaves Window and ModalWindow totally substitutable and totally decoupled. If you extend you couple Window and ModalWindow. Less coupling will likely serve you better.
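Roughly, in Java-ish terms (the method bodies here are invented for illustration; IWindow, Window, and ModalWindow are as described above):

```java
interface IWindow {
    String show();
    String close();
}

class Window implements IWindow {
    public String show()  { return "shown"; }
    public String close() { return "closed"; }
}

// ModalWindow wraps any IWindow instead of extending Window, so the two
// stay substitutable and decoupled.
class ModalWindow implements IWindow {
    private final IWindow inner;

    ModalWindow(IWindow inner) { this.inner = inner; }

    public String show() {
        // grab input focus, dim the background, etc., then delegate
        return "modal " + inner.show();
    }

    public String close() {
        // release focus before delegating
        return "modal " + inner.close();
    }
}
```

Any code written against IWindow accepts either one, and ModalWindow can decorate any future IWindow implementation, not just Window.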

I eschew inheritance except for when two objects or libraries are very much the same thing but the implementations are different. One example: when I might want the option of using two different PDF libraries.

Further, there's a case where a number of objects share some common methods and you may decide to implement an AbstractWindow where those methods can live. But, you may find that Aspect programming works better in that case.

Generally, and perhaps surprisingly you shouldn't use inheritance in most cases. But inheritance is nonetheless a very important part of OOP.


Why would you want inheritance for different PDF libraries? This seems like interface to me - the DOM interfaces are an appropriate analogy.


Quite right. I didn't express that very well. An interface is appropriate with implementations for each separate library.


I think a simple example from GUI programming would do much better. How about window classes?

It's funny, because OOP came into my sphere of awareness around the same time that desktop software was beginning to use it for extensible GUIs. I'm talking about Turbo Pascal's Turbo Vision, and whatever Turbo C++ had.

That actually slowed down my comprehension of OOP. I thought OOP was for GUI work. OOP added so much complexity to the GUI code, and the GUI code added so much complexity to the OOP concepts, that it actually set me back by quite a bit! :-)

Mind you, it didn't help that, just as I was grasping crucial OOP concepts (e.g. a pointer to base class can point to an instance of child class), my instructor told me that was wrong and that I didn't understand OOP...


Actually, you're right. I started with OOP around the same time (sounds like), and I remember having trouble grasping the benefits of OOP then because, like you say, it was much more complex all the way 'round. I was used to just calling functions which would return a struct and then passing that struct to other functions, which was straightforward and easy to keep track of.

> ...my instructor told me that was wrong and that I didn't understand OOP...

Ugh.

I wonder how it should be taught then? Looking at the comments in this thread, it looks like a lot of people are simply choosing to use it in ways that make sense to them. Is there a universally-agreed-upon use case for OOP? If not, is there anything wrong with just teaching it as one of many tools, and letting students sort it out on their own?

I've tutored a few kids on programming. OOP has been my least favorite part every time. I honestly have no idea how to approach it as an occasional instructor.


Yeah, I guess if I learned OOP with Java I'd hate it too. I look at phrases like "public class ModalWindow extends Window" and am very thankful I started with Python (and have learned Java where necessary, mostly to support Clojure).


That was pseudocode, not Java. This is a discussion of concepts, not languages. I don't participate in discussions about languages because they are pointless.


Convincing pseudocode! It looks like Java to a layman!


The only situation where I could see not wanting to hear the car extends vehicle example of inheritance is in an interview where you want the candidate to give a practical example of inheritance. I see nothing wrong with using an example even a child could understand as a first stab at explaining inheritance.

Fairly annoying how quickly a mostly useless rant such as this has jumped to the top of HN.


He asks for a simpler example. How about a collections API? Start with arrays, then linked lists, then functions like contains and filter, which need an iterable interface; then different types of linked list; then add hashmap, where you iterate keys; then different types of map, e.g. linked map. This is all data structures 101 material anyway! After this primer, a simple GUI toolkit isn't as far out of reach.
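For instance, a contains() written once against an iterable interface already shows the payoff (a sketch in Java; the class name is invented): the same code works for array-backed lists, linked lists, and map key sets alike.

```java
class Collections101 {
    // Works for anything iterable -- that's the whole point of the
    // abstraction: one implementation, many backing data structures.
    static <T> boolean contains(Iterable<T> items, T target) {
        for (T item : items) {
            if (item.equals(target)) return true;
        }
        return false;
    }
}
```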


I think the best introduction to object oriented design is shown in the first chapter of Martin Fowler's book "Refactoring". He gives the example of a DVD rental shop that has to calculate the rental fee based on the type of DVD:

- A New Release is $3 per day

- A Children's DVD is $1.50 for three days, then $1.50 per day after that

- A Standard DVD is $2 for two days, then $1.50 per day after that

http://www.amazon.com/Refactoring-Improving-Design-Existing-...
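For reference, here are those pricing rules sketched polymorphically in Java (class and method names are invented here, not Fowler's actual code): each price type owns its own charging rule, so adding a new type means adding a class, not editing a conditional.

```java
abstract class Price {
    abstract double charge(int days);
}

// $3 per day
class NewRelease extends Price {
    double charge(int days) { return 3.0 * days; }
}

// $1.50 for the first three days, then $1.50 per extra day
class Childrens extends Price {
    double charge(int days) {
        return 1.5 + Math.max(0, days - 3) * 1.5;
    }
}

// $2 for the first two days, then $1.50 per extra day
class Standard extends Price {
    double charge(int days) {
        return 2.0 + Math.max(0, days - 2) * 1.5;
    }
}
```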


The rest of that book, unfortunately, is stuff like: "To move code from one file to another, follow these simple steps. Step 1: Delete the code from the source file. Step 2: Add that code to the destination file. Step 3: Update references to the code in (1) to point to (2). Step 4: Recompile your project and fix the parts that don't compile."

What insight!


I've been working on writing a programming book for quite some time now, and I can tell you that coming up with good examples is really hard.

For me, a good example meets as many as possible of the following demands (in no particular order).

An example should

* demonstrate the feature or concept under discussion

* be short - shouldn't be more than half a page to a page

* solve a problem that the reader can easily understand

* solve an interesting problem

* not use unrelated features of the language or library that haven't been introduced yet

* should avoid complexity unrelated to the feature or concept under discussion

* should be as close as possible to what one would use in "serious" (ie non-teaching) programming

I found that I spent about as much time searching for good examples as actually writing text or code for the book. I'm pretty happy with some of the examples (for example the chapter on grammars parses JSON, which is rather real-world, but not too complicated); some of them still fail most of the demands given above.

(If you're curious, grab the latest PDF from https://github.com/perl6/book/downloads -- it's still very much a work in progress).


One of the first examples of OOP I encountered was the old 'shape' hierarchy demonstration: 'shape' is a base class. 'rectangle' is a new class that inherits from shape. 'square' inherits from 'rectangle', etc.

It demonstrates the ".. is a .." relationship that inheritance describes perfectly.

This is a pretty common way to introduce OOP as it draws on stuff we all know intuitively. The example proposed, with a bunch of drum-machine concepts, trash, timers & stripes or whatever, is so mind-bogglingly obscure to most people that I'm sure they would have checked out long before the actual OOP lesson begins.

Not sure what the point is.

Oh, and btw, just because you might not ever need to represent a 'duck' object, doesn't mean that it is a stupid example. Say I write a game with a bunch of actors, I might have a base actor class, a bird class that extends that, and a duck class that extends that. Bam, legitimate use for a 'duck' object. There's probably a lot more people wanting to write "a clone of the sims" than "an intractably obscure drum-machine".


> 'square' inherits from 'rectangle', etc. ... It demonstrates the ".. is a .." relationship that inheritance describes perfectly.

Your comment is a perfect example of why this is a terrible way to teach object-oriented design: if your squares and rectangles are mutable, as they usually are in the old shape-hierarchy demonstration, you've just violated the LSP and introduced a subtle bug into your program, in the very first line of your comment. If you understood object-oriented design, you wouldn't do that. You've been tricked into thinking you know how to do object-oriented design, but your sense of competence is an illusion.
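For anyone who hasn't seen the trap spelled out, here's a minimal Java sketch (invented names): code written against Rectangle's contract silently breaks when handed a mutable Square.

```java
class Rectangle {
    protected int width, height;

    void setWidth(int w)  { this.width = w; }
    void setHeight(int h) { this.height = h; }
    int area() { return width * height; }
}

class Square extends Rectangle {
    // A square must keep its sides equal, so each setter changes both...
    @Override void setWidth(int w)  { this.width = w; this.height = w; }
    @Override void setHeight(int h) { this.width = h; this.height = h; }
}

class Client {
    // ...which violates the LSP: callers relying on Rectangle's contract
    // (setting width leaves height alone) get a wrong answer for a Square.
    static int stretch(Rectangle r) {
        r.setWidth(5);
        r.setHeight(4);
        return r.area();  // any Rectangle caller expects 5 * 4 = 20
    }
}
```

With a Square, stretch() returns 16 instead of 20, because the second setter clobbered the first. The "is a" reading of the ontology is exactly what led us into the bug.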

For what it's worth, although I don't have any real experience teaching other people to program, I've found that animated graphics that make sounds hold the attention of nontechnical people a lot better than abstract hierarchies of Platonic geometric shapes. I haven't compared them with bird-flocking simulations.


> It demonstrates the ".. is a .." relationship that inheritance describes perfectly.

It's rather imperfect, see the http://en.wikipedia.org/wiki/Circle-ellipse_problem for caveats.


Depends on the audience. If you're using the "Car : Vehicle" example, you're probably explaining the basics of OO to a newbie. I think that's a fine example. If you're talking about more advanced topics, then you can use a more advanced example. In fact, you can use different examples suited to different topics! Relax, there's no need to be a pedant.


Honestly, when I was a newbie, the simplistic examples drove me crazy because I didn't get the point. I can solve the toy problem effectively with concepts already introduced (functions) why am I burdening my code and my brain with all this new overhead?

The OP's suggestion to use a GUI interface is much better. GUI objects like windows are a common and interesting subject. Ironically, Object Oriented features in languages like Java and C# made a lot more sense to me after writing a Win32 gui app in straight C.


The problem people have with "Car extends Vehicle" is that they expect magic. It's the same impatience we sometimes feel with little kids because they're not grasping something we've been taking for granted for decades. If you're trying to teach someone new to OOP, there is no example that will take them from their current level of knowledge to yours, in one fell swoop. You can't expect them to grapple with decisions such as "inheritance vs. delegation" when they're only now learning what inheritance is all about.

So yeah, "Car extends Vehicle" and "Duck extends Bird" works pretty well at that point. You can try to cram "Penguin" and "Helicopter" and "Flyer" and "Vehicle" down your students' throats in the same lesson, but I doubt that will magically make them digest it properly. And if you can't do that with easy stuff like "Duck" and "Bird", how do you expect them to swallow "NumericHalo" and "ImageDisplay"?


This is ridiculous. I'm teaching part of a course called "programming for biologists" this fall including the part about object oriented programming. Most of my students are ecology grad students. You're telling me I shouldn't use simple animal examples, when these are exactly the types of things my students are going to be modeling?


1. I absolutely agree with the linked article. My perspective is somewhat different, however. I submit that OO hierarchy should not mimic whatever properties whatever real-world objects may happen to possess. Instead, when we solve a problem and have come up with a solution, the OO hierarchy should describe the solution. A different solution, or a different problem, may easily lead to a different hierarchy of (perhaps different) objects. And these would be software objects.

2. Therefore, I suggest that a "simple" example would come from a well-known software system. Such as a window system (rectangle, button, icon, window, etc) which, either as a concrete example or an abstraction, should be easily understood by anyone with a minimal (desktop) computer experience and no programming experience.


I completely agree with this. Duck-laden object oriented programming tutorials made no sense as I tried to transition from C to Objective-C.

The best explanation I found was Chapter 2 of Apple's "Object-Oriented Programming with Objective-C":

Every object has both state (data) and behavior (operations on data). In that, they’re not much different from ordinary physical objects. It’s easy to see how a mechanical device, such as a pocket watch or a piano, embodies both state and behavior.

http://developer.apple.com/library/mac/documentation/cocoa/c...


I actually think these are really good examples. I especially like the shape example, because you quickly run into problems where you can talk about the difference between modeling for a solution and representing an abstraction.


I fail to see the problem. I would bet that there are tons of real games that actually have a Car class that extends Vehicle.


The thing is that there ought to be no trivial examples of object oriented paradigms because they are never trivial.

If you're good, you can create object-oriented interrelations that work but end up with object charts that have a plastic, phoney kind of artificiality that makes little natural sense. If you're a god-like programmer, you can come up with interrelations of code and data that make perfect sense, but they're not always quite object oriented.

As for OO, the best thing I've settled with is this:

- Scoop up all your programming experience and first think about how you want to reuse some code or some data structures

- Then you go and look up appropriate paradigms and language features that could support your case of reuse

- And finally you go figure out how to best do it in a natural way in your programming language.

However, I often start with basic lists and dicts and only after something becomes truly obvious and a near-universal trait, I might make a class out of it if I can better reuse code that way. An archetypical example would be something like a BaseObject that supports some protocol for init, deinit, and refcounting, for example. Everything else can adhere to that protocol. Better yet, I'll make it an interface so I can decouple the implementation of the protocol from the definition of the protocol.
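The BaseObject idea described above might look something like this in Python — a hedged sketch, with the protocol split out as an abstract interface exactly as the comment suggests (all names are mine):

```python
import abc


class Refcounted(abc.ABC):
    """The protocol itself, decoupled from any implementation."""

    @abc.abstractmethod
    def retain(self): ...

    @abc.abstractmethod
    def release(self): ...


class BaseObject(Refcounted):
    """One concrete implementation of the protocol."""

    def __init__(self):
        self.refs = 1

    def retain(self):
        self.refs += 1

    def release(self):
        self.refs -= 1
        if self.refs == 0:
            self.deinit()

    def deinit(self):
        pass  # subclasses free their resources here


obj = BaseObject()
obj.retain()
obj.release()
assert obj.refs == 1
```

Anything that satisfies `Refcounted` can participate in the protocol, whether or not it inherits from `BaseObject` — that is the decoupling of definition from implementation.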


Yes, a thousand times yes. I have never understood why dependency injection is not taught in OOP courses -- as described in the article, DI truly shows why OOP design is useful. Conversely, the "domain model" approach commonly used probably gains some of its attraction from its similarity to E/R database diagrams, but it far from always leads to designs which are actually useful in practice.
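For readers who haven't met dependency injection: the core move is that an object receives its collaborators instead of constructing them. A minimal Python sketch, with all names invented for illustration:

```python
class SmtpMailer:
    """Real implementation (stubbed here)."""
    def send(self, to, body):
        print(f"SMTP -> {to}: {body}")


class FakeMailer:
    """Test double: same interface, no network."""
    def __init__(self):
        self.sent = []

    def send(self, to, body):
        self.sent.append((to, body))


class SignupService:
    def __init__(self, mailer):
        # The collaborator is injected, not constructed internally,
        # so the caller decides which implementation to use.
        self.mailer = mailer

    def register(self, email):
        self.mailer.send(email, "Welcome!")


mailer = FakeMailer()
SignupService(mailer).register("a@example.com")
assert mailer.sent == [("a@example.com", "Welcome!")]
```

Swapping `FakeMailer` for `SmtpMailer` changes nothing in `SignupService` — which is the polymorphism payoff that duck examples fail to motivate.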


Is Kragen making an elaborate joke about duck typing here?

Penguins don’t implement the “fly” method that can be found in birds. And you don’t go around causing things to fly without knowing what kind of bird they are. (Ducks themselves decide when they want to fly, and they certainly seem to know they’re ducks and not vultures.)


> Is Kragen making an elaborate joke about duck typing here?

Not really, no. I was just saying that "fly" methods on hypothetical bird classes fail to motivate OO.


Just one piece in the whole "object orientation blows, but is still the best thing we have to make complex systems understandable to simple people" quandary.

It is true that the real world object examples confuse everything that OO is really used for. It is also true that we need them to make OO approachable at all.

OO is essentially an ontological cluster-f*ck where multitudes of logical and representational levels are trampled on. The thing is, OO uses the ad-hoc mixture of logical confusions that is most people's description of their world. Somehow people's amazing brains muddle through speaking natural language despite its terrible muddling of everything. Thus we can stand the confusion of not really knowing whether "class window" is a glowy thing on the screen, a location in memory or an abstract datatype.


I would also add: don't introduce a language by showing how to compute the Fibonacci sequence.

I never have and never will need to compute the Fibonacci sequence. Recursion is such an infrequent part of everyday programming that it doesn't really matter if a language makes it easier.


It is frequent in some applications, and when programming in languages that encourage you to recur instead of looping (see: functional programming). Also, recursion is one of the most important concepts of computer science.

I didn't really use recursion in my programming until I switched from C++ to Common Lisp.


It's true that it's fairly infrequent, but it doesn't follow that it doesn't matter if a language makes it easier. When it's applicable, it simplifies things enormously.
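A quick illustration of that simplification: flattening arbitrarily nested lists is fiddly with an explicit stack, but nearly a one-liner recursively. A sketch (function name mine):

```python
def flatten(nested):
    """Flatten arbitrarily nested lists into one flat list."""
    out = []
    for item in nested:
        if isinstance(item, list):
            # The structure is recursive, so the code is too.
            out.extend(flatten(item))
        else:
            out.append(item)
    return out


print(flatten([1, [2, [3, 4]], 5]))  # [1, 2, 3, 4, 5]
```

The recursion mirrors the shape of the data, which is why it reads so cleanly when it applies.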


>One disadvantage is that it has to deal with a fair amount of arithmetic; there's lots of `/ float(n)` and `* self.rect.w + 0.5` and `self.size2/2 * (1 - (1 - age2)*2)` and the like, which I think reinforce a common misconception about computer programming: that you need to learn algebra and arithmetic in order to write programs, and that programs mostly deal with numbers.

Is this really such a disadvantage? Algebra and arithmetic are hardly advanced subjects. Isn't it fair to insist that programmers know at least middle school math before they program?


> Isn't it fair to insist that programmers know at least middle school math before they program?

Certainly not! I started programming when I was 4, many years before I got to middle-school math. And there's a great deal of software in the world that has very little to do with algebra or arithmetic. It's easy to open up http://www.canonical.org/~kragen/sw/urscheme/compiler.scm.ht..., for example, to a random page and see no arithmetic at all. The same would be true of https://github.com/evilstreak/markdown-js/blob/master/lib/ma... if JS didn't require you to use arithmetic to iterate over arrays.


I don't get what is so wrong about the car example; because of it I grasped the object-orientation style instantly. What's easier to understand than this!?


Inheritance. Gah, why won't this bad idea die!

Shared calling protocol, shared data sub-structures, fixed compile-time equivalence hierarchy, all clumped together.

Go gets it right. Cut the linkage between composition and equivalence. Composition handles inclusion/merging of existing objects. Interfaces handle equivalence of objects. Interfaces can have hierarchies, but there's no one true hierarchy baked into the objects they describe.
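Go's split between composition and interface-based equivalence can be sketched even in Python, using `typing.Protocol` for structural typing in the role of a Go interface. All names here are illustrative, not from any real codebase:

```python
from typing import Protocol


class Drawable(Protocol):
    """Equivalence by interface: no base class required."""
    def draw(self) -> str: ...


class Position:
    def __init__(self, x, y):
        self.x, self.y = x, y


class Icon:
    def __init__(self, x, y):
        # Inclusion by composition, not inheritance.
        self.pos = Position(x, y)

    def draw(self) -> str:
        return f"icon at ({self.pos.x}, {self.pos.y})"


def render(item: Drawable) -> str:
    return item.draw()


print(render(Icon(2, 3)))  # icon at (2, 3)
```

`Icon` never names `Drawable`; it satisfies the interface simply by having the right method, so no "one true hierarchy" is baked into the object.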


The example for inheritance I liked best was something like this:

Number (implementing trivial exponentiation through multiplication)

RealNumber & ComplexNumber implementing their specific kind of multiplication.

Later on, replace the exponentiation with fast exponentiation. This directly demonstrates the benefit of avoiding code duplication: the change only has to be made in one place.
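That example might be sketched like this in Python — a hedged, minimal version where the superclass owns the naive exponentiation and subclasses supply only their multiplication (method names are mine):

```python
class Number:
    def pow(self, n):
        # Naive exponentiation via repeated multiplication; swapping
        # in fast exponentiation here would speed up every subclass
        # at once -- the one-place change the example demonstrates.
        result = self.one()
        for _ in range(n):
            result = result.mul(self)
        return result


class RealNumber(Number):
    def __init__(self, v):
        self.v = v

    def one(self):
        return RealNumber(1)

    def mul(self, other):
        return RealNumber(self.v * other.v)


class ComplexNumber(Number):
    def __init__(self, re, im):
        self.re, self.im = re, im

    def one(self):
        return ComplexNumber(1, 0)

    def mul(self, other):
        return ComplexNumber(self.re * other.re - self.im * other.im,
                             self.re * other.im + self.im * other.re)


print(RealNumber(2).pow(10).v)       # 1024
c = ComplexNumber(0, 1).pow(2)       # i squared
print((c.re, c.im))                  # (-1, 0)
```

Each subclass defines only what is genuinely different; everything shared lives once, in `Number`.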


The real truth is that no two people think the same way. Some people learn better with visuals; others prefer text.

Some people see no problem with Vehicle->Car or Animal->Duck or whatever other ultra basic OO example.

In the end what's best is whatever can make someone understand the concept.


In my experience, there's a process where adding accuracy (and thus usually complexity) to a domain model, implies removing existing inheritance assumptions.

For instance, in the trivial case an Employee is a Person, but when you go further you discover that there are all sorts of relations that need more detailed fleshing out. Is changing jobs just a call to Employee.setEmployer? That certainly feels awkward. And as that process continues, you get a more complex domain model with comparatively fewer fixed inheritance relations.
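One hypothetical way to sketch that refactoring in Python: promote the employment relation itself to an object, so a job change is new data rather than mutation of a Person. The names here are mine, for illustration:

```python
class Person:
    def __init__(self, name):
        self.name = name


class Employment:
    """The 'is-an-Employee' relation promoted to a first-class object."""
    def __init__(self, person, employer, start):
        self.person, self.employer, self.start = person, employer, start
        self.end = None

    def terminate(self, end):
        self.end = end


alice = Person("Alice")
job1 = Employment(alice, "Acme", 2010)
job1.terminate(2011)
# Changing jobs creates a new relation; history is preserved.
job2 = Employment(alice, "Globex", 2011)
print(job1.end, job2.employer)  # 2011 Globex
```

Nothing here inherits from anything; the "Employee extends Person" edge has become data you can query and change at runtime.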

The extends keyword sort of becomes an extend object, first-class among your domain objects and susceptible to dynamic behaviour.

But can you teach kids that in a first OO class?

EDIT: care to explain the downvote? I'm quite passionate about this subject so would love to debate it.


Good to remember Alan Kay's guideline here - "OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things."


THAT'S WHY IT'S A BEGINNER TUTORIAL. Get off your high horse.


The criticism being put forward isn't at all arrogant. A tutorial for beginners that teaches you the wrong thing is incredibly detrimental. It's good to point that out and fix it.


"Duck extends Bird" is not the wrong thing at all. As other comments have pointed out, the justifications in the article don't apply to someone who is just starting out learning the basics of OO.


"Car extends Vehicle" is a perfectly fine example to introduce someone to OO programming. That's all.


If I understand correctly, this rant is about the use of simplistic examples in teaching OOP, correct? i.e. it's not just about using such examples when debating language features.

If so, then I very strongly disagree with this guy. I have painful memories of trying to come to grips with OO in high school in the early 90s. Sure, those zoological and automotive examples are almost insultingly trivial, but that's because you don't need real-world engineering complexity interfering when you're just trying to wrap your head around inheritance and extension!

Bird ==> Duck makes sense because they're classes. As in, those are classes of animals in the real world, and there is a relationship among them. Expressing that relationship in code doesn't mean you're coding animals, it means you're modeling a real-world class hierarchy with your data definition.

I think that people are forgetting that OOP is actually hard. If you don't already have a very strong grasp on the underlying mechanics of a language (e.g. function pointers and vtables in C++), then it's going to always seem like magic.


I appreciate your disagreement, and I might be wrong. My concern with those examples is not that they're trivial but that they're misleading.


Nothing annoys me more than this kind of analogy. People who are trying to understand it can't figure out what a Duck really is in this scenario, and what it means that it's a subclass of Bird.

Use simple but realistic classes as examples.



