I've been developing medium-to-large projects, mostly in C#, over the past two years. One thing about my style that has changed over that time is that I almost never use inheritance anymore. When I was in school, it was my primary tool for code reuse, but now I find that it conflates solutions to different problems and hampers readability.
My style now is small classes with a few small methods each, and lots of composition. I find it easier to reason about code, separate functionality, and reuse pieces. Also, placing unrelated state in separate classes really discourages a lot of the common mistakes with OOP.
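(A minimal sketch of that style, in Go rather than the commenter's C# since the article under discussion is about Go; all names are hypothetical. Each type owns one small piece of state, and larger behaviour is assembled from parts instead of inherited from a base class.)

package main

import "fmt"

// Clock and Formatter are small, single-purpose types.
type Clock struct{}

func (Clock) Now() string { return "2013-06-20" }

type Formatter struct{ Prefix string }

func (f Formatter) Format(s string) string { return f.Prefix + s }

// Report is composed from the two parts; it inherits nothing.
type Report struct {
    clock  Clock
    format Formatter
}

func (r Report) Header() string { return r.format.Format(r.clock.Now()) }

func main() {
    r := Report{format: Formatter{Prefix: "Generated: "}}
    fmt.Println(r.Header()) // Generated: 2013-06-20
}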
Admittedly I don't do as much reading on the subject as I should. I am probably discovering things that were common knowledge 25 years ago. Maybe this is a common road that developers take. That's fine with me.
Rust's and Go's OOP style appeals to me quite a bit. I have only written small things in those languages, so I'm not exactly well-versed in what larger projects look like. Before reading this article, I wasn't able to put into words what bothered me about the style of inheritance in languages I've used up until now.
Same here. I used to look for places to use inheritance but now only inherit from abstract classes, seal everything else by default, and mark all of my fields readonly.
The value of Go and Rust, I think, is that they are object-oriented languages that try to avoid conflating classes with types.
If you think of a type as a set of possible values, then a subclass is actually a supertype—a superset of the possible values of its parent class—and likewise a superclass is a subtype. The “is-a” relationship is saying “for all instances I of X, I is an instance of Y” which is essentially the definition of superset. That is what the Liskov Substitution Principle is about.
In C++ and other languages with support for object-oriented programming, a subclass is not necessarily a supertype: virtual functions can throw “I don’t make sense in this derived class”. And a supertype is not necessarily a subclass: templates offer the benefit of the doubt when it comes to static interfaces.
This mismatch leads, in my experience, to interfaces that are ill-specified at best and buggy at worst—so it’s nice to see those issues addressed.
The problem I see is that the average programmer is only aware of the OO models popularized by C++, Java and friends, while there are quite a few to choose from.
This is one reason why so many have problems with prototype inheritance in JavaScript, which is also not the only language to have it.
I find it positive that there are so many OO models to choose from.
The class-based OO approach of C++ has proven to be practical for real-world software development. The same goes for the approaches taken by Java and C#, for example. While they can be abused, these approaches generally provide a usable balance between conceptual comprehensibility, consistency, modularity, code sharing/reuse, and the ability to model a variety of realistic domains.
The same can't be said of JavaScript's prototype-based OO. It's not that people have trouble understanding it; they know it perfectly well. It's just that, when used to write real-world software, it is nowhere near as convenient and practical as class-based OO.
This is largely why we see so much effort go toward faking class-based OO within so many different JavaScript libraries and applications. JavaScript doesn't natively (yet) provide the tooling that developers need (class-based OO), so they're forced to try to implement it themselves using what JavaScript does offer. And the results are usually quite horrid. The fact that Self, Io and other prototype-based languages never really took off is similarly related, I suspect.
Choice between different OO models isn't a bad thing. But when it comes to real-world software development, where there are clients, deadlines, and money to be made, developers need tools that work. Class-based OO has proven itself as a useful tool. Prototype-based OO, on the other hand, has been more of a hindrance than a benefit.
You keep showing up in unrelated threads talking about how JavaScript is terrible, even when it isn't remotely relevant to the topic at hand.
Case in point: the models of Rust and Go have nothing to do with prototype-based OO. Typeclasses, as Rust uses them, are in fact about as far as you can get from prototype-based OO: they are less dynamic, not more. There is no virtual dispatch involved at all. Neither Rust nor Go has any support for prototype-based OO.
I understand you dislike JavaScript, but would you please stop derailing comment threads that have nothing to do with it?
> Real world software, as you say, with money, clients and deadlines, is written in JavaScript all the time.
That says a lot about the lack of alternatives for code that must run in a browser and next to nothing about the merits of prototypes.
I have the good fortune to work with a bunch of people who worked on Self, or who worked with people who did. Many of them have told me that, after all of that experience, the conclusion they came to was that prototypes don't really work well in practice, even when you have multiple parents as Self does. (JS lacks that, which makes it even more painfully limited.)
Meanwhile, many JS luminaries, people who understand every corner of the language, are working hard to add class syntax to ES6, or are building transpilers that support it (Erik Arvidsson with Traceur, Jeremy Ashkenas with CoffeeScript).
I find prototypes interesting. I've implemented my own prototype-based language from scratch to understand them better. As soon as I had it working and started writing code in the language, the first thing I found myself wanting was to add classes to it.
It's encouraging to see such functionality at least being proposed for JavaScript. While it doesn't do much to fix many of the other inherent problems that the language is riddled with, it does at least show some small degree of sensibility going forward.
I think that you've missed the fact that we're discussing OO techniques and approaches here, rather than the overall usability of JavaScript.
The software I mentioned includes JavaScript written in exchange for money, for clients, facing deadlines, in addition to open source JavaScript projects that may not face such constraints.
In such code, we often see prototype-based OO not living up to the hype, and not being suitable for real-world work. Due to these failings, we see many, many developers resort to implementing some limited subset or variant of class-based OO using the functionality that JavaScript does offer. The fact that we see this going on time and time again, involving many different developers, all across the world, over the course of many years should tell us something. I think that "something" is that class-based OO is inherently superior to prototype-based OO for the types of problems typically encountered when developing practical software.
Besides, C is the widest-deployed platform in the world. Add in C++, and their share of the market is absolutely huge. Basically every JavaScript implementation today is implemented using one or both of them. The major browsers embedding those JavaScript implementations are implemented in one or both. The operating systems the browsers are running on are implemented in one or both. The servers they're accessing are running operating systems, web servers, database servers, programming language implementations and applications written in C and/or C++. All of this is linked together using various networking devices running code written in C or C++. So every single line of JavaScript code that's executed depends on a stunningly huge amount of C and C++ code also being executed.
I will skip the language war you both (parent and grandparent) seem to like waging and go straight to the point.
Which is that OO based on prototypes is strictly more powerful than the popular class-based kinds. This is proven by the very fact you mention: people write class-based OO implementations in JavaScript with ease; one can create such an implementation in under 100 LOC. So if you think that "superiority" is determined by expressive power, then (popular) class-based OO sucks balls in this comparison :)
I'm including the 'popular' qualification because thinking about class-based object orientation only in terms of C++ and Java is rather limited. If you take a look at Smalltalk you'll see wonders like #become:. Ruby has implemented a substantial subset of what Smalltalk provides, and Python, although through really ugly hacks, is still able to replicate the prototype-ness of JS.
This hints at the real problem, which I think you missed; I mean the age-old difference between static and dynamic typing. Things like Google's Closure Compiler seem to suggest that static typing is desirable in large projects. And while JITing can alleviate most of the performance problems, maintenance remains a pain in the ass. The same thing applies to class- and prototype-based OO: while more expressive for the most part, prototypes are not as maintenance-friendly as classes are. When working with classes, all sorts of tools can provide you with autocompletion, for example; that's much, much harder with prototypes, which can change at any moment.
So, to summarize, I think you blame the right thing but for the incorrect reason. Prototype-based OO is "better" than class-based in terms of expressiveness, but "worse" in terms of maintenance (for now, at least). As with everything, both techniques have their place in programming, and it's just stupid to bash one or the other.
I don't think it's true that JavaScript's prototype-based OO is more expressive merely because it lets you fake partial implementations of class-based OO with relative ease.
Now, if we could reliably implement class-based OO systems comparable to, or even better than, those found in Java, C#, C++, Python, Ruby, and Smalltalk, for example, then maybe we could say prototype-based OO is more expressive.
However, all we end up with are multiple half-assed (for lack of a better term) approaches, each of them critically incomplete in one way or another, and often incompatible with one another, too. This creates the integration and maintenance headaches that should be more than apparent to anyone who has worked on any sizable JavaScript code base.
In fact, I think we see the opposite in reality. It's much more effective to implement prototype-based OO systems using class-based OO languages. We see exactly this with the major JavaScript implementations (i.e., prototype-based OO systems) today being written in C++.
So prototype-based OO fails to deliver both in terms of expressiveness and maintainability. It is just a worse technique, when considering the facts. Trying to pretend it's good at things it clearly isn't good at doesn't make much sense to me.
And there's no "language war". C and C++ do the vast bulk of real work today. No other languages today can compare to them, and the ones that try still depend very heavily on one or both.
> So prototype-based OO fails to deliver both in terms of expressiveness and maintainability. It is just a worse technique, when considering the facts. Trying to pretend it's good at things it clearly isn't good at doesn't make much sense to me.
You are focusing too much on prototype OO.
The models offered by Smalltalk, Beta, CLOS, Haskell and quite a few others are different approaches to the classical model known to the average developer.
This is the main problem with the "offshore everything" model that looks for cheap, dumb developers able to work like cogs and not required to think.
> And there's no "language war". C and C++ do the vast bulk of real work today. No other languages today can compare to them, and the ones that try still depend very heavily on one or both.
When I started developing (1986), C was just another language that most people did not care about. In fact I only cared to write some C code around 1993.
The current situation came to be due to UNIX's influence in the enterprise and the fact that any developer worth his salary isn't going to rewrite the full stack from zero.
It has nothing to do with their suitability for OO development.
I focus on JavaScript and its attempt at prototype-based OO because there's a huge amount of such code that already exists, and more is being written each day. It poses practical problems affecting many developers today.
While Smalltalk, BETA, CLOS, Haskell and other languages do have their own approaches, we really don't see them being used anywhere near as much as JavaScript, C++ or Java are. Realistically, Haskell is rarely seen outside of academia, aside from some very isolated projects. C++, and then Java, put an end to any real momentum that Smalltalk had gained during the 1980s and 1990s. BETA and Common Lisp see very little usage these days, too. They are pretty much irrelevant in a discussion of applied software development.
I wouldn't attribute C's and C++'s success purely to UNIX or enterprise users. In fact, many of the most significant users are open source projects. They're successful because what they offer is what developers need: flexibility, performance, portability and, in C++'s case, sensible and practical object orientation. People go out of their way to use C and C++, even when it isn't as practical (such as under Windows and various embedded platforms). This is quite different from people using JavaScript, which is used mainly because it's the only practical option available for browser-based scripting.
That's true, but I just have to ask: so what? I, for one, don't subscribe to the theory that popularity equals quality. Although both C and C++ are immensely popular in terms of code written in them that doesn't mean they're the last word in language design.
> No other languages today can compare to them, and the ones that try still depend very heavily on one or both.
I don't understand what you mean by "compare to them". If you mean in terms of features - sure there are languages more advanced. If you mean in terms of popularity then I can only say I find such comparisons meaningless.
> Now, if we could reliably implement class-based OO systems comparable to,
> or even better than, those found in Java, C#, C++, Python, Ruby, and Smalltalk,
> for example, then maybe we could say prototype-based OO is more expressive.
We can do this (modulo "reliably", which I'm not sure what you mean by), and that's the problem. You yourself explain this:
> However, all we end up with are multiple half-assed
That's exactly the case. Java, C++, Python and Ruby are extremely incompatible with each other. No wonder that JS implementations of ideas taken from those languages are also incompatible - although less so than the originals are with each other. It's also not that strange that they are incomplete: given that everything you'd like to do in a class-based system you can do with prototypes, there is simply no need for the frameworks to be complete.
Can you post a few examples of what is possible in Java and not possible in JavaScript?
Also remember that language designers can alter syntax to better support their systems, so we need to exclude syntax from comparison.
> It's much more effective to implement prototype-based OO systems using class-based OO languages.
I don't know if it's impossible, but your examples are flawed. Of course you can write any language in any language, as JS VMs are written. But that's a completely different matter; I was talking about using the language itself in a prototype-based manner. I suspect that, due to the static and low-level nature of classes in C++, it's completely, utterly impossible to replicate the JS object system inside C++. That you can write an interpreter that implements this system is obvious and not interesting at all.
As a counterexample, in Python you could, with careful use of metaclasses and the implementation details of classes and objects, create objects whose method lookup order and semantics would be the same as, or close to, those of prototype-based JS - in the language itself.
=======
Your arguments fail to prove that prototype-based-ness somehow fails. It does not, in itself, fail in any way. As I wrote above, show what can be done with classes (single inheritance model) that cannot be replicated using JS (or, better, Io) prototype semantics - that would be convincing. The fact that JavaScript development is a mess - I won't deny it - proves nothing about prototypes and only a bit about JS.
By the way, have you tried Haskell, OCaml or F# and Erlang?
You are doing a lot of hand waving, with very little backup. Where are your credible sources that class based OO is so much better than prototype based OO?
Personally I find prototype based OO very powerful and easy to use, but you won't catch me saying it is "better" in any way than class based OO.
Decades of personal experience dealing with many large-scale, real-world software systems written by many different developers, using many programming languages offering a variety of OO approaches.
With such experience comes the realization that there are big differences between the different approaches. Some are seriously inferior to others. Some generally aren't better or worse than others. Some are obviously better in many ways.
Have you ever worked on a significantly large JavaScript-based system that has been created by many developers over the course of several years, or even a decade? I find most people that have will know what I'm talking about. Experiencing it for yourself is much better than any academic citation I could give you.
I have worked with a relatively large JavaScript system (about 20k lines) and found that the vast majority of maintainability problems had to do with dynamic typing and late binding, not prototypal inheritance. This could be a feature of different codebases making different use of the language.
Right, so that was your opinion. Nothing wrong with having an opinion but on HN we try to back our opinions with real world data, otherwise it becomes just a case of he-said, she-said.
Have you ever considered that the suitability or otherwise of JavaScript for huge projects might be attributed to other aspects of the language, rather than prototype-based OO?
No, it's more than just my opinion. I'm merely stating the facts that exist regardless of my personal preferences or beliefs. I can't provide you with pretty graphs or tables of data to back this up, but the effects are very real and easily observed by those who have experienced similar situations.
JavaScript has many, many, many other problems aside from its use of prototype-based OO. It is a truly awful language in almost every respect. But the problems caused by its broken OO system are very obvious, and the impact they have on the maintainability of JavaScript software is extremely real. Its prototype-based OO is responsible for an entire family of its issues.
> If you think of a type as a set of possible values, then a subclass is actually a supertype—a superset of the possible values of its parent class—and likewise a superclass is a subtype.
You have this exactly backwards. A subclass is a subtype. A superclass is a supertype.
Let's say you have superclass A and derive subclass B from it. In the universe of all possible values, some will be A but not B (they will either be direct instances of A, or instances of some other subclass of A). But every B will also be an A by definition since B is-a A.
Therefore the set of values that are B is a subset of the set of values that are A.
I was a bit backward with my language. What I was trying to get at is that many OO languages encourage you to add fields in derived classes. This makes the derived class a supertype, being a larger set that fully includes the set of its parent class. You’re adding elements to a product type. But this often breaks substitutability, so to conflate class inheritance (a useful concept) with subtyping (another useful concept) is fraught with problems.
> This makes the derived class a supertype, being a larger set that fully includes the set of its parent class.
This is not how "supertype" is typically defined in the literature. I'm not sure what "set of its parent class" means, but I believe the way to look at this is to consider the universe of all possible objects.
In that universe, you will have some objects that are instances of the derived class. Each of those is also an instance of the base class. You may also have instances of the base class that are not instances of the derived class (they may be straight instances of the base, or be instances of some other sibling derived class).
That means the set of instances of the base class completely includes the set of instances of the derived class (since every derived is also a base). Hence, the base class is a supertype. Conversely, since the set of instances of the derived class may not include some instances of the base class, it is a subtype.
> You’re adding elements to a product type.
I don't think you can cleanly map classes to tuple types. Consider a derived class that adds no fields. In that case, it's not an identical type to its base class, but it would be an identical product type.
Yes and no. I think the poster does make an interesting point.
If the parent class is instantiable - which is a code smell already but also very common and even encouraged by some OO evangelizing[1], as is the case for the "colored point" example - then he's right, the derived class which adds fields does admit more values than the parent class does (excluding child values).
[1] Encouraged by the commonly heard advice: "if you don't like exactly what the class does, then derive from it and supply your own customizations." I.e., this is the use of inheritance for ad-hoc customization, as opposed to a design-first approach which would have the parent be abstract (which has its own problems, of course, such as requiring great foresight).
Range and capability are separate concepts. That’s what I’m getting at—conflating them is problematic, not least because it’s trying to statically specify dynamic behaviour.
Here’s a simple example. A 2D vector (x:ℝ × y:ℝ) is a subtype of a 3D vector (x:ℝ × y:ℝ × z:ℝ). The range of the 3D vector includes that of the 2D one, but there is no sane way to make one a subclass of the other, due to differences in behaviour—magnitude, for instance.
> Here’s a simple example. A 2D vector (x:ℝ × y:ℝ) is a subtype of a 3D vector (x:ℝ × y:ℝ × z:ℝ).
Again, I think that's backwards. Every 3D vector is also a 2D vector if you ignore its z coordinate, so 3D vectors are a subtype of 2D vectors (assuming some type system that has subtyping between tuples of fewer fields).
You could implement this by having 3D vector subclass 2D, but as you note that's fraught with peril. But that's because substitutability requires meaningful semantic behavior and not just matching signatures. Just because two objects can both "foo" doesn't mean they are substitutable in a pragmatic sense. They have to do the appropriate thing when you foo them.
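(A small Go sketch of that last point, with hypothetical types: both vectors satisfy the same interface because the signatures match, but nothing guarantees that substituting one for the other is semantically meaningful.)

package main

import (
    "fmt"
    "math"
)

type Vec2 struct{ X, Y float64 }
type Vec3 struct{ X, Y, Z float64 }

func (v Vec2) Magnitude() float64 { return math.Hypot(v.X, v.Y) }
func (v Vec3) Magnitude() float64 { return math.Sqrt(v.X*v.X + v.Y*v.Y + v.Z*v.Z) }

// Both types satisfy Magnituder because the signatures match, but a
// matching signature alone says nothing about substitutability.
type Magnituder interface{ Magnitude() float64 }

func main() {
    vs := []Magnituder{Vec2{3, 4}, Vec3{3, 4, 12}}
    for _, v := range vs {
        fmt.Println(v.Magnitude()) // 5, then 13
    }
}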
> If you think of a type as a set of possible values, then a subclass is actually a supertype—a superset of the possible values of its parent class—and likewise a superclass is a subtype.
Nope. A subclass is usually (not always!) a subtype, not the other way round. Or, more specifically (and accurately), most statically typed class-based OO languages treat subclasses in their type systems as subtypes by default, i.e., they assume that a subclass instance can act as an instance of its superclass, LSP-wise. This, I believe, is as accurate a statement as one can make (at this level of generality).
> The “is-a” relationship is saying “for all instances I of X, I is an instance of Y” which is essentially the definition of superset. That is what the Liskov Substitution Principle is about.
It's a definition of a superset, if Y is the superset and X is the subset.
> In C++ and other languages with support for object-oriented programming, a subclass is not necessarily a supertype: virtual functions can throw “I don’t make sense in this derived class”.
That's a part of why I believe that static typing doesn't make too much sense when it comes to OO languages (with (sub)classes). If static typing is supposed to take care of subtype assignability statically and the semantics of the behaviour of derived classes can break the LSP assumptions, static typing becomes sort of worthless.
That interface isn't great, but that interface isn't coming from Go. I realize you aren't claiming it is, and you're absolutely right that Go's interfaces don't automatically "solve" issues of bad design. Programmers are free to come up with bad interface designs in Go just as easily as they came up with bad class designs in Java or C++. This is just to avoid confusion for people who aren't generally familiar with Go but read this article.
The Go standard io library has a Reader interface:
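type Reader interface {
    Read(p []byte) (n int, err error)
}

a Writer interface:

type Writer interface {
    Write(p []byte) (n int, err error)
}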
and a ReadWriter interface which uses type embedding to combine both:
type ReadWriter interface {
    Reader
    Writer
}
Idiomatic Go will break interfaces up into pretty discrete bits of functionality and use type embedding to build up more complex interfaces. The article does show this a little bit with its seekable interface, but fails in some other areas, and I kind of wish it had just used the real io package interfaces, or else examples of something not covered better by the actual Go standard library.
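(For readers new to Go, a minimal, hypothetical example of how this works: a type never declares that it implements anything; having the right method set is enough, so the memBuf sketch below is a Reader, a Writer, and therefore a ReadWriter.)

package membuf

import "io"

// memBuf is a hypothetical in-memory buffer.
type memBuf struct {
    data []byte
}

func (b *memBuf) Read(p []byte) (n int, err error) {
    if len(b.data) == 0 {
        return 0, io.EOF
    }
    n = copy(p, b.data)
    b.data = b.data[n:]
    return n, nil
}

func (b *memBuf) Write(p []byte) (n int, err error) {
    b.data = append(b.data, p...)
    return len(p), nil
}

// Compile-time checks: *memBuf satisfies all three interfaces implicitly.
var _ io.Reader = (*memBuf)(nil)
var _ io.Writer = (*memBuf)(nil)
var _ io.ReadWriter = (*memBuf)(nil)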
Well, sure, but Reader and Writer are interfaces that exist in Java too :)
Anyway, I think we agree on the general issue, but I actually suspect Go's approach _does_ solve some bad design issues that crop up in common OO, namely the fragile base class one.
I just don't think it impacts the misspecification one.
I think this points to a separate issue as well: often we have a choice to express the same thing through types or through state. As the influence of functional languages grows, we tend to choose types more often.
It wouldn't be surprising to have a ReadOnlyFile type and a ReadWriteFile type. This can easily be justified because an individual file object doesn't need to change its read/write property after it has been created. But conceptually, files can be written to, just not always.
Another example makes that clearer. Adults can do things that minors can't. So, for instance, a signContract() method would have to go on the Adult type, not on the Person type. But people grow older, so at any point in time some objects conceptually change their type from Minor to Adult.
Technically this is not a difficult problem to solve. But it shows the disconnect between our natural language concepts and the ever more type centered way of modelling we use in modern software design.
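(A hedged Go sketch of the two options, with hypothetical names: encode the capability in a type and convert from state to type at the moment the rules allow it, or keep it as state and check at call time.)

package main

import (
    "errors"
    "fmt"
)

type Person struct{ Age int }

// Option 1: capability as a type. Only an Adult has SignContract.
type Adult struct{ Person }

func (Adult) SignContract() string { return "signed" }

// AsAdult is where an object conceptually "changes type" from minor
// to adult: state (Age) is converted into a capability (Adult).
func AsAdult(p Person) (Adult, error) {
    if p.Age < 18 {
        return Adult{}, errors.New("still a minor")
    }
    return Adult{p}, nil
}

// Option 2: capability as state, checked at call time.
func SignContract(p Person) error {
    if p.Age < 18 {
        return errors.New("still a minor")
    }
    return nil
}

func main() {
    if a, err := AsAdult(Person{Age: 21}); err == nil {
        fmt.Println(a.SignContract()) // signed
    }
    fmt.Println(SignContract(Person{Age: 15})) // still a minor
}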
My point isn't a formal one but rather an observation about software design trends.
More importantly, though, what FP does in both its dynamically and statically typed incarnations is to prefer (and sometimes enforce) immutability.
Immutable objects support a much more type centered view of the world. Theoretically, you could have something like a VehicleAtFullSpeed type (dynamic or static) because the speed attribute of an individual object never changes.
> then a subclass is actually a supertype—a superset of the possible values of its parent class—and likewise a superclass is a subtype
I'm sure this makes sense and is well-thought-out, but I just can't understand it. What difference does it make whether `Dog < Animal` are classes or types? Still, any dog is an animal, and not every animal is a dog. Or am I missing something?
You're not missing anything; I believe the comment just got it flipped, and was referring[1] to problems relating to covariance and contravariance when one tries to view OOP in terms of type theory.
As a consequence/means of avoiding Russell's paradox, the standard axiomatization of set theory (ZFC) doesn't allow sets to contain themselves (so there is no set of all sets). Thus, if types are sets of values, and types are values, you've got a problem, since the type of types seems to be a set that contains itself. Of course, I think this just means that well-founded set theories like ZFC are a poor choice for modeling type systems for languages with first-class types. There are set theories that allow sets to contain themselves [http://en.wikipedia.org/wiki/Non-well-founded_set_theory]. Such set theories don't replace ZFC, but rather extend it, adding "hypersets", which are the sets that contain themselves - the well-founded sets behave as usual.
I've recently been watching Structure and Interpretation of Computer Programs, where it is demonstrated that first-class functions and assignment with nested scope are enough to implement an object-oriented system.
The simplest example given is a Counter object. Translated from Lisp to JavaScript, it goes like this:
function make_counter() {
    var val = 0; // private state, captured by the closures below

    function get() {
        return val;
    }

    function inc() {
        val += 1;
    }

    return [get, inc];
}

function get_count(counter) {
    return counter[0]();
}

function inc_count(counter) {
    counter[1]();
}

var counter = make_counter();
console.log(get_count(counter).toString()); // 0
inc_count(counter);
inc_count(counter);
console.log(get_count(counter).toString()); // 2
This gives polymorphism (you can define a make_fast_counter() that increases by 2 and still use get_count() and inc_count() with it) and encapsulation (you cannot decrease the count).
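(The same trick carries over to Go, for what it's worth - a minimal sketch with hypothetical names, including the polymorphism claim: a fast counter built from closures is interchangeable with the normal one.)

package main

import "fmt"

// A counter is just a pair of closures sharing captured state.
type counter struct {
    get func() int
    inc func()
}

func makeCounter(step int) counter {
    val := 0 // private: only the two closures can reach it
    return counter{
        get: func() int { return val },
        inc: func() { val += step },
    }
}

func main() {
    slow := makeCounter(1)
    fast := makeCounter(2) // the "make_fast_counter" variant
    slow.inc()
    fast.inc()
    fmt.Println(slow.get(), fast.get()) // 1 2
}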
The venerable master Qc Na was walking with his student, Anton. Hoping to
prompt the master into a discussion, Anton said "Master, I have heard that
objects are a very good thing - is this true?" Qc Na looked pityingly at
his student and replied, "Foolish pupil - objects are merely a poor man's
closures."
Chastised, Anton took his leave from his master and returned to his cell,
intent on studying closures. He carefully read the entire "Lambda: The
Ultimate..." series of papers and its cousins, and implemented a small
Scheme interpreter with a closure-based object system. He learned much, and
looked forward to informing his master of his progress.
On his next walk with Qc Na, Anton attempted to impress his master by
saying "Master, I have diligently studied the matter, and now understand
that objects are truly a poor man's closures." Qc Na responded by hitting
Anton with his stick, saying "When will you learn? Closures are a poor man's
object." At that moment, Anton became enlightened.
From a high level, programming in Go looks to me very much like programming with Microsoft's COM: it's interface-oriented, with no inheritance. You could say that this is the purest form of OO if you take Alan Kay's viewpoint that the emphasis should be on the messages and not the objects. In COM you have QueryInterface, and in Go you have interface matching and casting. QueryInterface or an interface cast is like a hinge between dynamic and static typing: you get great flexibility but still preserve your abstractions, in that you never depend on an implementation.

Of course you can do this in, say, Java by using instanceof to match against an interface, but the difference with COM and Go is that there it is a primary mechanism, and so code written in the language tends to use this pattern a lot. I.e., it's encouraged.
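(A minimal sketch of that hinge, using the standard io interfaces; the file name is hypothetical. The type assertion asks at run time whether a statically typed value also satisfies another interface, much like QueryInterface.)

package main

import (
    "bytes"
    "fmt"
    "io"
    "os"
)

// describe queries r for an extra capability at run time,
// much like COM's QueryInterface.
func describe(r io.Reader) {
    if s, ok := r.(io.Seeker); ok {
        _, _ = s.Seek(0, io.SeekStart) // the capability is now statically typed
        fmt.Println("reader is also seekable")
    } else {
        fmt.Println("reader is not seekable")
    }
}

func main() {
    if f, err := os.Open("example.txt"); err == nil { // hypothetical file
        describe(f) // *os.File has Seek: "reader is also seekable"
        f.Close()
    }
    describe(bytes.NewBufferString("hi")) // *bytes.Buffer has no Seek
}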
It seems like it's allowed explicitly by LWN: "The following subscription-only content has been made available to you by an LWN subscriber. Thousands of subscribers depend on LWN for the best news from the Linux and free software communities. If you enjoy this article, please consider accepting the trial offer on the right. Thank you for visiting LWN.net!"
"Where is it appropriate to post a subscriber link?
Almost anywhere. Private mail, messages to project mailing lists, and blog entries are all appropriate. As long as people do not use subscriber links as a way to defeat our attempts to gain subscribers, we are happy to see them shared."