There's nothing wrong with object methods (that's 100% pure syntax vs. a function call) and an implicit "this" scope for symbols (which is just a limited form of dynamic scope[1]). They don't make code hard to understand. OO can be abused to produce bad designs, of course, but that's not an indictment of its syntax.
Non-syntactic aspects are maybe a more involved discussion. For an example, I personally think traditional OO lends itself very nicely to runtime polymorphism. And this is something that more modern languages have really struggled with (take a random new hacker and try to explain to them virtual functions vs. trait objects). Now... polymorphism can be horribly abused. But, it's still useful, and IMHO the current trends are throwing it out with the bathwater.
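To make that concrete, here's a minimal Kotlin sketch of the kind of runtime polymorphism I mean (names are illustrative): the caller holds a base-type reference and the concrete implementation is picked at runtime.

    interface Shape {
        fun area(): Double
    }

    class Circle(private val r: Double) : Shape {
        override fun area() = Math.PI * r * r
    }

    class Rect(private val w: Double, private val h: Double) : Shape {
        override fun area() = w * h
    }

    // Dispatch happens per element, at runtime; no caller changes
    // are needed when a new Shape implementation is added.
    fun totalArea(shapes: List<Shape>): Double = shapes.sumOf { it.area() }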
[1] Something that itself has long since fallen out of fashion but which has real uses. Being able to reference the "current" value of a symbol (in the sense of "the current thing we are working on") is very useful.
I don't see how that follows. The flood of OO languages in the early '90s went hand in hand with a very broad reorientation of the way design was done. That this had bad side effects doesn't refute the fact that OO design really was a fundamentally different way of thinking about problems, and languages that supported it syntactically were doing so to enable this paradigm shift. That's not really "fashion" in the sense that I meant.
On the flip side, the modern language zeitgeist isn't really trying to change things in fundamental ways, except to say "don't do bad OO". But you don't need to reject OO syntax and features (c.f. polymorphism above) to reject bad OO. That part is fashion.
But to me, the rise of OOP in the '90s was driven by the need to program user interfaces, and by the rise of Windows and similar systems. A graphical user interface is naturally represented as a hierarchy of objects, each with its own internal state. Often the language was designed to work in an integrated development environment with a user-interface builder (with language features such as object serialization and reflection).
This is very obvious in Borland Delphi (OO Pascal), Objective-C, VB, Smalltalk, C#.
Then things shifted and people started doing web development where all of sudden you had hundreds of concurrent users on a single server; and then people began (re)inventing languages that handled concurrency well.
Having said that, I don't think the dominating trend today is functional programming but rather "multiparadigm" (as it should be).
I always figured that the main thing people really wanted out of "OO" was simply namespaces. People forget about dealing with older programming systems with global namespace (C, Matlab, Fortran, etc). Namespaces were a big step forward, and in the early days, most of the languages with them were OO, so it would be easy to mistake the benefits of namespaces for OO magic.
OO crapola also mapped reasonably well onto GUI design. That's a whole class of programming that doesn't really exist any more. Where it does (I dunno what people use besides Qt), it looks kind of objecty.
What surprises me is that (at least judging by job adverts) so many people still seem to think OOP is the best thing since sliced bread, as opposed to just another tool to be approached pragmatically.
It's "pure fashion" until you try to massage your new subclass into behaving slightly differently when you're living with class hierarchies that are N levels deep.
Clean compositions, a la Scala traits, can be done with OO. It's just not something I've seen in the wild, or something actively encouraged by any OO teachings I've witnessed. At the same time, composition is pretty much the only way FP is taught (at least in my experience).
This, basically. People should stop thinking about "composition vs. inheritance" and start thinking of implementation-inheritance and thus OOP in a strict sense as "composition plus open recursion". Open recursion (viz. the fact that methods defined on your "base" classes can invoke 'virtual' methods on self and end up depending on behavior that may have been overridden at any level in the class hierarchy) is quite often a misfeature and not what you actually want, given that the invariants involved in making its use meaningful or sane are extremely hard to usefully characterize.
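A minimal Kotlin sketch of what that means in practice (names are illustrative):

    open class Bag {
        protected val items = mutableListOf<Int>()

        open fun add(item: Int) { items.add(item) }

        // addAll() invokes the open add() on `this`: open recursion.
        open fun addAll(newItems: List<Int>) = newItems.forEach { add(it) }
    }

    class CountingBag : Bag() {
        var adds = 0
        override fun add(item: Int) { adds++; super.add(item) }
    }

`CountingBag().addAll(listOf(1, 2, 3))` bumps `adds` to 3 only because the base `addAll()` happens to route through `add()`; if the base class ever inlines that loop, the subclass silently breaks. That's exactly the kind of invariant that's so hard to usefully characterize.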
Open recursion is pretty much core to the most useful OOP idiom, which I would assert is UI component toolkits.
UI relies heavily on a large number of conventions that the user learns and expects to act in certain ways. You can parameterize a UI widget that encapsulates these conventions with function pointers or overridden methods, but the end result is pretty much the same.
I'll strongly agree however that far too few programmers in OO languages pay enough attention to the hard problem of designing for extension. It's the reason that C#, unlike Java, defaults to non-virtual methods.
Except UI trees are probably the worst place for objects.
In PHP, for example, all query builders are meant to be used as a tree of objects, which means that if you want to recursively change the table names used by a query, you're out of luck, because that state is often private and the behaviour isn't implemented.

Or maybe you want to inspect the query as data at runtime, or assert against its shape, or deeply update, delete, copy, or compose parts of it; all of those things are difficult.

See hiccup for a data-oriented UI HTML tree and honeySQL for SQL; there's one for CSS, one for routing, and there are some for all of the state of your app, like Redux and Integrant.

As soon as you have a tree of objects, your coworkers won't know how your objects work until you train them, and even then it won't be as powerful as your standard library. Your coworkers do know how to manipulate arrays, vectors, maps, sets, dictionaries, etc.; self-made objects are obtuse in these use cases.
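For contrast, a rough Kotlin sketch of the data-oriented idea (the types here are hypothetical; the libraries above do the same with plain maps and vectors): the query is just data, so generic traversal and rewriting work out of the box.

    sealed interface Sql
    data class Table(val name: String) : Sql
    data class Eq(val column: String, val value: Any) : Sql
    data class Select(val columns: List<String>, val from: Table, val where: Sql? = null) : Sql

    // Recursively renaming tables is a trivial structural rewrite;
    // no cooperation from a "query object" is required.
    fun renameTables(q: Sql, f: (String) -> String): Sql = when (q) {
        is Table -> Table(f(q.name))
        is Eq -> q
        is Select -> q.copy(from = renameTables(q.from, f) as Table,
                            where = q.where?.let { renameTables(it, f) })
    }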
VB, Delphi and WinForms have all been pretty successful in their domains. You'll need to work harder to convince me, and many other people, that their use of objects was the worst. The objects are part of the standard library, that's practically a given when talking about OO UI.
Objects are a bit weaker for UI as a projection of a domain data model, especially when it needs to be reactive to changes in the underlying data. The more the UI is data-bound, the more I prefer a functional / data flow idiom.
When the goal is to model something “in real life”, OOP tends to map decently well, and it's easy to teach. When you're trying to make sure your program isn't going to go off the rails, limits on mutation are one of the first places to look. When your software has 50 million+ valid states, one should hope it has a large manual QA team behind it. If you have all the possible states held in their own subsystem, you can automate stability much more easily.
Classical OOP forces you to organize around a single and very specific taxonomy; things "in real life" are usually the very opposite of that. I remember textbook examples of inheritance with shapes or animals, both of which actually show clearly why it's a bad idea.
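The shape example makes the problem visible in a few lines (a Kotlin sketch of the usual rectangle/square trap):

    // Taxonomically a square "is-a" rectangle, but the subtype breaks the
    // base class's implicit contract that width and height vary independently.
    open class Rect(var width: Double, var height: Double) {
        open fun setSize(w: Double, h: Double) { width = w; height = h }
        fun area() = width * height
    }

    class Square(side: Double) : Rect(side, side) {
        // Must force both dimensions to stay equal.
        override fun setSize(w: Double, h: Double) { width = w; height = w }
    }

    // Sensible for any Rect, surprising for a Square:
    fun stretch(r: Rect) { r.setSize(r.width * 2, r.height) }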
>>I remember textbook examples of inheritance with shapes or animals
Man, I wonder how many people went through those wondering: who uses this? Can't they show us something real?

Eventually you just kind of tune out, because when you meet the real-world use cases, OO often descends into hierarchical verbosity from hell, with layers and layers of generic abstract stuff, when things should be more direct.
Why is modelling in OOP any more "real life" than with other programming paradigms? I've heard this many times from OOP zealots but I just don't get it. Most examples of this I've seen focus on physical objects such as cars, which is just ludicrous, as your average piece of software is more likely to be dealing with a data structure, such as a user profile, than anything physical.
My favorite example is SimCity. Any time you have a bunch of models that operate mostly independently and somewhat based on neighbors OOP seems to map nicely. When you are taking more abstract concepts or data flows (which is... probably 95% of web programming and 80% of all programming for example) it doesn't map well, and you end up with a lot of natural funkiness because the base modeling language doesn't match the concepts.
Yes, I know, Tim Sweeney has also got an interest in PL theory.
Nonetheless, that was in 2013, it's now 2019, video games are still written in C++ and not any FP language. FP has been "the future" for as long as I've been alive and I don't think it'll ever happen.
Exactly this. I think this is the issue with OO that developers have come to realise, and which more functional languages get around. It's the "banana, gorilla, jungle" problem that Joe Armstrong of Erlang once mentioned in a book, I think, and it boils down to a lack of care about separation of concerns.
Of course you can separate concerns properly in an OO context, but most developers don't. It's much easier to consider properly when the entire language is structured around it.
I think I understand what you meant. I like comparing it with the Directories vs. Labels debate (which Labels presumably won).
Back when Gmail just started, one of the things that it made different from other web-mail services (including hotmail) was letting you "label" emails instead of "moving them" to folders.
The problem with Directories was that, at some point, content might have two different classifications, so the question of putting it in two directories arises (if using that abstraction).
The same thing happens with object hierarchies: even if you start by meticulously defining the hierarchy of your objects for the current domain you are mapping, chances are that in 2 years you will get a trait or piece of data that does not really fall into any one of your defined objects, you will struggle to put it in one or the other, and your encapsulation will start breaking.
That happens "in practice" in real life, and is something that tons of books about OOD, OOA, and OOP define as incorrect architecture in theory, but there was always a disconnect.
I don't code Scala, but aren't these traits approximately the same as interfaces in C#? Interfaces are used a lot in the wild, including in the standard library, and they're often generic: IEnumerable<T>, IComparable<T>, and so on.
> OO can be abused to produce bad designs, of course, but that's not an indictment of its syntax.
It is.
The strength of a language isn't just in what it allows you to do, but also in what it prevents you from doing.
OO in the style of C++/C#/Java is extremely flexible in terms of code organization. This means that most codebases eventually grow towards a mess of different styles and inconsistent design patterns. One guy's abstraction is the next guy's unnecessary plumbing. Teams often have to manage consistency of style and structure through out-of-code agreements.
The advantage of paradigms like procedural or functional programming, as well as trait-based "OO", is that there is generally an obvious way to structure something. Two different programmers working on the same problem are likely to produce similarly structured code. The result is that different programmers will adapt much more easily to different codebases.
> OO in the style of C++/C#/Java is extremely flexible in terms of code organization.
Is the problem perhaps that the wrong lessons were taken from earlier OOPs by later OOPs? Alan Kay in 2003:
> OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP. There are possibly other systems in which this is possible, but I'm not aware of them.
If Alan Kay's definition is at odds with what 99% of the industry calls OOP, then it's not particularly useful; his definition sounds more like micro-services.
Funnily enough it also describes some of my more elaborate shell scripts with various messages flowing into and out of self contained processes.
Traits can be abused too: you can create a trait for every thing, because abstraction is good, and pass traits everywhere. Where do you get the trait? From a factory, of course. Then you roll an IoC container.
"The advantage of paradigms like procedural or functional programming as well as trait based "OO", is that there is generally an obvious way to structure something."
And surely that would be the way that you consider correct.
After seeing codebases with hundreds of global variables I know that OOP is not the only paradigm that can be abused.
You have some good points. But have you tried Idris or Agda?
My problem with OO is one of culture. It's only when you work with big codebases that you start to feel you can't make assumptions about anything. For all I know, the plus operator can send nukes. Then they say it doesn't matter, because they made the perfect lasagna with a million layers.
I know there are a million ways to code with OO to make it better. But I never saw that enforced by tools, which is a must if we are a thousand developers in the same company.
Meanwhile if I see a concept in Haskell I can in most cases just trust the assumptions.
Also the compiler researchers that I adore are those that focus on making proofs and stable code, not the ones trying to dumb down programming to make it accessible.
You build a house, which is more important, a good foundation (built on math) or accessibility for the beginner builders?
The Haskell community is full of terrible culture, things like badly designed libraries that assume the type signatures are the same thing as documentation, or ridiculous constructs designed to seem clever or work around the various straitjackets Haskell imposes.
If you feel you can just 'trust the assumptions' in Haskell but not in OO environments you've probably just been comparing very different codebases written by very different programmers.
In a codebase that uses OOP well it's very easy to understand what assumptions you can make and the tooling can be excellent. For instance, IntelliJ will happily show you all the possible implementations of a virtual method if you're using polymorphism. "It might launch the nukes" is pure Haskell meme noise - the equivalent unexpected behaviour for Haskell would be a difficult to understand space leak.
Building a house rests on a long history of crafting and engineering. If working builders needed to understand abstract algebra we might have a lot fewer of them. The foundation of construction isn’t really mathematical. You can build a house without even doing a single measurement. Accessibility for beginner builders is in fact very important.
I’ve used Agda. It’s extremely difficult. Even pretty basic proofs can be extremely tricky; just figuring out how to use the transitive equality proof syntax is a challenge. You quickly run into the need for stuff like “universe polymorphism” which comes with a huge and terrifying theory. If this is the only way to make decent programs then we’re doomed!
Cognitive overhead is the key phrase for me here: it can be just as difficult to reason about massively long function chains as about a bunch of stateful classes, but in my experience an OO codebase usually follows Conway's law very tightly, in that you get a history of how developers were split up, not discrete units of functionality with well-defined interfaces. It's hard to say something is objectively better, but to me it's along the same lines as not buying a car from a manufacturer with a bad reputation.
The most upvoted answer begins with the words “Unpopular answer” and then continues with cynicism. That definitely feels very HN. (No offense, of course; just found it amusing.)
Although you make some good points, I disagree with the premise that it’s pure fashion. Language syntax doesn’t prevent you from implementing various OO paradigms but when you combine syntax with community norms, things do tend to be constrained. Nobody does traditional inheritance in Go, for example, which actually tends to work out well in use cases where it’s popular.
The times I've ever actually needed 'runtime' polymorphism (usually implemented with subtyping & virtual methods) are utterly dwarfed by the times I've been able to make use of parametric polymorphism, which I find far easier to reason about and use.
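In Kotlin terms, the sort of thing I mean (names illustrative):

    // One definition, type-checked once, usable at every element type;
    // no dispatch decision is deferred to runtime.
    fun <T> firstMatching(xs: List<T>, pred: (T) -> Boolean): T? {
        for (x in xs) if (pred(x)) return x
        return null
    }

    val n = firstMatching(listOf(1, 2, 3)) { it > 1 }          // Int?
    val s = firstMatching(listOf("a", "bb")) { it.length > 1 } // String?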
I'll disagree with your popular answer (it's the most upvoted right now). There is a certain degree of fashion, however pure functions and immutable values make data parallelism extremely easy, almost trivial. Our hardware has almost hit the ceiling on single core performance, so easy parallelism is the way to make use of this.
I do believe that object orientation as a concept has value. A lot of concepts map easily to objects by nature. However, the 90s OOP fashion, exemplified by Java, led to horrible lasagna code. Especially if the underlying space didn't map straightforwardly to the concept of an object, you were adding a layer of abstraction that can lead to misunderstandings.
Pure fashion? I think that's a bit much. Rust was inspired heavily by ML/Haskell/C++, which are not particularly object oriented (C++ is object friendly, not object oriented).
C++ was explicitly about adding support for OOP to C, and most modern languages that have OOP support derive from C++ in that support (often by way of Java), though there are a few that remain directly inspired by Smalltalk without C++ as an intermediary, and a smaller number that don't come from the class-based OO family rooted in Smalltalk at all (JS notably, though modern JS has added class-based features).
Sure, but adding OOP to an imperative language kind of implies it's one of many systems coexisting. Modern C++ is best described as imperative, functional, and object-oriented.
> but adding OOP to an imperative language kind of implies it's one of many systems coexisting.
OOP is inherently an imperative paradigm; you might mean “procedural” instead of “imperative” (certainly, that makes both uses of “imperative” in your post more sensible), but even then, OOP as a paradigm is very closely related to the procedural paradigm.
I don't think the problems with the "classic OOP languages" like Java, C#, C++ and Python stem from the objects themselves. IMO it's the implicit mutability and lack of actual type safety (even for typed languages) that the new crop is trying to solve. And with functions it's more explicit, not "just doing the same thing better". This way it's probably easier for adoption, as people explicitly adopt "new ways" to do things in a "better way".
I agree it's fashion to some extent, and still could be done with OOP, though I think it's nice we are doing it this way (for the above reasons).
W.r.t. dynamic scope, I agree that there are some nice use-cases for it. Racket has good support for dynamic scope (it calls dynamically scoped variables "parameters"), and I've found them to be useful, e.g. for handling environment variables of sub-processes.
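For anyone who hasn't met the idea: here's a crude Kotlin analogue of a dynamically scoped variable (not Racket's parameters, just a ThreadLocal under the hood; the `DynVar` name and API are made up for illustration):

    class DynVar<T>(default: T) {
        private val slot = ThreadLocal.withInitial { default }
        val current: T get() = slot.get()

        // Install `value` for the dynamic extent of `body`, then restore.
        fun <R> withValue(value: T, body: () -> R): R {
            val saved = slot.get()
            slot.set(value)
            try { return body() } finally { slot.set(saved) }
        }
    }

    val logPrefix = DynVar("app")
    fun log(msg: String) = println("[${logPrefix.current}] $msg")

    fun main() {
        log("start")                                  // [app] start
        logPrefix.withValue("worker") { log("step") } // [worker] step
    }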
> There's nothing wrong with object methods (that's 100% pure syntax vs. a function call)
Not really; in OOP a method call like `foo.bar(baz)` sends the `foo` object the message `bar` with argument `baz` (in Kay's current terminology); or, in more 'mainstream' terminology, it looks for a method in the `bar` slot of `foo` and calls it with `baz`.
As far as I'm aware, this is a pretty core concept in OOP: if our code isn't organised along these lines, then we're writing non-OOP code which just-so-happens to use classes/prototypes/methods (in the same way that we can e.g. write OOP code in C which just-so-happens to use procedures/switch/etc.).
(Side caveat: I appreciate that I'm making some assumptions here; one of my pet peeves with OOP is that it's rife with No True Scotsman fallacies about what is "proper" OOP, e.g. see 90% of the content on https://softwareengineering.stackexchange.com )
There is a fundamental asymmetry between the roles of `foo` (object, receiver of message, container of methods) and `baz` (argument, included in message, passed into method).
A function call like `bar(foo, baz)` doesn't have this asymmetry: the order of the arguments is just a convenience, it has no effect on how the `bar` symbol is resolved (in most languages it's via lexical scope, just like every other variable). Swapping them is also trivial, e.g.:
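(A Kotlin sketch, names illustrative:)

    // The swap is a one-liner; `bar` resolves lexically either way, so
    // neither argument position carries any special status.
    fun <A, B, R> flip(f: (A, B) -> R): (B, A) -> R = { b, a -> f(a, b) }

    fun bar(foo: String, baz: Int) = "$foo/$baz"
    val rab = flip(::bar)   // rab(42, "x") == bar("x", 42)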
In contrast, I can't think of an OOP alternative to this which isn't messy (e.g. adding a `rab` method to `baz` via monkey-patching).
Of course, the elephant in the room here is CLOS, but that's sufficiently different from most OOP as-practiced that it's better considered separately (e.g. I'd be more inclined to agree that CLOS methods are "100% pure syntax vs a function call")
> Non-syntactic aspects are maybe a more involved discussion. For an example, I personally think traditional OO lends itself very nicely to runtime polymorphism.
I also disagree with this, again due to the artificial asymmetries that OOP introduces (that objects "contain" methods). In particular, as far as I'm aware OOP simply cannot express return-type polymorphism. The asymmetry between arguments and return values is certainly more fundamental than the completely unneeded distinction between receiver and arguments, but it's still very useful to dispatch on the output rather than the input.
The classic example is `toString` being trivial in OOP (a method which takes some object as input and renders it to a string; the implementation dispatches on the given object); whilst `fromString` is hard (a method which takes a string and returns some object parsed from it; OOP can't dispatch the implementation by "looking inside" the object, since we don't have an object until we've finished parsing).
You can do that sort of thing in Kotlin, which is an OOP (well, multi-paradigm) language:
    inline fun <reified T> String.fromString(): T {
        return when {
            T::class == String::class -> this
            T::class == Int::class -> Integer.valueOf(this)
            else -> TODO()
        } as T
    }

    val str: String = "abc".fromString()
    val int: Int = "123".fromString()
You have to be careful with type inference: if there's no way for the compiler to figure out what type you want from the call site, you'll get an error.
You might say this isn't OOP in the strictest possible sense, but extension methods + type inference + reified types gives what you're asking for in a way that's natural for an OOP programmer.
> You have to be careful with type inference: if there's no way for the compiler to figure out what type you want from the call site, you'll get an error.
I'd say that's a feature, not a bug :)
> You might say this isn't OOP in the strictest possible sense
I would say it's not OOP in any sense. Having one function implement all the different behaviours, and pick which one to run by switching on some data (in this case the class tag) is a classic example of being not OOP.
As a bonus, this ignores dynamic dispatch (the only implementation is for `String`) and it's not polymorphic (the same code doesn't work for different types; instead we have different clauses for each type).
This would be salvageable if `String::fromString` only enforced the type constraint, and dispatched to `T::fromString` to do the actual parsing, e.g.

    inline fun <reified T> String.fromString(): T {
        return when {
            T::class == String::class -> this
            else -> T::fromString(this)
        } as T
    }
I'm not sure whether that would work or not (I've never written Kotlin), but I still think it's "messy" (monkey-patching methods on to classes, reimplementing dynamic dispatch manually, inspecting classes (via the equality check), going out of our way to prevent infinite loops, etc.)