Ask HN: Why do new(ish) programming languages eschew OOP features?
231 points by vanilla-almond on June 30, 2019 | 227 comments
Some relatively new or popular languages are not object-oriented, although they may have object-like features. Examples include: Rust, Go, Julia, Nim.

I have always struggled with OOP and have never found it a natural way of programming. But many other programmers will disagree of course.

I find it fascinating that some new languages have chosen to eschew the OOP model. Why do you think that is? And what do you think of this trend (if it is indeed a trend)?




Unpopular answer: pure fashion.

There's nothing wrong with object methods (that's 100% pure syntax vs. a function call) and an implicit "this" scope for symbols (which is just a limited form of dynamic scope[1]). They don't make code hard to understand. OO can be abused to produce bad designs, of course, but that's not an indictment of its syntax.

Non-syntactic aspects are maybe a more involved discussion. For example, I personally think traditional OO lends itself very nicely to runtime polymorphism. And this is something that more modern languages have really struggled with (take a random new hacker and try to explain to them virtual functions vs. trait objects). Now... polymorphism can be horribly abused. But it's still useful, and IMHO the current trends are throwing it out with the bathwater.

[1] Something that itself has long since fallen out of fashion but which has real uses. Being able to reference the "current" value of a symbol (in the sense of "the current thing we are working on") is very useful.
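To make the polymorphism point concrete, here's a minimal sketch of plain virtual dispatch (Kotlin, with made-up names; any traditional OO language reads the same):

    // Classic runtime polymorphism: the call site picks behavior at runtime.
    abstract class Logger {
        abstract fun log(msg: String)
    }

    class ConsoleLogger : Logger() {
        override fun log(msg: String) = println(msg)
    }

    class PrefixLogger(private val prefix: String, private val inner: Logger) : Logger() {
        override fun log(msg: String) = inner.log("$prefix: $msg")
    }

    // The caller neither knows nor cares which concrete type it receives.
    fun runJob(logger: Logger) = logger.log("starting")

No traits, no type classes, no vtable-vs-fat-pointer discussion needed to explain it to that random new hacker.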


Unpopular answer to the unpopular answer: OOP was pure fashion

.. “Our customers wanted OO prolog so we made OO prolog” .. http://harmful.cat-v.org/software/OO_programming/why_oo_suck...

.. screw things up with any idiotic "object model" crap. .. http://harmful.cat-v.org/software/c++/linus


I don't see how that follows. The flood of OO languages in the early 90's went hand in hand with a very broad reorientation of the way design was done. The fact that it had bad side effects isn't really a rejection of the fact that OO design really was a fundamentally different way of thinking about problems, and languages that supported it syntactically were doing so to enable this paradigm shift. That's not really "fashion" in the sense that I meant.

On the flip side, the modern language zeitgeist isn't really trying to change things in fundamental ways, except to say "don't do bad OO". But you don't need to reject OO syntax and features (c.f. polymorphism above) to reject bad OO. That part is fashion.


Fashion is surely a part of it.

But to me, the rise of OOP in the '90s was driven by the need to program user interfaces, and by the rise of Windows and the like. A graphical user interface is naturally represented as a hierarchy of objects, each with its own internal state. Often the language was designed to work in an integrated development environment with a user interface builder (with language features such as object serialization and reflection).

This is very obvious in Borland Delphi (OO Pascal), Objective-C, VB, Smalltalk, C#.

Then things shifted and people started doing web development, where all of a sudden you had hundreds of concurrent users on a single server; and then people began (re)inventing languages that handled concurrency well.

Having said that, I don't think the dominating trend today is functional programming but rather "multiparadigm" (as it should be).


> On the flip side, the modern language zeitgeist isn't really trying to change things in fundamental ways

I disagree; I'd say all of the quoted languages are attempting to fundamentally change things to get around the expression problem[0][1].

[0] https://news.ycombinator.com/item?id=11683379

[1] https://en.m.wikipedia.org/wiki/Expression_problem


Isn't that obvious? If fashion is a reason for something to come into existence then fashion can also be a reason for it to fade away.


I always figured that the main thing people really wanted out of "OO" was simply namespaces. People forget about dealing with older programming systems with a single global namespace (C, Matlab, Fortran, etc.). Namespaces were a big step forward, and in the early days, most of the languages with them were OO, so it would be easy to mistake the benefits of namespaces for OO magic.

OO crapola also mapped reasonably well onto GUI design. That's a whole class of programming that doesn't really exist any more. Where it does (I dunno what people use besides Qt), it looks kind of objecty.


> simply namespaces

1980 module, Modula-2

1979? package, Ada

http://www.drdobbs.com/open-source/history-and-goals-of-modu...


Not sure what this says, except that this industry remains terrible about learning from the past.


It might say people really wanted more than simply namespaces ;-)


What surprises me is that (at least judging by job adverts) so many people still seem to think OOP is the best thing since sliced bread, as opposed to just another tool to be approached pragmatically.


It's "pure fashion" until you try to massage your new subclass into behaving slightly differently when you're living with class hierarchies that are N levels deep.

Clean compositions, à la Scala traits, can be done with OO. It's just not something I've seen in the wild, or something actively encouraged by any OO teachings I've witnessed. At the same time, composition is pretty much the only way FP is taught (at least in my experience).


This, basically. People should stop thinking about "composition vs. inheritance" and start thinking of implementation-inheritance and thus OOP in a strict sense as "composition plus open recursion". Open recursion (viz. the fact that methods defined on your "base" classes can invoke 'virtual' methods on self and end up depending on behavior that may have been overridden at any level in the class hierarchy) is quite often a misfeature and not what you actually want, given that the invariants involved in making its use meaningful or sane are extremely hard to usefully characterize.
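A minimal Kotlin sketch of the trap (the classic instrumented-collection example; names are hypothetical):

    open class Bag {
        protected val items = mutableListOf<Int>()
        open fun add(x: Int) { items.add(x) }
        // addAll() goes through the *dynamic* add(): this is open recursion.
        open fun addAll(xs: List<Int>) { xs.forEach { add(it) } }
    }

    class CountingBag : Bag() {
        var count = 0
        override fun add(x: Int) { count++; super.add(x) }
        override fun addAll(xs: List<Int>) { count += xs.size; super.addAll(xs) }
    }

    // CountingBag().addAll(listOf(1, 2)) leaves count == 4, not 2: the base
    // addAll() already routes through the overridden add(). Whether it does
    // is exactly the kind of unstated base-class invariant meant above.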


Open recursion is pretty much core to the most useful OOP idiom, which I would assert is UI component toolkits.

UI heavily relies on a large number of conventions that the user learns to expect to act in certain ways. You can parameterize a UI widget that encapsulates these conventions with function pointers or overridden methods, but the end result is pretty much the same.

I'll strongly agree however that far too few programmers in OO languages pay enough attention to the hard problem of designing for extension. It's the reason that C#, unlike Java, defaults to non-virtual methods.


Except UI trees are probably the worst place for objects.

In PHP, for example, query builders are meant to be used as a tree of objects, which means that if you want to recursively change the table names used by a query, you're out of luck, because that state is often private and the behaviour isn't implemented.

Or maybe you want to inspect the query as data at runtime, assert against its shape, or deeply update, delete, copy, or compose parts of it. All of those things are difficult.

But use a data-oriented library and suddenly they're easy, transparent, and predictable. Self-plug: https://github.com/slifin/beeline-php

See hiccup for a data-oriented UI/HTML tree and honeySQL for SQL; there are equivalents for CSS and routing, and some for all of the state of your app, like redux and Integrant.

As soon as you have a tree of objects, your coworkers won't know how your objects work until you train them, and even then it won't be as powerful as your standard library. Your coworkers do know how to manipulate arrays, vectors, maps, sets, dictionaries, etc.; self-made objects are obtuse in these use cases.


VB, Delphi and WinForms have all been pretty successful in their domains. You'll need to work harder to convince me, and many other people, that their use of objects was the worst. The objects are part of the standard library, that's practically a given when talking about OO UI.

Objects are a bit weaker for UI as a projection of a domain data model, especially when it needs to be reactive to changes in the underlying data. The more the UI is data-bound, the more I prefer a functional / data flow idiom.


When the goal is to model something “in real life”, OOP tends to map decently well and it’s easy to teach. When you’re trying to make sure your program isn’t going to go off the rails, limiting mutation is one of the first places to look. When your software has 50mm+ valid states, one should hope they have a large manual QA team. If you have all the possible states held in their own subsystem, you can automate stability much more easily.


Not even there.

Classical OOP forces you to organize around a single and very specific taxonomy; things "in real life" are usually the very opposite of that. I remember textbook examples of inheritance with shapes or animals, both of which actually show clearly why it's a bad idea.


>>I remember textbook examples of inheritance with shapes or animals

Man, I wonder how people went through this, wondering: who uses this? Can't they show us something real?

Eventually you just kind of tune out, because when you meet real-world use cases, OO often descends into hierarchical verbosity from hell, with layers and layers of generic abstract stuff where things should be more direct.


Why is modelling in OOP any more "real life" than with other programming paradigms? I've heard this many times from OOP zealots but I just don't get it. Most examples of this I've seen focus on physical objects such as cars, which is just ludicrous, as your average piece of software is more likely to be dealing with a data structure, such as a user profile, than anything physical.


My favorite example is SimCity. Any time you have a bunch of models that operate mostly independently and somewhat based on neighbors, OOP seems to map nicely. When you are dealing with more abstract concepts or data flows (which is... probably 95% of web programming and 80% of all programming, for example), it doesn't map well, and you end up with a lot of natural funkiness because the base modeling language doesn't match the concepts.


Game engines tend to model real life and they are usually very OOP.


According to John Carmack, the acclaimed expert in the field of game programming, "Functional Programming is the Future" - https://www.youtube.com/watch?v=1PhArSujR_A


Yes, I know, Tim Sweeney has also got an interest in PL theory.

Nonetheless, that was in 2013, it's now 2019, video games are still written in C++ and not any FP language. FP has been "the future" for as long as I've been alive and I don't think it'll ever happen.


Mostly, I guess, speed matters there and you have few options apart from C++.


Exactly this. I think this is the issue with OO that developers have come to realise, and which more functional languages get around. It's the "banana, gorilla, forest" problem that Joe Armstrong of Erlang once mentioned in a book I think, and it boils down to the lack of care when considering separation of concerns.

Of course you can separate concerns properly in an OO context, but most developers don't. It's much easier to consider properly when the entire language is structured around it.


I think I understand what you meant. I like comparing it with the Directories vs Labels comparison (that presumably was won by Labels).

Back when Gmail just started, one of the things it did differently from other web-mail services (including Hotmail) was letting you "label" emails instead of "moving them" to folders.

The problem with Directories was that, at some point, content might have two different classifications, so the question of putting it in two directories arises (if using that abstraction).

The same thing happens with object hierarchies: even if you start by meticulously defining the hierarchy of your objects given the current domain you are mapping, chances are that in 2 years you will get a trait or piece of data that does not really fall into one of your defined objects, you will struggle to put it in one or the other, and your encapsulation will start breaking.

That happens "in practice" in real life, and is something that tons of books about OOD, OOA, and OOP define as incorrect architecture in theory, but there was always a disconnect.


> It's just not something I've seen in the wild

I don't code Scala, but aren't these traits approximately the same as interfaces in C#? Interfaces are used a lot in the wild, including in the standard library; they're often generic: IEnumerable<T>, IComparable<T>, and so on.


> OO can be abused to produce bad designs, of course, but that's not an indictment of its syntax.

It is.

The strength of a language isn't just in what it allows you to do, but also in what it prevents you from doing.

OO in the style of C++/C#/Java is extremely flexible in terms of code organization. This means that most codebases eventually grow towards a mess of different styles and inconsistent design patterns. One guy's abstraction is the next guy's unnecessary plumbing. Teams often have to manage consistency of style and structure through out-of-code agreements.

The advantage of paradigms like procedural or functional programming, as well as trait-based "OO", is that there is generally an obvious way to structure something. Two different programmers working on the same problem are likely to produce similarly structured code. The result is that programmers adapt much more easily to different codebases.


> OO in the style of C++/C#/Java is extremely flexible in terms of code organization.

Is the problem perhaps that the wrong lessons were taken from earlier OOPs by later OOPs? Alan Kay in 2003:

> OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP. There are possibly other systems in which this is possible, but I'm not aware of them.

* http://userpage.fu-berlin.de/~ram/pub/pub_jf47ht81Ht/doc_kay...

* https://news.ycombinator.com/item?id=19415983 (via)


If Alan Kay's definition is at odds with what 99% of the industry call OOP then it's not particularly useful, his definition sounds more like micro-services.

Funnily enough it also describes some of my more elaborate shell scripts with various messages flowing into and out of self contained processes.


Traits can be abused too, you can create a trait for every thing because abstraction is good and pass traits everywhere. Where to get the trait? From a factory of course. Then you roll an IoC container.


"The advantage of paradigms like procedural or functional programming as well as trait based "OO", is that there is generally an obvious way to structure something."

And surely that would be the way that you consider correct.

After seeing codebases with hundreds of global variables I know that OOP is not the only paradigm that can be abused.


You have some good points. But have you tried Idris or Agda?

My problem with OO is one of culture. It’s only when you work with big codebases that you start to feel you can’t make assumptions about anything. For all I know the plus operator can send nukes. Then they say it doesn’t matter because they made the perfect lasagna with a million layers.

I know there’s a million ways to code with OO to make it better. But I never saw that enforced by tools, which is a must if we are a thousand developers in the same company.

Meanwhile if I see a concept in Haskell I can in most cases just trust the assumptions.

Also the compiler researchers that I adore are those that focus on making proofs and stable code, not the ones trying to dumb down programming to make it accessible.

If you build a house, which is more important: a good foundation (built on math) or accessibility for beginner builders?


The Haskell community is full of terrible culture, things like badly designed libraries that assume the type signatures are the same thing as documentation, or ridiculous constructs designed to seem clever or work around the various straitjackets Haskell imposes.

If you feel you can just 'trust the assumptions' in Haskell but not in OO environments you've probably just been comparing very different codebases written by very different programmers.

In a codebase that uses OOP well it's very easy to understand what assumptions you can make and the tooling can be excellent. For instance, IntelliJ will happily show you all the possible implementations of a virtual method if you're using polymorphism. "It might launch the nukes" is pure Haskell meme noise - the equivalent unexpected behaviour for Haskell would be a difficult to understand space leak.


Building a house rests on a long history of crafting and engineering. If working builders needed to understand abstract algebra we might have a lot fewer of them. The foundation of construction isn’t really mathematical. You can build a house without even doing a single measurement. Accessibility for beginner builders is in fact very important.

I’ve used Agda. It’s extremely difficult. Even pretty basic proofs can be extremely tricky; just figuring out how to use the transitive equality proof syntax is a challenge. You quickly run into the need for stuff like “universe polymorphism” which comes with a huge and terrifying theory. If this is the only way to make decent programs then we’re doomed!


Cognitive overhead is the key word for me here - it can be just as difficult to reason about massively long function chains as a bunch of stateful classes, but usually in my experience an OO codebase follows Conway's law very tightly, in that you get a history of how developers were split up - not discrete units of functionality with well-defined interfaces. It’s hard to say something is objectively better, but to me it’s along the same lines as not buying a car from a manufacturer with a bad reputation.


That's not an OO thing. That is Conway's law. It applies to all software.


> usually in my experience an OO codebase follows Conway's law very tightly

The grandparent comment is saying that OO codebases express Conway's law more fully (as it is definitely a spectrum).

I don't have an opinion on it either way (beyond stating that adherence to Conway's law is a spectrum rather than binary), just clarifying.


The most upvoted answer begins with the words “Unpopular answer” and then continues with cynicism. That definitely feels very HN. (No offense, of course; just found it amusing.)

Although you make some good points, I disagree with the premise that it’s pure fashion. Language syntax doesn’t prevent you from implementing various OO paradigms but when you combine syntax with community norms, things do tend to be constrained. Nobody does traditional inheritance in Go, for example, which actually tends to work out well in use cases where it’s popular.


The times I've ever actually needed 'runtime' polymorphism (usually implemented w/ subtype & virtual methods) are utterly dwarfed by the times I've been able to make use of parametric polymorphism, which I find far easier to reason about and use.


I'll disagree with your popular answer (it's the most upvoted right now). There is a certain degree of fashion; however, pure functions and immutable values make data parallelism extremely easy, almost trivial. Our hardware has almost hit the ceiling on single-core performance, so easy parallelism is the way to make use of this.

I do believe that object orientation as a concept has value. A lot of concepts map easily to objects by nature. However, the 90s OOP fashion, exemplified by Java, led to horrible lasagna code. Especially if the underlying space didn't map straightforwardly to the concept of an object, then you're adding a layer of abstraction that can lead to misunderstandings.


Purity helps a lot but it's far from making automatic parallelism trivial. See for example how few FP languages do it. Not many.


You mean how few enforce it? About as many as enforce OOP. It's more of a strong suggestion, like using objects in Java.


Pure fashion? I think that's a bit much. Rust was inspired heavily by ML/Haskell/C++, which are not particularly object oriented (C++ is object friendly, not object oriented).


C++ was explicitly about adding support for OOP to C, and most modern languages that have OOP support derive from C++ in that support (often by way of Java), though there are a few that remain that are directly inspired by Smalltalk without C++ as an intermediary, and a smaller number that don't come from the class-based OO family rooted in Smalltalk (JS notably, though modern JS has added class-based features).


Sure, but adding OOP to an imperative language kind of implies it's one of many systems coexisting. Modern C++ is best described as imperative, functional, and object-oriented.


> but adding OOP to an imperative language kind of implies it's one of many systems coexisting.

OOP is inherently an imperative paradigm; you might mean “procedural” instead of “imperative” (certainly, that makes both uses of “imperative” in your post more sensible), but even then, OOP as a paradigm is very closely related to the procedural paradigm.


I don't think the problems with the "classic OOP languages" like Java, C#, C++ and Python stem from the objects themselves. IMO it's the implicit mutability and lack of actual type safety (even for typed languages) that the new crop is trying to solve. And with functions it's more explicit, not "just doing the same thing better". This way it's probably easier to drive adoption, as people explicitly adopt "new ways" to do things in a "better way".

I agree it's fashion to some extent, and still could be done with OOP, though I think it's nice we are doing it this way (for the above reasons).


pure fashion

pun intended?


W.r.t. dynamic scope, I agree that there are some nice use-cases for it. Racket has good support for dynamic scope (it calls dynamically scoped variables "parameters"), and I've found them to be useful, e.g. for handling environment variables of sub-processes.


> There's nothing wrong with object methods (that's 100% pure syntax vs. a function call)

Not really; in OOP a method call like `foo.bar(baz)` sends the `foo` object the message `bar` with argument `baz` (in Kay's current terminology); or, in more 'mainstream' terminology, it looks for a method in the `bar` slot of `foo` and calls it with `baz`.

As far as I'm aware, this is a pretty core concept in OOP: if our code isn't organised along these lines, then we're writing non-OOP code which just-so-happens to use classes/prototypes/methods (in the same way that we can e.g. write OOP code in C which just-so-happens to use procedures/switch/etc.).

(Side caveat: I appreciate that I'm making some assumptions here; one of my pet peeves with OOP is that it's rife with No True Scotsman fallacies about what is "proper" OOP, e.g. see 90% of the content on https://softwareengineering.stackexchange.com )

There is a fundamental asymmetry between the roles of `foo` (object, receiver of message, container of methods) and `baz` (argument, included in message, passed into method).

A function call like `bar(foo, baz)` doesn't have this asymmetry: the order of the arguments is just a convenience, it has no effect on how the `bar` symbol is resolved (in most languages it's via lexical scope, just like every other variable). Swapping them is also trivial, e.g.:

    var flip = function(f) { return function(x, y) { return f(y, x); }; };
    var rab  = flip(bar);

    bar(foo, baz) == rab(baz, foo)
In contrast, I can't think of an OOP alternative to this which isn't messy (e.g. adding a `rab` method to `baz` via monkey-patching).

Of course, the elephant in the room here is CLOS, but that's sufficiently different from most OOP as-practiced that it's better considered separately (e.g. I'd be more inclined to agree that CLOS methods are "100% pure syntax vs a function call")

> Non-syntactic aspects are maybe a more involved discussion. For an example, I personally think traditional OO lends itself very nicely to runtime polymorphism.

I also disagree with this, again due to the artificial asymmetries that OOP introduces (that objects "contain" methods). In particular, as far as I'm aware OOP simply cannot express return-type polymorphism. The asymmetry between arguments and return values is certainly more fundamental than the completely unneeded distinction between receiver and arguments, but it's still very useful to dispatch on the output rather than the input.

The classic example is `toString` being trivial in OOP (a method which takes some object as input and renders it to a string; the implementation dispatches on the given object); whilst `fromString` is hard (a method which takes a string and returns some object parsed from it; OOP can't dispatch the implementation by "looking inside" the object, since we don't have an object until we've finished parsing).


You can do that sort of thing in Kotlin, which is an OOP (well, multi-paradigm) language:

    inline fun <reified T> String.fromString(): T {
        return when {
            T::class == String::class -> this
            T::class == Int::class -> Integer.valueOf(this)
            else -> TODO()
        } as T
    }

    val str: String = "abc".fromString()
    val int: Int = "123".fromString()
You have to be careful with type inference: if there's no way for the compiler to figure out what type you want from the call site, you'll get an error.

You might say this isn't OOP in the strictest possible sense, but extension methods + type inference + reified types gives what you're asking for in a way that's natural for an OOP programmer.


> You have to be careful with type inference: if there's no way for the compiler to figure out what type you want from the call site, you'll get an error.

I'd say that's a feature, not a bug :)

> You might say this isn't OOP in the strictest possible sense

I would say it's not OOP in any sense. Having one function implement all the different behaviours, and pick which one to run by switching on some data (in this case the class tag) is a classic example of being not OOP.

As a bonus, this ignores dynamic dispatch (the only implementation is for `String`) and it's not polymorphic (the same code doesn't work for different types; instead we have different clauses for each type).

This would be salvageable if `String::fromString` only enforced the type constraint, and dispatched to `T::fromString` to do the actual parsing, e.g.

    inline fun <reified T> String.fromString(): T {
        return when {
            T::class == String::class -> this
            else                      -> T::fromString(this)
        } as T
    }
I'm not sure whether that would work or not (I've never written Kotlin), but I still think it's "messy" (monkey-patching methods on to classes, reimplementing dynamic dispatch manually, inspecting classes (via the equality check), going out of our way to prevent infinite loops, etc.)


You can do that via reflection and other tactics, yes. You'd better do it with an interface and a cast, as of course "fromString" may not exist.

I also wouldn't use this pattern but, it is possible, and Kotlin is an OOP language.


I was a big fan of OOP in theory when I learned it in the 90s and I still use it quite often, but in practice, any sizable OOP codebase that I've had to work with is _way_, _way_ more difficult than a non-OOP codebase that just directly solves the problem in a straightforward way.

OOP encourages adding layers of abstraction, indirection, and generic stuff that sounds great if you're trying to create some kind of generic underlying framework for everything. But it makes a huge mess when you're just trying to solve a specific problem, fix a bug, or make one tiny change. Now you have to debug trace through 65 layers of unrelated generic abstract stuff to try to figure out where/why something's going wrong.

I suspect one of the reasons that some new languages don't give you that gun to shoot yourself in the foot with is because the people that made them had worked on large OOP codebases and suffered from similar problems. They're making languages to more simply solve problems in a straightforward and direct manner.

Of course, you could do that with OOP. But people don't. Given OOP (and that book of patterns that they found), they get fancy and make it way more over-complicated than it needs to be. The result is, ironically, unmaintainable software that takes way longer to work on, with changes in one part of the system causing unexpected bugs in distant unrelated places - exactly the sorts of problems that OOP was supposed to address. I don't think that's an inherent problem of OOP, but just our human nature. We can screw up anything.

So new systems with more constraints can help reduce the ways that we can screw things up. And if you really need to get stuff done, the more constraints you can place on it, the better. Static typing, non-OOP programming, limited-palette artwork - it all comes down to fewer ways for you to screw it up, and easier fixes if you do.


> OOP encourages adding layers of abstraction, indirection, and generic stuff [...]

Generics/templates and indirection are not OOP. Procedural style encourages them as well. You can see that in C++ itself.


C++ is OOP, though.


C++ is multi-paradigm. While people can write OOP code in C++, they can also write header-only libraries, which is generic programming. In C++ generic programming, inheritance is merely a language feature to get what you want, not the main method of abstraction.


You can do that in C, though. C++ was originally "C with Classes", so OOP was the whole point of C++ existing in the first place.


> C++ was originally "C with Classes", so OOP was the whole point of C++ existing in the first place

This is like describing a middle aged man based on what he did in middle school. Things evolve. C++ has. Almost to a fault where one can legitimately call it a mutation.


I understand what you are saying, but really, programmers don't change their ways when new features get added.

I always remark that programming practices change when programmers retire, not when features get added. Most people learn these things through existing code bases, and these habits last ages into the future.

This is why, most of the time, you just start using a totally new language; that way you buy into a totally new set of programmers, practices and community.


> Programmers don't change their ways when new features get added.

The good ones do.

> This is why, most of the time, you just start using a totally new language; that way you buy into a totally new set of programmers, practices and community.

What happens when that new language reaches a stage where it needs new features?

I think new languages are unnecessary unless they comprehensively solve the problems of the existing ones, and don't create new ones of their own. I have always felt that creating brand new languages is more of a personal-taste-driven rebellion against the language one uses. When they find that it's too cumbersome for their taste, or the clique in the committee refuses reforms, etc., those who can will go ahead and create, and some evangelize successfully (I feel this is the case because I have created a few half-baked languages over the years to challenge C++ lol).


No. OOP is just one aspect, but the most popular one.


I wonder how much of this mess is because people get introduced to these concepts (generics, etc.) through the wrong languages.

There is a very similar problem with concepts like recursion, and even algorithms, data structures, and problem solving in general. You never get to learn this stuff well because you get drowned in syntax juggling and type headaches, and in the absence of good facilities for recursion, first-class functions, controlled mutability, closures, pattern matching, and data handling.

I find learning these things in ML or Lisp languages simplifies the problem to a great extent. You basically learn how to solve problems and then how to use the right tools to solve them. Instead, people choose tools that send them off solving infinite sub-problems before they can solve the real problem. Most people tune out, and rightly so. Efficiency comes after correctness in any approach, and you should never optimize for speed to begin with.

Most people getting introduced to programming in Java/C++ are generally digging through layers and layers of code trying to do what should be done simply, in a straightforward way. This is often made more complicated by the addition of frameworks and opinionated ways of forcing people to do things in a manner that requires them to go read and work through an entire history of software literature. All the while you could just write code to solve the problem.

Design-pattern abuse is very common in the OO world. And people for some reason like to write a lot of code, sometimes for something that should take just a few lines.


I am self-taught in C++. I learnt it when I realized my BE FORTRAN wasn't going to enable creating GUI simulations. I did have a few hiccups like any beginner, but soon it was alright, so I have no idea what it is like to be formally taught in class.

As for Lisp and ML, being Indian, I had no access to these, even in CS section of the library, as you might understand. Many years ago I tried learning Scheme by/after watching the lecture series. I even have the book SICP. It is fun and elegant, refreshingly so, no doubt, but only till a point. I found that once we venture into the necessity for conditionals & data mutation, it's unbearably cumbersome - I can deal with spaghetti C code but not this - maybe I am conditioned by exclusive C++ usage. I tend to think those who stick to Scheme like languages and solve real world problems do it for bravado and cry when no one is looking lol.

As for design patterns, I don't see them being exclusive to OO. They're just a bunch of ideas for building software. One would have them for assembler as well if it were used for mass production (I wonder how it was in the early days when it was exclusively assembler). They're certainly poorly used and abused, agreed.


>>As for Lisp and ML, being Indian, I had no access to these, even in CS section of the library, as you might understand.

I understand being in the same situation myself.

>>I tend to think those who stick to Scheme like languages and solve real world problems do it for bravado and cry when no one is looking lol.

That's mostly because of the lack of libraries and the amount of reading one has to do to learn a totally new way (ironically, the original way) of doing things. Data structures are linked lists (lists), stacks (lists with utility functions), queues (lists with utility functions), trees (lists of lists), heaps (lists of lists), graphs (interconnected lists). Basically, dealing with data structures is list processing, hence the name Lisp. And since building lists with recursion gets easy thanks to the amazing native facilities that come with Lisp, these things just get a whole lot easier. A lot of people will tell you they had a moment of enlightenment when they learned recursion in Lisp. It just feels like the concept was always there and you discovered it neatly while working through it.

SML and its descendants are the same; additionally, you get the same enlightenment with type systems.

When you read through Lisp and then work with DS/algorithms, you begin to feel that the entirety of DS/algorithm work was invented, grown, and perfected in a totally different set of tools and thinking philosophies, and then artificially shoehorned into C-family languages.

In the modern context Clojure and F# come across as nice replacements. Tons of libraries and good documentation/books are available for help.

OTOH a lot of great work is done in modern incarnations of Lisp and ML, including mission-critical large software. This is because of strong typing in languages like F#, and code compression using macros in Lisp. Clojure is also designed to solve a lot of big problems that affect software development at large.

One of the things I realized while working with Python after using SML: when using Python I was running a kind of buggy type-inference engine in my brain, then writing unit test cases to validate it. No human should do things that can be automated with a computer. That includes programming itself.

Someday people will look at type systems, macros, and recursion the same way we look at garbage collection: if a computer can do it, and can do it better than you, it is criminal to spend human effort doing it.

This is really the digging-with-shovels equivalent of writing code. Let the compiler work for you.


Sounds promising. Will have to check it out some time.


> As for design patterns, I don't see them being exclusive to OO. They're just a bunch of ideas for building software.

One could in fact think of monads as a design pattern for FP.


Depending on who you ask, some would say that C++ is not OOP because its encapsulation model is broken unless you use idioms such as PIMPL.


I think another part of this is if you like what might be called classical OOP you don't need to make a new language.

Just use Java, or C#, or any of the dozen other mature, widely used languages that support it.


Generic programming is not the same as OOP. Both Rust and Julia have extensive generic programming capabilities, arguably more powerful although less complicated than template meta-programming.


>65 layers of unrelated generic abstract stuff

Do you know what a monad is?

Layers of abstraction are mostly based on polymorphism, and Rust and Go support polymorphism. In any case, layers of abstraction are mandatory under modern coding practices in all languages, unless you write code as a single function with gotos and global variables.


And it turns out that defining interfaces is the compelling "divide and conquer" abstraction needed to tame largish software projects. OOP can do that too but interfaces are sufficient. "As simple as possible but not simpler." was one guy's general theory of things.


OOP was a huge trend in the 90s and I think a lot of devs have learned from experience the ways in which it kinda sucks..

- It doesn’t do great as the code gets older or more complex. Google “fragile base class problem”.

- Inheritance doesn’t do a good job of modeling most real-world problems. Most situations don’t map into a simple “class Dog extends Animal” kind of hierarchy. Steve Yegge’s “the kingdom of nouns” is a great rant on this topic.

- OOP actually requires expert-level knowledge to do right. It seems like it’s beginner-friendly but that’s kind of deceptive. If you’re using inheritance then you should understand Liskov substitution, otherwise it’s not going to work well. And if there’s generics involved then you might need to understand covariance vs contravariance, which is also a beginner-hostile expectation.

Imo, out of the newer languages, I really like Golang’s take. Interface-heavy, no inheritance, and some nice sugar around composition. Dunno if I agree with their lack of generics but I understand the choice, generics create a lot of complexity.


OOP is great as long as your inheritance stack is at most 2 classes deep: abstract base classes for different backend implementations, or for holding heterogeneous objects like a syntax tree or, idk, inodes in the kernel, or even just one actual implementation and additional implementations for tests. At which point you may as well call the base class an interface.

Can't think of any sensible 3+ classes deep hierarchy that I've seen in a sensible project. I've seen some that are not sensible.
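Concretely, the 2-deep (really interface-plus-implementations) shape described above looks something like this (Kotlin, hypothetical names):

    interface Storage {
        fun put(key: String, value: ByteArray)
        fun get(key: String): ByteArray?
    }

    // The one actual implementation...
    class DiskStorage(private val dir: java.io.File) : Storage {
        override fun put(key: String, value: ByteArray) =
            java.io.File(dir, key).writeBytes(value)
        override fun get(key: String) =
            java.io.File(dir, key).takeIf { it.exists() }?.readBytes()
    }

    // ...and the additional implementation for tests.
    class InMemoryStorage : Storage {
        private val map = mutableMapOf<String, ByteArray>()
        override fun put(key: String, value: ByteArray) { map[key] = value }
        override fun get(key: String) = map[key]
    }

No inheritance of implementation anywhere; the "hierarchy" is one interface wide and one level deep.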


Most modern code bases I've seen only use interfaces now. Only in the odd exception do they use inheritance.


Fragile base classes only affect C++. That's not inherent to OOP. Java solved it completely.

Inheritance gets ragged on a lot these days, but frankly I see it all the time in most mature, production codebases and it's usually used properly and in a tasteful manner. IMO the idea that inheritance is evil is just people who want to make a new programming language looking for a philosophical edge - just like how a bunch of language designers decided exceptions suck because "you can't see the control flow" (e.g. in Go) and now years later have basically admitted their error handling sucks, so they need to add something very exception-like back in. Actually, exceptions never sucked, but Go's designers needed some sort of fresh idea to justify their project.


> OOP was a huge trend in the 90s and I think a lot of devs have learned from experience the ways in which it kinda sucks..

Was that the paradigm or the language implementation of the paradigm? The big two languages in the '90s were Java and C++ (later C#?).

If everyone had been writing in Lisp/Smalltalk (or even Tcl), would things have turned out differently?


Depending on who you ask, "trend" could be replaced with "progression".

I'm with you in that I never really found the OOP style to mesh well with my brain. I always felt like I had to fight it and dealing with massive chains of class inheritance can make it really hard to reason about and change code. I've spent years building web apps with Python and Ruby too.

Having only spent a short while with Elixir, I'm finding semi-complicated topics and patterns to be a lot easier to digest even with no formal functional programming background.

Like just today I watched a talk where a guy live coded a web framework DSL in about 30 minutes[0]. One with a beautiful API for controllers and a router. It was a total head explode moment for me (but not in a bad way, it was like seeing the light).

I'm still a newbie with functional programming, but so far it feels much more like a WYSIWYG style of programming where your domain is front and center, whereas with OOP it feels like you're bombarded with purposely confusing legal documents and fine print, with a small bit of your domain thrown into the mix.

[0]: https://www.youtube.com/watch?v=GRXc-jKRESA


I tend to agree. Due to its sheer popularity, I did some work with OO. In some languages, like Ruby, it has a certain charm.

But in its traditional Gang of Four incarnation (which I encountered years later as a newbie), it felt like an awful lot of ceremony and unneeded complexity to get anything done.


I don't have a huge social circle, but among the professional devs in it I am the sole person for whom the OOP style does not mesh well with their brain.

I've known most of these folks since the '80s, and something that we've discussed a few times is wondering if it's when we got serious about computers, what platform (e.g. C64 or Apple II), continuous employment in the field (I have a history of going off to do other things; most of my dev friends have always been devs), preferred platform today (mostly WinTel vs. Linux), or the niche we work in (I mostly work in embedded stuff; lots of my friends work with stuff related to pretty big systems, either big back-end stuff or user-facing I/O).

Anyway, I find the recent popularity of alternative styles to be a welcome development, especially the idea of having a "Functional Core with an Imperative Shell".
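For anyone who hasn't met the idea, a toy Kotlin sketch (my own simplification, not anyone's canonical formulation):

    // Functional core: pure functions over immutable values, trivially testable.
    data class Account(val balance: Long)

    fun deposit(acc: Account, amount: Long): Account =
        acc.copy(balance = acc.balance + amount)

    // Imperative shell: all the I/O and mutation is pushed to the edge.
    fun main() {
        var account = Account(balance = 0)
        print("amount: ")
        val amount = readLine()?.toLongOrNull() ?: return
        account = deposit(account, amount)
        println("new balance: ${account.balance}")
    }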


A lot of people have been working for decades and can’t imagine the idea of not having to deal with entire classes of bugs and defects. It causes a lot of strain at some companies I’ve worked at, where people who’ve experienced “better” have a very hard time going back. I can’t say I think they’re wrong.


I would think more than 2 layers of inheritance is a major code smell in any language, let alone Python. There is surely a way that inheritance chain could be broken down to make more sense.


Many systems people want to build with computers cannot be easily conceptualised as graphs of largely self-contained objects, interacting through swapping messages. These problems instead revolve around complex data processing, and are more easily conceptualised as data flowing through a network of functions, and into and out of containers.

It is possible to build these systems using OO, by reifying the network of functions and the data into objects. But all you are really doing is reinventing functional programming with a layer of OO cruft on top of it, and it is always going to be easier to build the equivalent system in a language that lets the functions and the data structures be unencumbered.

For programming tasks that actually are best conceptualised as objects, OO works quite well. These usually involve the provisioning of some largely static virtual environment, such as a UI, game world, or inversion of control container. Here, objects are long-lived, genuinely independent, and interact in well-defined ways via actions and events, usually swapping only small pieces of state.

My guess is that the advent of GUIs and sophisticated games in the 80s and 90s pushed programmers to be more interested in this latter type of problem, and this was reflected in the languages that evolved. Then the advent of the internet and machine learning in the 2000s and 2010s revived interest in data processing and data flow, and so language design began to shift back.

I suspect in 100 years, if humans are still doing any programming, all popular languages will be mixed-paradigm, and the question of OO vs functional will simply be a blend decided by needs of the problem at hand, rather than the kind of dogmatic arguments of this era, which will be viewed through a historical lens as rather quaint and silly.


> I suspect in 100 years, if humans are still doing any programming, all popular languages will be mixed-paradigm

What makes you think we'll need to wait that long? C#, C++, JavaScript, Java, Swift, Kotlin, Dart, Scala, Python, and Ruby all have objects, methods, polymorphism, first-class functions, lambdas, closures, and higher-order functions.


The current trend in computer game architecture is also away from OOP/inheritance and to Entity-Component-System designs where you have entities without behavior (often just an Id) that are composed of components (pure data structures) that are operated on by systems specialized to single or small sets of components (functions).
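In miniature, it looks something like this (a toy Kotlin sketch, not any real engine's API):

    // Entities are just ids, components are plain data, systems are functions.
    typealias Entity = Int

    data class Position(var x: Float, var y: Float)
    data class Velocity(var dx: Float, var dy: Float)

    class World {
        val positions = mutableMapOf<Entity, Position>()
        val velocities = mutableMapOf<Entity, Velocity>()
    }

    // A system operates on every entity that has the components it cares about.
    fun movementSystem(world: World, dt: Float) {
        for ((e, v) in world.velocities) {
            world.positions[e]?.let { p ->
                p.x += v.dx * dt
                p.y += v.dy * dt
            }
        }
    }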


> For programming tasks that actually are best conceptualised as objects, OO works quite well. These usually involve the provisioning of some largely static virtual environment, such as a UI, game world, or inversion of control container.

Actually, data-oriented design and entity component systems work much better for games. You don't have hidden state, and data flows much more nicely from system to system. ECS is a good solution for implementing GUIs, too.


If the messages being swapped contain data, I find that FP (functional programming), if implemented with type checking, can involve as much boilerplate as OOP, if not more, whereas in OOP object methods implicitly access the data (object attributes).

This may sometimes lead to code that is difficult to reason about... but if you have a few objects and many functions, OOP can be less verbose than FP.

I say this as a stronger supporter of FP than OOP.


I really like this take, especially the idea that OOP is suited to specific use cases and not applicable to solving every problem at large. Data processing is a great example.


This answer is very, very good.


This is such an awesome answer.


Inheritance makes some sense, but composition is much, much easier to understand and work with. Inheritance has well documented problems that composition does not. It’s not so much about being object oriented or not, at least not directly.

I could elaborate, but others have said it better anyways.

https://en.m.wikipedia.org/wiki/Composition_over_inheritance


but composition is much, much easier to understand and work with.

The article you linked mentions IMHO the biggest disadvantage:

One common drawback of using composition instead of inheritance is that methods being provided by individual components may have to be implemented in the derived type, even if they are only forwarding methods

Code whose only purpose is to "appease the design" and otherwise does absolutely nothing of value is a strong-enough negative reason. It's pure bloat, overhead that gets in the way of both programmers trying to understand/debug and machines trying to execute.

Too much inheritance can lead to a "where is the method" problem, but I think that's still better than the alternative of dozens upon dozens of lines of otherwise useless code, because the former at least does not increase the amount of code that needs to be written/debugged/maintained.


That isn't an inherent issue of composition though. You could say

   {...students(ajaxDb), ...teachers(ajaxDb)}
to create an object that can handle both students and teachers. Or you could say

    class StudentsAndTeachers extends Students implements IStudents, ITeachers {
        private _teachers: Teachers;
        public constructor(db: Database, teachers: Teachers) {
            super(db);
            this._teachers = teachers;
        }
        public getTeacher(teacherId: TeacherId) {
            return this._teachers.getTeacher(teacherId);
        }
    }

    new StudentsAndTeachers(ajaxDb, new Teachers(ajaxDb));
It seems clear that the features of the language define what is verbose and what isn't verbose. Even the inheritance in the language that encourages inheritance is more verbose than the composition in the neutral language.

(More likely, you wouldn't write this, but every example that is colloquial in inheritance is non-colloquial in composition. The composition oriented solution can be used cleanly with strong guarantees provided to its users; whereas all inheritance oriented solutions are so tacky and hard to use that you will demand a framework with dependency injection which half your team won't be able to understand and will have to treat as magical incantations.)


Composition is entirely capable of creating an ad-hoc informally specified bug-ridden slow implementation of inheritance.


That relates to my comment in what way?


Pretty obviously, ridiculous_fish is claiming that that's what you're doing.

Note: I'm not saying that ridiculous_fish is right. But it's pretty obvious that that's the claim.


How often do you need to write methods just to forward to another method that _stay_ that way, and don’t evolve more logic later on? I avoid inheritance like the plague, but I feel like I don’t really write forwarding methods very often.


A few design patterns work exactly like that: forwarding to another method. Evolving that API afterwards is not a lot of hassle. I really fail to see the argument here.


This, 100 times. Program for enough years and you'll find composition such a breath of fresh air. But like any pattern/design, the proof is in the pudding; as in, crap code can be written in any shape or format.


I'm curious why you find it a breath of fresh air? The Wikipedia link in the post you're responding to specifically states that composition over inheritance is an OOP principle. If one needs to abandon OOP to make use of composition, was OOP actually what was being done, or was it instead an exercise in creating a class-based taxonomy? (a lot of people seem to fall into this trap)


It's the way OOP was used, from what I saw, 10-20 years ago. It was all inheritance, from uni courses to the patterns used, etc.

I only fell into composition when I got into video game development. We saw it a bit more with Silverlight, but now it's such a clear movement (composition over inheritance) that it's in my resume's motto.


It's the way many OOP languages went, but it's not intrinsic to OOP.

I recall encountering Emerald in the late 1980s. Here's its 1989 paper on composition over inheritance (code reuse wise): http://www.emeraldprogramminglanguage.org/Raj_ComputerJourna...


Composition should be favored over inheritance whenever feasible. It is not always feasible, so many uses for inheritance remain.

What you can do is make composition easier. Kotlin does this with its delegation system: https://kotlinlang.org/docs/reference/delegation.html
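With `by`, the compiler generates the forwarding methods for you, and you override only what you want to change, e.g. (a toy example):

    interface Printer {
        fun show(msg: String)
        fun flush()
    }

    class ConsolePrinter : Printer {
        override fun show(msg: String) = println(msg)
        override fun flush() = System.out.flush()
    }

    // flush() is forwarded to `inner` automatically; only show() is overridden.
    class TimestampedPrinter(private val inner: Printer) : Printer by inner {
        override fun show(msg: String) = inner.show("${System.currentTimeMillis()} $msg")
    }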


Of course, it is possible to totally eschew inheritance as a language feature. Go does it nicely and I can’t think of many circumstances where modeling my code has suffered as a result, usually the opposite.

That said I don’t claim inheritance to be useless by any means, and I would not use Go for every kind of software.


I didn't claim it is impossible either :)

I kinda agree. Still, for an imperative language I think it's a tool I'd like to have in my toolbox, or at least it needs a compelling alternative, which I don't think Go has (1). Of course there are many cases in which you can do perfectly without.

(1) I know it has composition through struct embedding, but not real inheritance: a method delegated to the embedded struct can't call back a method on the embedder automatically. If you do know of some other means in which Go can replace this kind of inheritance, I'd be happy to hear about it.


Client-side moats and rise of the Web.

A big idea of OOP is to allow components to evolve independently. Java ensures that today's code will run on next year's JVMs and class libraries. Your users upgrade their OS and your app keeps running, and now supports dark mode. Rad, right?

But new languages like Rust/Go/etc have bincompat as a non-goal. A minor version bump means fix the compiler errors, recompile the universe, giant static binary, GtG. This is sweet for server deployments.

Meanwhile client-side has shifted in the other direction: it's harder than ever to launch a new programming language that runs on your user's computers. The major OS vendors now control the programming stack: ObjC/Swift, Java/Kotlin, etc. Supporting an alternative stack like React Native or Flutter requires enterprise-level investment. OS vendor tooling and APIs form an immense moat.

JavaScript is the exception that proves the rule. JS has its twisted take on objects, but supports enough dynamism for reflection, polyfills, etc. to permit backwards and forwards compatibility. This is where OOP shines! (Imagine replacing JS with Rust - it's absurd) But JavaScript is so hard to compete with on its turf - the web - that it sucks all of the oxygen out of the room. (WASM makes the problem worse, not better: there's zero plausible bincompat story for WASM APIs).

OOP is about enabling independent evolution of components: the OS, app, plugins, etc. all in a conversation. The modern computing landscape is about siloed software stacks on a client, and statically-linked megaliths on a server. The strengths of OOP can't be engaged now.


>Java ensures that today's code would run on next year's JVMs and class libraries

>But new languages like Rust/Go/etc have bincompat as a non-goal

Sorry, but this has nothing to do with OOP.


I agree with your thoughts on how everybody neglects the desktop, but an independent evolution of components can also be achieved without OOP. Also OOP doesn't guarantee future compatibility - it can make it easier, if implemented with that intention in mind.

However, it looks like ensuring backward compatibility is an enemy now: users are forced to upgrade, sometimes every day, so obviously in the eyes of those corporations backward compatibility can be completely ignored.


Maybe because traditional OOP is already well represented by popular languages with large ecosystems? Creators of new languages want to do something new.

There is Elixir/Erlang, which many consider to stick more closely to the original idea of OOP.


Agreed. There is definitely a problem where most people think OOP = Java. But if you think a bit more abstractly, you find that Erlang is a much better OOP language, while being famous for being so very functional. Also, you get bits like http://wiki.c2.com/?ClosuresAndObjectsAreEquivalent
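The equivalence fits in a toy Kotlin snippet: the captured variable is the instance state, and the returned functions are its methods.

    // A counter "object" built from a closure: `count` is the private state,
    // and the two returned functions are its only methods.
    fun makeCounter(): Pair<() -> Int, () -> Unit> {
        var count = 0
        val get: () -> Int = { count }
        val increment: () -> Unit = { count++ }
        return get to increment
    }

    fun demo() {
        val (get, increment) = makeCounter()
        increment()
        increment()
        println(get()) // prints 2
    }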


I think it's probably too early to say there is a trend. There are still traditional OOP languages being created, such as Kotlin and Crystal, and non-OOP languages have always been created, including the ones that inspired this new wave (ML/Haskell for Rust, Common Lisp for Julia). And even if it's a trend, it might not have to do with issues with the concept of OOP, but with the fact that language creators and early adopters are usually people who are not satisfied with the current established languages (which happen to be mostly traditional OOP), so they naturally favor alternative models.

All that said, I think it's just that OOP was always broader than the most-used languages made it seem. Message-passing OOP (like Smalltalk, Ruby, and arguably Erlang/Elixir processes) is not exactly the same as the more class-structured OOP (Java, C++, Python), and even before many of those languages there was already the Common Lisp Object System, which extended single-dispatch OOP to multiple dispatch (and, as with Julia, is object.func(args) really different enough from func(object, args) to not be OOP?). In the same way, interfaces/traits/abstract classes/composition/dataclasses are also present in OOP languages to handle cases where the model is not a perfect hierarchy of self-managing entities, so a language favoring those over inheritance in general is just being more opinionated towards a strategy that was already possible in traditional OOP.


If you'd like to hear the case against OOP in modern programming, the Rich Hickey talks are a pretty good place to start:

* The value of values

* Simple made easy

* Are we there yet?

I'm on mobile, but you'll find these easily on the google.



Absolutely watch the talk "Are we there yet", it's my favorite of all time.

In short OO gets time fundamentally wrong, is unable to represent a discrete succession of values, and entangles state with operation.

OO has been a ~35-year wrong turn for the software development industry.


OOP contains in-place mutable state because mutating state is extremely important in basically all computer systems.

FP has become fashionable in recent years and now it's gone to people's heads. But some difficult facts about computer science remain:

- CPUs are much faster at reading and writing to recently used locations. Mutating in place is fast compared to constantly copying things.

- Many of the most efficient data structures and algorithms require mutable state. There are entire areas of computer science where you cannot implement them efficiently without mutable state, like hash tables.

- Constantly copying immutable objects places enormous load on the GC. This is getting better with time (there are open source ultra-low-pause GCs now), but, Rich Hickey just sort of blows this off with a comment that the "GC will clean up the no longer referenced past". Sure it will: at a price.

- He repeats the whole canard about FP being the only way to exploit parallel programming. People have been claiming this for decades and it's not true. The biggest parallel computations on the planet not so long ago were MapReduce jobs at Google: written in C++. Yes, the over-arching framework was (vaguely) FP inspired. But no actual FP languages appeared anywhere, the content of the maps and reductions were fully imperative and the MapReduce API was itself OO C++!

Also note that Java has a rather advanced parallel streams and fork/join framework for doing data parallel computation. I never once saw it used in many years of reading Java. SIMD is a much more useful technique but nearly all SIMD code is written in C++, or the result of Java auto-vectorisation. FP programming, again, doesn't have any edge here.


For anybody looking to broaden their programming mind, I highly recommend those talks.

They are given by the author of Clojure, but Clojure is hardly a prerequisite to many of his talks.


A new language exists in part to discover a new place in the space of languages that isn't already well represented. OOP has been done since Smalltalk-80 and many of the class-based languages that followed. Deep class hierarchies worked well for developing widget trees and not much else. Now we're looking for a different trade-off, since the constraints of hardware are different than before: immutability performs well on multicore, and we have enough memory and advanced GC that we don't need to reuse mutable objects to conserve memory. Working in a mostly functional style, with a few key places where references to large structures are updated, is much clearer and more debuggable.

Many of the newer languages are multi-paradigm, trying to find a sweet spot that either takes the best of each and adds something new, or tries to capture a large audience with features.


Imho, OOP is part of a greater puzzle.

There are domains where OOP fits perfectly, there are domains where functional programming fits perfectly, there are even domains where imperative programming fits perfectly.

We're constantly searching for the perfect one-size-fits-all solution, which we haven't found yet. From my POV this is why there are competing standards, which all have their merits but at first glance exclude each other.


Traditional OOP is a basket and sometimes conflation of multiple powerful concepts, which can be very convenient, but don't always fit the problem well. In practice, if the only tool you have is a spinning electric fan of hammers, your problem can get an awkward and painful shoehorning.

Back when it seemed most people were doing Java, we expected any new language to be OO. At the time, when a new language design chose not to provide traditional OO... that could be for elegance of the language design, and (by accident or design) also a nudge to programmers, to learn how to use other concepts.

And, with some languages, the language is powerful enough that you can layer a reasonable implementation of a traditional object system atop their primitives, if you want to.

FWIW, Racket (a Scheme descendant) has always had a traditional object system library (ordinary class-instance, plus mixins), which was used for its cross-platform GUI library and IDE application framework, but is not often used for other things. Racket's somewhat simpler `struct`, however, is used heavily, for all sorts of things, as are the traditional basic Scheme/Lisp types. Scheme/Racket procedures are also used heavily, including to do things that you'd use objects for in some other languages. And Scheme (and especially Racket) also gives you very powerful tools for syntax extension, which, among its uses, can do things elegantly that would be pretty messy or nonviable to do with traditional OOP classes. You can also roll your own object library in Scheme/Racket -- I once quickly whipped up a simple prototype-delegation model as an extension to portable Scheme, as an exercise, and this is within the abilities of any fairly new Scheme programmer.

(I'm not disrespecting OO. I'm a long-time OO person, in several languages, including having been a commercial developer of fancy OO developer tools, and tend to architect (at least) the data of systems with OO or entity-relationship. But, programming-wise lately, I've mostly been working with what used to be considered non-OO "paradigms".)


A big change in my life was reading the Tao of Objects -- https://www.amazon.com/Tao-Objects-Gary-Entsminger/dp/013882... along with the great Coad-Yourdon OOA/OOD books -- example: https://www.amazon.com/Object-Oriented-Analysis-Yourdon-Comp...

The premise was great. We reason about things as real-world objects; if we organize our code the same way, we can reason about our code.

Twenty-plus years later, in practice, however, I'm given a graphics object with a DisplayText method. It has three parameters, two of which are optional. If I call it with the text I want displayed, it is extremely rare that what I wanted to happen actually happens.

It gets much worse from this simple example, with versioning, monkey-patching, and overloading (ouch!). Add in multiple languages using the same codebase? Mutable code?

The maintenance programmer is left with a general concept. Under that concept there is code. Whether that concept maps to the concept the users asked about, or what changes in state that code makes? Nobody knows.

It's worse than bad. The incoming programmer is given concepts he thinks he can reason about, but which rarely match what's actually happening.

To combat that, the OO-mutable guys have gone to TDD. Test-first, then code.

That's a survival skill in modern programming, but all it does is change the definition of "What is a foo?" from a free-wheeling conversation to a concrete executable set of tests. And that is assuming you use it everywhere.

OO was a big-picture, conceptually-simple way of organizing code for big projects. We had a bucket for everything.

We ended up with hundreds of buckets that were confusing and we were never sure what belonged where, or if we touched one thing what other things might happen.

Functions, however, remained very simple. No matter what the function does, is it something you want or not? As long as you keep it simple and immutable, over time you keep building up and honing a reusable set of functions: tokens describing important things you want the system to do. As it turns out, quite surprisingly to me and others, reasoning and simplifying around what you want the system to do is much more productive and easier to understand than reasoning and simplifying around what you want the system to be.


The author of the article below lists the strengths of Object Oriented Programming:

1. Encapsulation

2. Polymorphism

3. Inheritance

4. Abstraction

5. Code re-use

6. Design Benefits

7. Software Maintenance

8. Single responsibility principle

9. Open/closed principle

10. Interface segregation principle

11. Dependency inversion principle

12. Static type checking

He then goes on to debunk them, showing alternatives from functional programming.

http://www.smashcompany.com/technology/object-oriented-progr...


IMO they just expose more of how OOP really works, giving you flexibility.

Take Rust, just because I like its pragmatic approach. They represent objects as structs with functions attached that have access to the struct data. In C++, Java, etc., this is roughly how objects actually work underneath.

They eliminate inheritance, replacing it with interfaces. Exposing the objects for what they really are, structs with functions attached, makes this strategy easier.

Traditional OOP, especially with multiple inheritance, tends to encourage nested objects (structs) that become hard to reason about.

Another related innovation has been structural typing. TypeScript is great with this. Essentially, if you have two "objects" with the same fields, you can assign them to each other freely. TypeScript doesn't care if they actually inherit from each other. If the interface signatures are compatible (matching fields, matching types), they can be freely used in each other's place. This is great for constructing anonymous objects to pass off without all the cruft, basically just JSON fields.

TLDR: newer languages decided that if two objects look like ducks they are both ducks, even if you decide to call one something else. Because who really cares what you name your ducks or which ducks they inherit from. This breaks from traditional OOP by loosening the rules of inheritance, but so far it's been a boon for productivity (for me at least).
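
A minimal sketch of that in TypeScript (made-up names):

    // Two unrelated declarations with the same shape...
    interface User   { id: number; name: string }
    interface Vendor { id: number; name: string }

    const u: User = { id: 1, name: "Ada" };
    // ...are freely interchangeable: only the shape is checked.
    const v: Vendor = u; // OK, no inheritance relationship required

    // Anonymous object literals work anywhere the shape matches.
    function greet(who: { name: string }) { console.log("hi " + who.name); }
    greet({ name: "Grace" });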


> They eliminate inheritance, replacing it with interfaces. Exposing the objects for what they really are, structs with functions attached, makes this strategy easier.

I don't see how this is not already achievable in Java or C#. No one is forcing you to use inheritance. And when you really need it, it's there for you to use instead of jumping through hoops.


That's basically true. Languages that use interfaces with structural typing are significantly easier to work with though. You can "implement" an interface implicitly by just having matching fields.


This means you can't use interfaces as tags, which is a very important feature (e.g. see how it's used in Rust). It also means that you have several different types implementing your interface "by coincidence", making it difficult to use an IDE to find out the types of interest, not to mention what sorts of bugs might result because of this.

There are better solutions like what Kotlin and Scala use (and potentially C# in a future version).
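
The "by coincidence" problem is easy to demonstrate in TypeScript (illustrative names):

    // A marker interface has no members, so under structural typing
    // *everything* satisfies it -- the tag carries no information.
    interface Serializable {}

    function save(s: Serializable) { /* ... */ }
    save(42);            // compiles: a number is structurally a Serializable
    save({ x: "oops" }); // compiles too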


JS solves the tagging issue using "symbols". They're extremely weird at first look but they're basically around for that reason.
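
Roughly, a unique symbol acts as a nominal tag that no unrelated type can match by accident; a minimal TypeScript sketch (hypothetical names):

    // Only values explicitly carrying the symbol-keyed property qualify,
    // which restores "interfaces as tags" in a structural type system.
    const tag = Symbol("serializable");

    interface Tagged {
      [tag]: true;
    }

    const doc = { [tag]: true as const, body: "hello" };

    function save(x: Tagged) { /* ... */ }
    save(doc);                 // OK: explicitly tagged
    // save({ body: "hello" }); // error: missing the symbol-keyed tag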


I disagree with it being OO when it doesn’t have inheritance. No classes or class hierarchies, no passing things along to children. Seems very different from what those who invented OO had in mind.

Structs with functions attached is easy, and not entirely uncommon in procedural and even functional languages.


Interfaces are inheritance, just without code. But interfaces definitely serve at least one role of an abstract base class.


Maybe. But then even FP languages are considered OOP, as most of those languages have records and interfaces.


Yes I agree. Once you have interfaces with default implementations especially, you have non-nested OOP


> Because who really cares what you name your ducks or which ducks they inherit from.

Who cares if it's an employee or a gun, as long as I can fire it.


Underrated comment


I'm not sure I agree with your thesis. For instance, JavaScript has recently added an OOP class syntax, and TypeScript extends the result with strict types. (JS was of course in some sense object oriented beforehand, but it wasn't recognisable to most working OO programmers as OO.) Moreover, I think you need a deliberately hamstrung language to get a genuinely OOP language - C++ and JavaScript are both fully capable of writing OO code, but most people shy away from calling them OO languages because they're not limited. A more interesting question is whether new programs are object oriented, and I guess the answer here remains, broadly, yes. This is why JavaScript got class syntax.

But I will assume you're a better observer than me, and make the following claims:

I think there are two reasons: Object-oriented style inheritance is unsound (inheritance is not subtyping), so academics don't like it. Moreover, classical OO is not composable or extensible - unless you write your own primitives in every application and end up with Java-like verbosity. Therefore, research in new features tends not to be object oriented. Therefore, new languages tend to adopt non-OOP styles.

The other reason is probably that there's enough OOP languages. The effort it takes to create a new OOP language exceeds the effort it takes to shoehorn your OOP algorithm into an existing OOP language. Therefore, there's less motivation from the engineering side.

On the other hand, there's still lots of scope for better functional style languages; that marketplace isn't exhausted yet. Rust, for instance, merges a lot of academic techniques trialled on functional languages with a systems programming architecture.


> Object-oriented style inheritance is unsound (inheritance is not subtyping)

Most modern OO type systems rule out unsound inheritance. (Java, Scala, etc.)


I'm not sure. As someone who's mostly self-taught, I don't really understand the big deal either way. I mostly learned the ideas behind OOP while trying to implement my own classes using metatables in Lua and trying to manually deal with problems that languages with built-in classes deal with for you.

I like the idea of encapsulating functions with data and like the ability to inherit to subclasses. I don't really like languages like Java that enforce OOP at the language level. I prefer languages like D, Python or C++ that offer it as a tool but don't restrict you to writing strictly OOP code. I find D's class implementation to be the simplest to understand, with classes being reference types with inheritance and structs being value types lacking inheritance. It makes it clear to me when a type should be a class and when it should be a struct or some other data type.


Because OOP is a good idea that's been ridden into the ground by dogmatic implementations.

OOP is fine, and it makes sense for lots of things. But going crazy with OOP makes for a mess, and a lot of software and languages and tools have chosen "let's do OOP" over "let's do good software". So non OOP stuff is just the natural backlash to OOP gone wild.


I think there are many useful ideas in OOP languages, and I still struggle with purely functional languages, so I think/hope we'll see increasingly mixed-model languages.

I'd be interested to see a language like C# but with:

+ Immutable by default.

+ 'Free' functions, so they don't have to be in classes.

+ No inheritance.

+ Some sort of distinction, and I'm not clear how it would work yet, between data objects (immutable fields/properties and pure functions) and service objects (functions only, but with the ability to constructor inject dependencies).

+ Purity to be a method signature level contract, enforced by the compiler.

+ C# 8 approach to nullability, e.g. Maybe types.

+ Data 'shapes' as well as/rather than interfaces. So you can require that a method parameter must have some shape, e.g. some property or method, rather than having to implement the interface/inherit (see the sketch below).
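
(A sketch of that last point, borrowing TypeScript's structural notation since C# doesn't have this today; names are made up:)

    // The parameter just has to have this shape; no interface
    // implementation or inheritance is required at the call site.
    function print(item: { describe(): string }) {
      console.log(item.describe());
    }

    print({ describe: () => "an ad-hoc thing" }); // OK: shape matches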

I think parts of these exist in lots of other languages, or are on C#'s roadmap, but every other language I've tried feels, to me, like it's missing the amazing productivity of C# (and I'd guess Java) where the available IDEs, packages and core libraries are still way ahead of many other languages.

I guess the described language might be closer to F#, but when I tried F# it felt like it was missing the obvious "here's how you actually build a thing" part. A lot of functional language material talks about avoiding state, or not changing state, but that's the point where most of the applications I build actually do something useful. So for me it was missing obvious on-ramps for building a website, or a desktop application; perhaps things have changed more recently?


Agree about immutability (I would love for “readonly” to be applicable to classes), but I like the current static classes approach.

It’s a single keyword, and the language guarantees you won’t have any instance fields or methods in these classes.

Global functions pollute namespaces, and auto-complete index. Static classes provide local namespaces for these functions, they also hide implementation details. I sometimes implement moderately complex pieces of functionality as a static class with just a single public method, the rest of the code is in private methods. These private methods don’t show in auto-complete in outside code, even if they’re extension methods. If such class grows too large for a single source file, C# has partial classes.


It's a good point about auto-complete I hadn't considered; maybe globals could have some obvious naming convention. I hate this suggestion, but for example an underscore prefix.

Some examples of where global functions might make sense: the Main function of a console application doesn't need to be in a class, and the general Utils and Helpers approach soon becomes cumbersome; you end up having to name things that don't really need names. While static classes can help organise these global functions, you also end up playing the naming game when you don't need to. It might not be any improvement, but it's an interesting thought exercise to imagine functions living in just a namespace, not a class.

I think in C# static still doesn't give you enough guarantees at either the class or member level. A static class can still have public static mutable members which is a horrible code smell. C# contains enough flexibility to do the things I want, e.g. make things static and readonly, but it's a lot of additional typing and I'd be interested in seeing the model reversed, e.g. immutable defaults with something like `public mutable class Foo` which I remember having seen in other places (F#?).


> you end up having to name things that don't really need names.

Strictly speaking namespaces don’t need names, either. But for medium to large projects they’re still useful, and enforced by the IDE.

> to imagine functions living in just a namespace, not a class.

You can place them into a namespace on the consumer side of the API, with `using static`. While not the same, might be good enough in practice: you normally have multiple `using` statements on top, for namespaces.

> I'd be interested in seeing the model reversed, e.g. immutable defaults with something like `public mutable class Foo`

I wouldn’t want immutable defaults, but I do want readonly classes and structures, with readonly being part of the type system.

BTW, the new value tuples with named fields are a step in the right direction https://blogs.msdn.microsoft.com/mazhou/2017/05/26/c-7-serie... Useful but limited; I would prefer real immutable classes & structures. Currently I have to create them manually, mark all fields public readonly, and implement the constructor; that’s more typing than I’d like.


Not a favorite language of mine, but I think Kotlin comes close to your description.


Object oriented programming doesn't really fit every problem. It really seems more strange that it took so long for us to see some alternatives.

I mean, it took over programming like a revolution, like structured programming, but structured->OO doesn't have anything nearly like the benefits of unstructured->structured. It seems like a good paradigm for a couple of problem domains, but IMHO, it got cargo-culted onto everything about mainstream programming without delivering much on the promises it got sold with in regard to increased productivity and code re-use.


One thing I think is that OOP works pretty well for GUI-type programs, but has little advantage when building networked service infrastructure. So during the desktop era OOP was ascendant. Now we're in a distributed data processing/storage/retrieval/service era. In particular, data is ephemeral and not 'owned' by a particular service. So it doesn't make sense to start attaching local methods to it.


OOP for GUI is getting old-fashioned, I think. I am thinking about the class-based imperative frameworks in Java/C#.

Just look at the functional components and hooks in react.

They have just abandoned the class based syntax.


Are they avoiding OOP or classes? With some of these languages (Rust, also Swift) you have OOP without using classes. Here is a good video explaining the benefits of using structs and protocols with swift, which are similar to structs and traits in Rust. (WWDC 2015 Protocol Oriented Programming in Swift) https://www.youtube.com/watch?v=g2LwFZatfTI


I think our understanding of the good parts and bad parts of OOP has matured a lot. As a result, we tend to find the good parts of OOP in new languages (runtime polymorphism, 'metaprogramming' facilities, sometimes message passing), whereas the bad parts of OOP are left out (classes, concrete inheritance).

This raises the question: how have we concluded that these parts are bad? Inheritance is the easiest one - it tends to dramatically increase the surface for coupling, which is why composition is recommended over inheritance. Classes is more subtle; I would say the issue with classes is that they strongly encourage you to couple various aspects, as classes are a single programming construct with which you do data representation, data specification, interface declaration, interface implementation, code organization, plain old logic, state management, and sometimes types declaration and concurrency control.

It takes discipline and awareness to resist the temptation of baking many aspects of a program into one class (even more so when you have to come up with a name for each class - naming is costly!), whereas it's natural in a language where each of these aspects is addressed by a different language feature. Finally, this temptation is made even stronger because class hierarchies feel so elegant - you spent a lot of time designing your class hierarchy (deciding where each part of the logic goes, choosing if methods were public or private and so on, applying various design patterns, etc.) so that MUST mean you've produced quality code, right? We love to tell ourselves this story, when the truth is we wasted a lot of time on non-essential decisions.

So many programmers don't resist the temptation, and end up with a tangled inflexible mess.

Don't take my word for it - try a variety of 'OOP' and 'non-OOP' languages! You can't achieve a good understanding of this unless you have empirical perspectives from both the inside and outside.

One thing that doesn't help is there isn't an agreed-upon clear definition of OOP. There was one made by Alan Kay, but the mainstream languages we call OOP are very far from embodying it. So OOP is more a fuzzy cultural notion: "this cluster of mainstream languages that use classes." I think it would be much easier to discuss all of this if we separated the notions of 'OOP' and 'class-based programming'.


The ideal language is flexible enough to be multi-paradigm, i.e. it allows you to use OO or procedural or functional etc. as needed/desired, but doesn't force any one of those on you. It gets back to something Paul Graham said[0]: roughly that having "ulterior motives" is a bad trait in a language, and I would consider "getting dogmatic about the paradigm they want you to use" to fall under that. But Rome wasn't built in a day, keep that in mind too, specifically concerning newer languages. Sometimes something isn't there but it's on the roadmap.

[0] http://www.paulgraham.com/javacover.html


Any sufficiently complex OOP program is indistinguishable from black magic.

OOP tends to encourage going through more layers of indirection than necessary or helpful, which makes the flow hard to follow. E.g. instead of just calling a library method, OOP frameworks might have you extending a framework class. This makes OOP code difficult to debug and maintain.


Mostly because it's generally recognized that inheritance was a bad idea. "Prefer composition and interfaces over inheritance" and such.

Also because mutable object state is problematic--it makes a large system harder to reason about and certain types of bugs more likely, and makes thread safety very difficult.


OOP is a subset of the broader category of polymorphism, and it is premised on a broken analogy, which posits that data and the transformations that can be applied to them are similar to actions upon 'objects', and that specializations of objects are perfect subsets of ideal 'class' categories. This analogy is vaguely useful for introducing basic programming to the uninitiated, but the analogy is neither true in the real world, nor does it map well to information systems.


The analogy you describe is only broken if you're doing something that it doesn't work for. It's great for simulations and games, for instance.


Here you can find a series of blog posts discussing why OOP is not a good fit for games: https://ericlippert.com/2015/04/27/wizards-and-warriors-part...


I've read these before, they're great (and I'm going to read them again now for fun. :)

That said, "let’s write some classes without thinking about it!" pretty much sums up the criticism of OOP. You can't just throw your problem domain naively at some language structures and then blame the language when it doesn't work. I bet I could pick any programming paradigm and find a way to do it wronger than this.

(If I were doing an RPG of the sort in the articles, just off the top of my head, I'd probably end up with fairly abstract 'Creature', 'Item', and 'Action'/'Spell' classes, plus a big data table defining the various types of each of these and the ways they could interact. Your game designers shouldn't be worrying about compiling C++ in order to add a new type of dagger, after all!)
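
A sketch of that data-table idea (TypeScript, hypothetical fields):

    // One fairly abstract class; the many concrete kinds live in data.
    // Designers edit the table; nobody recompiles to add a new dagger.
    interface ItemKind { name: string; damage: number; weight: number }

    const itemTable: ItemKind[] = [
      { name: "dagger",    damage: 4, weight: 1 },
      { name: "longsword", damage: 8, weight: 3 },
    ];

    class Item {
      constructor(public kind: ItemKind) {}
      attackDamage(): number { return this.kind.damage; }
    }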


I'm not so sure about that. Naively, the "Dog is-an Animal" inheritance-hierarchy-based ontology is a great match for games, but basically every beginning OO game programmer quickly finds out that it doesn't scale at all and composition is the way to go almost every time. Indeed, the currently fashionable game engine architecture patterns (data-oriented design, entity-component systems) explicitly eschew object-oriented thinking.
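
A minimal entity-component sketch (TypeScript, illustrative and not tied to any specific engine):

    // Entities are just ids, components are plain data stored per kind,
    // and behavior lives in systems that iterate over the data --
    // composition instead of a Dog-is-an-Animal hierarchy.
    type Entity = number;

    const positions  = new Map<Entity, { x: number; y: number }>();
    const velocities = new Map<Entity, { dx: number; dy: number }>();

    function movementSystem(dt: number) {
      for (const [e, v] of velocities) {
        const p = positions.get(e);
        if (p) { p.x += v.dx * dt; p.y += v.dy * dt; }
      }
    }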


Actually, I spent a good 7 years in the video games industry, working on everything from high-level game logic to graphics programming to performance optimisation. On that last point, OOP is a complete dog when it comes to performance. If you have an update loop with a bunch of heterogeneous objects, you'll be increasing your instruction and data cache misses by having to load vtables and method definitions. I haven't seen a situation in a game update loop where composition couldn't provide better flexibility as well as performance benefits.


What games did you work on (or kinds of games, if that's too personally identifiable), if I may ask? I'd be curious what sort of games have so many objects flying around that the overhead of vtables and cache misses was a significant factor in your overall performance.

I worked in the industry myself for a year or so, on an MMO (back when they were the Next Big Thing) and while our game wasn't super optimized, OOP overhead wasn't even on our radar as an issue compared with rendering, wrangling art assets, Scaleform UI elements, dynamic loading of the same, and all the other bits and bobs.

(Not arguing that composition isn't often better, just that performance isn't usually a significant reason why.)


Because, as it has been said countless [1] times before,

1. OOP is technically unsound

2. OOP is philosophically unsound

3. OOP is methodologically wrong

4. OOP is a hoax

[1] http://www.stlport.org/resources/StepanovUSA.html


"Countless times", but you give one reference that has one paragraph that contains those bullets, with only a one-or-two-sentence justification for each (except the last, which is given no justification at all, just as an opinion). That's not much to back up your claims, even if it is Stepanov.

And even on "methodologically wrong", Stepanov's rationale is wrong. After programming in various languages for decades, to use his analogy, we had "proofs". Then the "axioms" of OOP were laid down.

Note also that Stepanov says that generic programming is for him the only possible way to program. I suspect (possibly wrongly?) that you adopt his criticism of OOP, but not the rest of his rather dogmatic statements.


And yet most software for quite some time was/is written in OOP languages.


A real tragedy of our time.


In what way? What do you believe would change if all that software was written in a perfect language of you choice?


Depends on what you mean by OOP. But overall I think OOP remains hugely influential but its more controversial and radical features have fallen out of fashion.

The general idea of bundling data definitions with code that operates on the corresponding data seems to mesh well with how most people think and remains hugely popular. Encapsulation has proven useful. Runtime polymorphism is very useful in a proper context and all the languages that you list have some support for it, although this support is not taken to the extreme - e.g. it may be more useful to think of the number 42 as a value rather than as an Object. The most controversial feature is inheritance - deep inheritance hierarchies have proven problematic so new languages discourage it or provide limited support.

Another trend is that ideas from e.g. FP have also entered mainstream and there is an expectation that new languages will provide support for useful features associated with other paradigms so most new languages can be described as multiparadigm rather than strictly OOP.


> I have always struggled with OOP

Said everyone. I think this is the main reason.

In theory OOP is supposed to make it easier to do separation of concerns; in reality, separation of concerns is really hard, and adding some syntax sugar didn't help.

It's hard to create an abstraction that isn't overboard or so specific it's not an abstraction.

It's hard to separate concerns if you just inherit those concerns.

Testing OOP inheritance is awkward, so most people don't, and use dependency injection instead, which kinda breaks the whole point of inheritance and OOP.

OOP makes it look like you have separated concerns, but in reality you were concerned with the wrong thing (most likely nouns) and spread the real concern throughout the whole project.

I think the old school OOP of Classes is dead and the newer Traits and Object composition will take over, or functional programming which is very Gucci at the moment.


That basically no modern OOP language is actually OOP. Smalltalk is an excellent example of what the OOP movement could have been; Java is what it became instead.


OOP is too vague, let's talk about related features which are clear. (To me the 2 most important are inheritance and mutability, I won't mention that "protected" magic at all.)

Inheritance makes the syntax for removing "accidental duplication" (pieces of code that seem similar in the short term but eventually diverge in the long run) too easy, without creating a new common abstraction worth depending on: you introduce dependencies before realizing they are unnecessary, and by then they are deeply entangled. Because at its core, what may have just happened is "raw" code (text) reuse instead of abstraction reuse. "Code duplication is better than the wrong abstraction." Or: "Write code that is easy to delete."

You might keep adding and adding until you can no longer subtract, because you have to check every dependency back and forth, and there are no layers of abstraction, just a constant change of [meaning] with the hope of usefulness in some infinitely far-away future. Additionally, intermediate-level classes also need names (and meaning...), which makes the naming problem constant and might even blur the lines between the originally clear abstractions. The language of our system/domain consists of the relations between individual abstractions; we can't just overload every word and/or litter it with unnecessary ones.

It's also a question when to write a class and instantiate it, instead of just using an ad-hoc "anonymous" object and introducing a pattern like a factory function or some delegation (like a parent class) later if needed. (Ofc. classes might be created later as well, but weirdly enough it goes the other way around in my experience; maybe it's just me, ruined by noob Java 1.6/C# habits gained in the first years of university.)

It's too easy to find wrong abstractions with inheritance, too easy to blur good ones, and too hard/exhausting to delete them.

Reflection on the thread: I think I'll focus more on the proper naming of functions ("DO"s instead of "WHAT"s), because DOs must have a clear intent to drive the design/architecture.


It seems that what people actually mean when they say "OOP" is inheritance.

I haven't used Julia, but it seems that Nim has inheritance[1]. Rust has traits that can have default methods, so it's closer to Java and C# in that regard.

OOP is quite useful; it's easy to write bad code in any language (and I've seen lots of it in golang, for instance, which, due to the nature of the language, ended up being even more difficult to follow than badly written Java). So be careful not to fall for the hype that is going on.

[1] https://nim-lang.org/docs/tut2.html#object-oriented-programm...


I don't know, and shouldn't comment, but I do anyway because I'm opinionated as heck.

Datastructures are important for storing data. Functions are important for manipulating data.

Just because they're both about data doesn't mean that they belong naturally together.

I personally find "moveEntity(player, x, y, z);" (and the implications) prettier than "player->move(x,y,z);"

Now, if only my C could have first-class functions, without any other consequences to the language or runtime, I'd be super happy, but I'm sure there's some fundamental reason that's impossible.

Then again, I'm a pretty crappy software developer.


There was always some criticism of OOP, even when it was at its peak in the era 1995-2005. Paul Graham, and many others, wondered why OOP was enjoying such a vogue. But after 2005 the focus began to shift. I tried to cover a bit of this history (the long term trends) back in 2014 when I wrote “Object Oriented Programming Is An Expensive Disaster Which Must End”

http://www.smashcompany.com/technology/object-oriented-progr...


I think we're just figuring out different problems are solved using different paradigms.

I find complex business logic tends to be best modelled using algebraic data types and functional programming.


If I generalize: everyone thinks OO is fine and dandy in a greenfield project.

Breaks down quickly in big projects though.

The best code I’ve seen is well documented and minimizes stateful code that’s easy to follow.

Somehow the culture around Java became something similar too:

Documentation: Nah, I have types.

Stateful code: Nah, I just don't affect other objects (but... maybe just in these 1000 places).

Easy to follow: Nah, I follow patterns x and y without documenting them and without enforcing them strictly; also I have so many classes, so it should be easy to follow.


People don't agree on what OOP even is, so I don't think the question can be answered.

For example, Rust itself doesn't know if it is OOP or not: https://doc.rust-lang.org/book/ch17-01-what-is-oo.html

Go similarly doesn't know: https://golang.org/doc/faq#Is_Go_an_object-oriented_language

Nim claims to be a multi-paradigm language with full support for OOP: https://nim-lang.org/docs/tut2.html

Julia doesn't seem to discuss OOP anywhere that I could find. But Common Lisp has long claimed, and been described as, supporting OOP, and Julia copied its dispatch system from Common Lisp, so it is similarly in a weird place of: is it OOP or not?

Bottom line... I think a lot of new and popular languages are at least partially OOP. That they are more hybrids of prior languages, that they dropped certain features and added others is normal, since they are trying to be different and distinguish themselves, but they all seem to still have enough of OOP to not be sure if they should claim to be OOP or not.


You would have to ask the authors of those languages but I do not find it very surprising. If you take the whole set of features that commonly comprises object-orientedness it is a rather large set that also seems a bit arbitrary. Why would the same language construct known as a class support inheritance and static members and data hiding? Those are orthogonal things. It seems as such more logical to provide these and other features but not necessarily as a package deal.


Good point, I've never thought of it that way before.

You've heard of the "God-class" anti-pattern. Java-style classes are a "God-abstraction".


There are two wildly different kinds of OOP. One based on Simula, Java and C++ and one based on Smalltalk, Ruby and CLOS. I would love to see more developments on the latter.


Nearly all the discussion here is about inheritance.

I think the main advantage of OOP is message passing - which, if you think about it, is pretty much how networking works across the board. (This also makes sense if you think about Smalltalk's heritage.)

Basically - if I send a message to a networked server - as long as my message is in the correct format and I send it to the right place - then I don't care about the implementation at the other end.

The fact that I send a GET request to HAProxy that analyses it, routes it and then passes it to Nginx that embellishes it and passes it to some application server that actually does the work is OOP in action. As the message sender, I need to know the protocol (message format) and where to send it - and that's it. The fact that the implementation goes through three layers and does who knows what is irrelevant.

The trouble is people looked at Smalltalk and took the wrong bits to be important (just like Steve Jobs did when Apple designed the Lisa GUI).


> I have always struggled with OOP and have never found it a natural way of programming.

'Modern' OO languages are never natural. Actors are more natural.

A typical day-to-day Java Spring web service workflow:

1. Receive request in Controller.

2. Get FooRepository in through DI.

3. Read a user from UserRepository (behind UserRepository is a SQL database).

4. Get another stuff from DI, say EmailService.

5. Call EmailService to send email based on properties of Foo.

6. Return OK to the client.

Let's see what's in the example:

1. Controller/Repository/Service/Container/Request are not objects. They are made up engineering concepts.

2. The User seems to be a reasonable object, but actually, it's not. It's dead; it only temporarily revives when you summon it from the database, then becomes dead again. It does not know how to do things; it cannot even repeatedly check the current time and then send an email itself, like any person in the real world would. In contrast, actors are mostly long-lived in-memory processes and can tell anyone to do anything at any time.

Why is it bad, or, I should say, redundant?

1. Consider Controllers, Services or other objects, if you have 100 methods on a single class it's considered a code smell. Then guys in the project will refactor it to 10 different made up classes with 10 methods each. The name of the classes and their relationships to methods are totally arbitrary, 10 people will yield 10 totally different results.

2. The User is just data; there is no point in making it an object. A) OO has an in-memory reference system, but it's awkward when combined with database entity ids or any kind of distributed environment. B) An OO class is redundant because a struct or algebraic data type (ADT) will do, and they have much better equality semantics. C) When it comes to modeling, OO is more awkward than ADTs (because there's no sum type). When it comes to polymorphism (the main advertised aspect of OO), subtype polymorphism is more awkward than parametric polymorphism.

Personal conclusion (at least in the Web scenario): Functional style (ADT + functions) is simpler. Actor style is intuitive for the real-time web system. 'Modern' OO is both awkward and redundant because no one is doing OO in web programming anyway.
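
For the sum-type point, a small TypeScript sketch (made-up domain): a discriminated union gives you a value that is exactly one of several cases, and plain functions over it are checked for exhaustiveness.

    type Payment =
      | { kind: "card"; last4: string }
      | { kind: "wire"; iban: string };

    // Plain function over plain data; adding a third case to Payment
    // makes the compiler flag every switch that doesn't handle it.
    function describe(p: Payment): string {
      switch (p.kind) {
        case "card": return "card ending " + p.last4;
        case "wire": return "wire to " + p.iban;
      }
    }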


It depends on what you mean by OOP. If you mean the original meaning (just messaging, extremely late binding, hiding the state, etc.), OOP never took off.

The established meaning is using classes as some kind of abstract data type with getters and setters, or using class interfaces. That kind of OOP is usually supported.


Because they are written by people who never actually understood Design by Contract and class invariants.


Because OO as the sole or primary paradigm, deliberately supported in the design of a programming language, has been dominant in language design since the late 1980s or so; the OOP-centered-language space has been explored rather thoroughly.


Most great programmers don't like OOP or Enterprise development so have very little need for OOP. When they finally decide to create their own language, they kick OOP to the side. This is my speculation.


I would say Rust, Go and maybe even Nim are fairly object-oriented. The only thing that is going away is class hierarchies as a method to structure programs, because it seems they are more trouble than they're worth?


Class hierarchies are the best way to structure OOP programs. The main reason why people started moving away from OOP is precisely because those people didn't know how to structure their programs correctly. With functional programming, you can write correct code without any structure... It's extremely hard to follow the logic but it works. IMO full functional programming is a bandaid patch which allows incompetent developers to write correct code... Until it becomes a giant idempotent, functionally transparent but impossible to comprehend mess and it's impossible to add new features.


No on so many levels... Your first statement is horribly wrong!


A refutation is worth more than a bare denial.


True, and I thought about writing one... but it would require much more time to write that essay, and there are several very good comments in this thread already. Furthermore, there are numerous good blog posts about functional programming and why OO has failed (eg. on ploeh.dk).


OO has failed? In what sense?

It hasn't lived up to the hype? It's not the One Right Way to write all software? OK.

But it's still something that thousands of programmers find useful as a way to build their programs. That's not exactly "failure".

(BTW, ploeh.dk is inaccessible, at least to me.)


Smalltalk would beg to differ. JavaScript as well.

And I worked with large, fairly OO-oriented codebases in JS, and it was fine and I didn't miss inheritance.


When I say class, I don't explicitly mean using the class keyword. Also when I say hierarchy, I don't mean only inheritance (composition is also a kind of hierarchy). In my statement, I mean class more like "a kind of structure from which you can create instances", prototypes are similar to classes in that way. Definitely you should still follow some kind of hierarchy when using prototypes... In fact you should be even more careful.


I do think you need well specified data.

My experience is that hierarchy is more trouble than worth.

And I did like the fun of trying to figure out a hierarchy for an existing system. But after a year in use, I would look at the hierarchy with the 'what was I thinking' look.

So I moved from nesting abstractions with classes to dependency injection to mostly passing well-defined data to functions.

Like, after I understood prototypes in js, I used them for a bit, but then I switched back to passing data-structures to functions :)

There might be another reason why new languages are moving away from this kind of specifying hierarchies. There are already several languages with ecosystems built around hierarchies, so if you are fine with java, you probably wouldn't jump to golang :)


OOP code bases tend to get messy with age, as layer upon layer of abstraction is added, as programmers come and go. State is fragmented and hidden in various places. If you need to solve a bug or understand what happens with state at a particular point in time, you have to do some diffing in a lot of different places.

Imperative programming coupled with data oriented design and Entity Component System tend to be much cleaner and a lot more efficient. Instead of thinking of how to apply a pattern to complicate things a bit, you focus on the problem to solve.


I co-authored the somewhat-famous Games Pack 1 for the TRS-80. I grew up with 8080 and Z-80 programming for low-level and my first high-level languages were APL, BASIC, Pascal and Modula 2. All top-down modular. Then I designed a language called R-code, for robot control, and another language called LIM (Limited Instruction Model). It was clean, simple, readable, and easy to understand. I say use what you enjoy and don't worry about trendy things like OOP. All good wishes.


For a long time, abstraction was the name of the game. OOP is the most popular way to achieve a higher level of abstraction.

Programmers have noticed that infinite abstraction was not necessarily a good thing to have (or an easy one to achieve with a good design).

They had neglected alternative paradigms and lower level languages. They had hoped to just abstract C away, and forget about it. Now they want to replace C instead of abstracting it away.


Mostly because the way we approach OOP is wrong, and so developers - who were taught to write software as "instructions given to a machine" (i.e., procedurally) - don't fully understand the paradigm.

That leads to the belief that OOP is unnecessarily difficult.

It's more difficult to properly decompose the problem space than it is to just add code.

It really just boils down to developers being lazy and ignorant.


Objects are not fundamental to computing. This is why there are no deep solutions to any of the standard downfalls of OOP.


A small but very insightful tidbit. I am not completely sure if I agree with you: at some point in my life I programmed in assembler (8086 and Z80), so I know very well how computing looks in its most reductionist form: JMPs, CALLs, JNZ, JZ, PUSHes and POPs.

However, the reason why we use say, structured programming, functional programming and OOP is because abstractions are always helpful for people. It makes it easier to think about the solution to a problem.

In my opinion, OO design came at a moment when GUI development was really strong, and the 1:1 mapping of "window", "button", "menu" from real life to the computing space was very useful. However, as we continue modeling more complex processes and entities in computers, that mapping becomes more cumbersome, both in the encapsulation and even the naming (who has not wasted time coming up with a proper name for THAT class, or the infamous SimpleBeanFactoryAwareAspectInstanceFactory). This is where OO fails IMO.


Even that (1:1 mapping) is pretty far from the OO Alan Kay intended. Even relations (AKA the foundation of many databases) are built out of rigorous math. I'm not aware of any such derivation for objects, other than "let's keep the methods with the data they act on." I'm sure I'm oversimplifying that, but after reasonable effort I fail to find something deeper.


OOP made for great tool sets, training seminars and other products. But the mentality of "circle the nouns in your requirements" is stupid, and serves nobody. Programmers analyze their requirements into code and data, and those things have little bundling to one another or to the requirement domain.


Eh, I think circling the nouns is useful, but mostly for database entity modeling.


Rust, Go, Julia, Nim are all curly-braced, procedural, mutable, object-oriented languages.

I think we need languages that bring new paradigms. All that you quoted fail in this respect.

https://www.youtube.com/watch?v=0fpDlAEQio4


Julia and Nim don't have curly braces.

Nim isn't object-oriented either; it's procedure-oriented, just like C. Experienced Nim devs avoid methods and inheritance and only use them after very careful consideration.

Mutation is needed to implement low-level stuff and have deterministic memory usage.


Neither Go nor Rust has objects or inheritance.


> Neither Go nor Rust has objects

What's an "object" anyway? A collection of data fields bound to a method is what most people would probably consider. In which case, both golang and Rust have objects.


Nim is not curly braced, it uses Python-like indentation.


With lambdas, classical OOP (virtual functions) becomes less of a necessity... That, and past overdoses of OOP.


People just don't have enough experience with Functional Programming to really know how awful it is. So now that it is the new kid on the block (in terms of going mainstream), people think it is the best thing ever.

The most telling sign is how many FP languages are in existence today. If it was such a good thing we wouldn't need them all. It is a mess that causes many other types of problems without any clear benefit.

I do get that many people like the FP paradigm; no two humans are alike, and people will find different ways of thinking and reasoning about a problem more suitable. Which is OK. BUT it doesn't mean FP is any better than OOP or vice versa.

Last, I would like to point out how Python (OOP) obliterated R (FP) in the data science market, although R enjoyed a head start of a few years and was the lingua franca of statisticians.


R was used by statisticians, but Python was used by every non-CS researcher who needed something more than MATLAB.

Python didn't beat R because it's OO. It won because of its existing popularity (with many people learning it in their intro programming class) and the massive amounts of open source software built for it.

> People just don't have enough experience with Functional Programming to really know how awful it is

This is just anecdotal, but I've used an FP language full-time for the last two years with several other engineers and ramped others up on the codebase. It has its own challenges but it's significantly easier for me to reason about than OOP - fewer bugs, easier to maintain, easier to parse. I can't see myself ever willingly going back.

It might be helpful to know what problems you think FP has, if you have a significant amount of experience with it.


> Python didn't beat R because it's OO ...

My point is that it beat R although it is OOP. This is to show that OOP is not inherently bad. Many people find it useful to the point that the 'perceived' advantages of FP are not worth the effort.

> This is just anecdotal ...

My anecdotal experience is very different from yours. To the point that if one of my engineers ever suggests FP again, he will get fired on the spot.


> To the point if one of my engineers will ever suggest FP again he will get fired on the spot.

If you are going to react so disproportionately to the mere suggestion of FP, I am sorry, it is difficult to take your anecdote or opinion seriously.


> The most telling sign is how many FP languages are in existence today. If it was such a good thing we wouldn't need them all.

This is a silly argument. FP means only first-class functions and immutable values, and the vast majority of FP languages agree on those.

But there are many other design decisions for a programming language - type system, laziness, purity, homoiconicity, and whatever other features, paradigms or constraints people might find desirable. THIS is what explains the diversity of FP languages.

Btw, there's the exact same phenomenon in OOP languages. They all agree on classes, and differ on hundreds of other aspects.


Python "winning" has more to do with it being taught at pretty much every university in into CS courses these days and less with it being a better suited language for it. Besides, outside data science R still reigns supreme when it comes to statistics. Data science is a hype right now, but 99% of what is called data science is basic statistics. Python is also rarely being taught in biology, or economy or psychology departments. Its either R or Stata or so.


We don’t have more FP languages than OOP languages, and several languages that are considered FP have their own implementation of OOP (OCaml, Common Lisp...) in addition, several OOP languages are now implementing classic FP features like lambdas and pattern matching.

Also, the question asked doesn't actually care about FP. Go isn't a functional language, but neither is it OO.


You are right, it should have been a comment to another person on this thread. Not the OP.

My point regarding the number of FP languages is that FP is not a silver bullet; neither is OOP, to be sure, but FP has its own set of problems, hence the many different implementations.

Regarding OOP languages implementing classic FP features, which is true and a blessing! These are good features which IMHO give more credit to OOP languages.


Regarding number of languages, I don’t agree that it signifies a problem. For instance, F# doesn’t exist because OCaml is bad, but because there was room for a functional language with good interop with .NET. Same story with Clojure vs Common Lisp.

Making a new language doesn’t automaticly imply that some other language got something wrong.


Go lets me do enough object oriented programming to keep me more productive, while not creating a pile of confusion.

Sadly, there seem to be almost no functional programming concepts supported that I'd use (map, filter, reduce, lambdas). I think the lack of generics and operator overloading might have something to do with that?


Yes. I’d assume those would come rather quickly once generics are added.


Has Python actually "won"? I was under the impression that both languages are popular with data scientists.


IMHO and very TL;DR: same reason why Agile failed. Just like Agile, OOP has some good ideas at the core, but "Big Enterprise" ruined it by starting the (from todays point-of-view, unbelievable) big OOP-hype in the 90s (OOP was basically the Machine Learning / Blockchain of the late 90's, just worse because it infected absolutely every last corner of software development, everything had to be OOP, from programming languages, to application architecture, to operating systems, to CPUs...)

Thankfully people started to look behind the curtain and noticed that the whole "industrial-scale ceremony" that was built around the few good OOP ideas actually hinders software development, so they took the few good pieces and integrated them into the new languages we're starting to see now.

20 years late, but better late than never :)

(PS: watch the same cycle unfold around functional languages in the next 20 years)


I don't quite understand. How is Ruby not object-oriented? I thought it was the most thoroughly OO language in popular use!


OP did not mention Ruby; OP mentioned Rust.


Some problems model better with objects. This thread seems to treat poor OOP design and antipatterns as the definition of OOP.


I suspect the answer is that cheaper memory means greater immutability is far more feasible. And immutability, which is not very OOP, almost by definition, makes software development far more robust.


OOP is just SOLID and I would argue that classes are unnecessary for that and that those languages do have features of OOP.


OOP is not agile enough :O


Most people have a wishy-washy view of the differences between FP and OOP. It is never clear why a map-reduce is better than a for loop or vice versa.

That being said, there IS an area where OOP is definitively worse than FP: in fact, OOP (as defined by Java and C++) is categorically less modular than FP.

There is concrete and definitive reasoning behind why this is the case.

I've explained it all in a thread before I'll just link it for people who disagree or are curious.

https://news.ycombinator.com/item?id=19910450


> It is never clear why a map reduce is better than a for loop or vice versa.

To me, map/reduce feels like a further development of structured programming. A map(...) call is less expressive / more constrained than a loop, and that's the point; compare to how loops and if/else can all be written with goto, but we prefer the more structured alternative.
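
In code (TypeScript, trivial example):

    const xs = [1, 2, 3];

    // The loop can do anything: mutate, break early, skip, accumulate...
    const doubledLoop: number[] = [];
    for (const x of xs) doubledLoop.push(x * 2);

    // map can only produce one output per input. Less power means more
    // is known at a glance -- the same trade as if/else vs. goto.
    const doubled = xs.map(x => x * 2);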



