
>Encapsulation, modularization, abstraction - let's not pick nits about the precise meaning of these terms, they are all about the same thing, really: Managing complexity of your own code.

The thing is, code complexity can be managed superbly without the concept of a class.

Golang doesn't have classes. C doesn't have classes. But both have structs, and both have functions that take a pointer to, or a copy of, a struct of type T (what OO calls a "method" for some reason). So we can have "objects", as in custom datatypes, and we can have functions that act on them.

And on top of that, we have modularization in libs and packages.
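A minimal Go sketch of that idea - a plain struct plus a free function taking a pointer to it, no classes involved (the `Account` type and its functions are made up for illustration):

```go
package main

import "fmt"

// Account is just a custom datatype: a struct, not a class.
type Account struct {
	Owner   string
	Balance int
}

// Deposit is a plain function acting on a pointer to the struct -
// exactly what OO languages would call a "method".
func Deposit(a *Account, amount int) {
	a.Balance += amount
}

func main() {
	acc := Account{Owner: "alice"}
	Deposit(&acc, 42)
	fmt.Println(acc.Balance) // prints 42
}
```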

So, why do we need more complicated languages? "Show a locked door before you show a key" indeed.




> What more do we need?

Good languages with good IDE support?

I'm old enough that at some point assembly and C were my favourite languages, PHP was what I was most productive in, and Python was also something I enjoyed.

These days there's no way I'd go back for new projects.

After Java "clicked" for me, and with the introduction of Java generics shortly afterwards, it became my favourite language; later, TypeScript and C# were added to that list.

I'm not too dumb, but I prefer using my smarts to develop programs, to work together or even pair-program with the language to mold something, not to babysit languages that cannot even help me with trivial mistakes.


Isn't the point that a language with optional typing can help you out with trivial mistakes, if you give it sufficient information? Java makes (made?) you give all the info all the time, even when you do not need its help. Java holds your hand in an annoying way. TypeScript does not, because you can gradually add types to your program, but it is built on the shakier foundations of JS.


Yes, I would agree that structs are the solution to the complexity the author describes. But Python doesn't have structs: if you don't learn about classes you will probably eventually organize some data using dictionaries or heterogeneous tuples.

In another language you might introduce structs first (C and Go only have structs; C++ has both classes and structs, but they are essentially the same thing, differing only in default visibility), but in Python you don't have that option - you can use data classes or named tuples, but those already need most of the ceremony you introduce for dealing with classes.


If it’s useful: Python now has dataclasses, they are very struct-like.


> The thing is, code complexity can be managed superbly without the concept of a class.

It can be managed by modules/namespaces, but without even that (a la C), I really don’t see it managing complexity well. The important point of OOP is the visibility modifiers, not “methods on structs” - the latter alone would provide no added value whatsoever.

What OOP allows is - as mentioned - not having to worry about the underlying details at the call site. E.g., you have a class with some strict invariant that must be upheld. In the case of structs, you have to be careful not to change anything manually and only do it through the appropriate function. And while you can make sure that your manual modification is correct for the current implementation, what classes give you is that the enforced invariant stays enforced even when the class itself is later modified - whereas your call-site manipulation may then no longer fulfill the needed constraints.


> The important point of OOP is the visibility modifiers,

OOP is not required for implementation hiding. It can be done entirely by convention (eg. names starting with an underscore are internal and not to be used).

Go took this a step further and simply enforces a convention at compile time: any symbol in a package not starting with an uppercase letter is internal and cannot be accessed by the consumer.


Well yeah, and typing doesn’t need enforcement by compilers either - we just have to manually check every usage. Maybe we should go back to Hungarian notation!

I mean, snark aside, I really don’t think that variable/field names should be overloaded with this functionality. But I have to agree that this functionality is indeed not much more than your mentioned convention.


OO implies far far more than simply grouping data in a struct and passing it by reference.


Such as?

The only other things that come to mind when I think about "OOP" are:

* inheritance, which at this point even lots of proponents of OOP have stopped defending

* encapsulation, which usually gets thrown out the window at some point anyway in sizeable codebases

* design patterns, which usually lead to loads of boilerplate code, and often hide implementation behind abstractions that serve little purpose other than to satisfy a design pattern...the best examples are the countless instances of "Dependency Injection" in situations where there is no choice of types to depend on.


How is encapsulation thrown out the window in sizeable codebases?? It is the single most important thing OOP gives, and is used by the majority of all programmers and has solid empirical evidence for its usefulness.


Could I see some examples of this evidence?

Let's consider a really simple object graph:

    A -> B
    C
A holds a reference to B, C has no reference to other objects. A is responsible for B's state. By the principles of encapsulation, B is part of A's state.

What if it turns out later, that C has business with B? It cannot pass a message to A, or B. So, we do this?

    A -> B <- C
Wait, no, we can't do that, because then B would be part of C's state, and we violate the encapsulation. So, we have 2 options:

    1. C -> A -> B
    2. C <- X -> A -> B
Either we make C the god-object for A, or we introduce an abstract object X which holds references to A and C (but not to B, because, encapsulation). Both these implementations are problematic: in 1) A now becomes part of C's state despite C having no business with A, and in 2) we introduce another entity into the codebase that serves no purpose other than as a mediator. And of course, A needs to be changed to accommodate passing the message through to B.

And now a new requirement comes along, and suddenly B needs to be able to pass a message back to C without a prior call from C. B has no reference to A, X or C (because then these would become part of its state). So now we need a mechanism where B mutates its own state, which is observed by A, which then mutates its own state to relay the message up to X, which then passes a message to C.

And we haven't even started to talk about error handling yet.

Very quickly, such code becomes incredibly complex, so what often happens in the wild is that people simply do this:

    A -> B <-> C
And at that point, there is no more encapsulation to speak of. B is part of A's state, and B is also part of C's state.


Encapsulation (to lock down an object like this) is used when there's a notion of coherence that makes sense. If B could be standalone, but is used by both A and C, then there's no reason for it to be "owned" by either A or C, only referenced by them (something else controls its lifecycle). Consider an HTTP server embedded in a more complex program. Where should it "live"?

If at the start of the program you decide that the server should be hidden like this:

  main
    ui
      http
    db
And main handles the mediation between db and ui (or they are aware of each other since they're at the same level), but later on you end up with something like this:

  main
    ui
      http
    db
    admin-ui
And the admin-ui has to push and receive all its interactions via the ui module, then it may make less sense (hypothetical and off the cuff, so not a stellar example, I'll confess, but this is the concept). So you move the http portion up a level (or into yet-another-module so that access is still not totally unconstrained):

  main
    ui
    db
    admin-ui
    http
    -- or --
    web-interface
      http
Where `web-interface` or whatever it gets called offers a sufficiently constrained interface to make sense for your application. This movement happens in applications as they evolve over time, an insistence that once-written encapsulation is permanent is foolish. Examine the system, determine the nature of the interaction and relationship between components, and move them as necessary.

Arbitrary encapsulation is incoherent, I've seen plenty of programs that suffer from that. But that doesn't mean that encapsulation itself is an issue (something to be negotiated, but not a problem on its own).


Those are called callbacks; we use those everywhere. If you don't want to use callbacks, you can use something like a publish-subscribe pattern instead, so that B doesn't need to be indirectly linked to C through A and X, and can publish directly to C.
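As an illustrative sketch (in Go, with made-up types named after the graph above): B only holds a list of callbacks, so C can hear from B without B ever referencing C:

```go
package main

import "fmt"

// B knows nothing about its listeners; it only holds callbacks.
type B struct {
	subscribers []func(string)
}

// Subscribe registers a callback to be invoked on publish.
func (b *B) Subscribe(fn func(string)) {
	b.subscribers = append(b.subscribers, fn)
}

// Publish notifies every subscriber in registration order.
func (b *B) Publish(msg string) {
	for _, fn := range b.subscribers {
		fn(msg)
	}
}

type C struct{ last string }

func main() {
	b := &B{}
	c := &C{}
	// The wiring lives in one place; B never references C directly.
	b.Subscribe(func(msg string) { c.last = msg })
	b.Publish("hello")
	fmt.Println(c.last) // prints "hello"
}
```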


The one thing that is extremely hard to fake well without some sort of language support is dynamic dispatch: the ability to call a method on an object x of type X and have the calling expression automatically "know" which method to invoke depending on the actual runtime type of the object (any subtype of X, any interface implemented by it, etc.).

This is extremely hard to fake in, say, C. I mean, it's not really that hard to fake the common use case demonstrated in classrooms; a rough sketch would be something like:

    enum vehicle_type {CAR, AIRPLANE};

    struct Vehicle {
        enum vehicle_type t;
        union {
            struct car_t { /* <car-data-and-behavior> */ } car;
            struct airplane_t { /* <airplane-data-and-behavior> */ } airplane;
        } data;
        /* some common data and behavior */
    };

    void func(struct Vehicle *v) {
        switch (v->t) {
            case CAR: /* <....> */ break;
            case AIRPLANE: /* <....> */ break;
        }
    }

But this is a horrible thing to do. First, it's a repetitive pattern: you're basically working like a human compiler, with a base template of C code that you're translating your ideas into - a rich source of bugs and confusion. Second, it probably hides tons and tons of bugs: are you really sure that every function that needs to switch on a Vehicle's type actually does so? Correctly? What about the default case for the enum vehicle_type, which should never take any value besides the valid two? How do you handle the "this should never happen" case of it actually taking an invalid value? How do you force the handling of this in a consistent way across the code base, one that always exposes the problem to the calling code in a loud-yet-safe manner?

There's a saying called Greenspun's tenth rule: "Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp". If you ignore the obnoxious Lisp-worshiping at the surface, this aphorism is trying to say the same thing as the article: there are patterns that appear over and over again in programming, so programming languages implement them once and for all. Users can just say "I want dynamic polymorphism on Vehicle and its subtypes Car and Airplane" and voila, it's there - correctly, efficiently and invisibly implemented the first time, with zero effort on your part.
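For contrast, a sketch of roughly the same Vehicle example in Go, where an interface lets the language do the dispatch (the `Describe` method and its strings are made up for illustration):

```go
package main

import "fmt"

// Vehicle is satisfied by anything with a Describe method.
type Vehicle interface {
	Describe() string
}

type Car struct{ Wheels int }
type Airplane struct{ Engines int }

func (c Car) Describe() string {
	return fmt.Sprintf("car with %d wheels", c.Wheels)
}

func (a Airplane) Describe() string {
	return fmt.Sprintf("airplane with %d engines", a.Engines)
}

func main() {
	vehicles := []Vehicle{Car{Wheels: 4}, Airplane{Engines: 2}}
	for _, v := range vehicles {
		// No switch on a type tag: the right Describe is
		// chosen at runtime based on the concrete type.
		fmt.Println(v.Describe())
	}
}
```

No enum, no union, no hand-written switch, and forgetting to handle a case is impossible: a type without a Describe method simply isn't a Vehicle.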

If you don't want the language to implement those patterns for you, you're *still* going to need to implement them yourself when you need them (and given how common they are, you *will* need them sooner or later), only now haphazardly and repetitively, while in the middle of doing a thousand other things unrelated to the feature, instead of as a clean set of generic tools that the language meticulously specified, implemented and tested years before you even wrote your code.

>inheritance, which at this point even lots of proponents of OOP have stopped defending

Not really. There are two things commonly called "inheritance": implementation inheritance and interface inheritance. Implementation inheritance is inheriting all of the parent, both state and behaviour. It's not the work of the devil; you just need to be really careful with it. It represents a full is-a relationship: literally anything that makes sense on the parent must make sense on the child. Interface inheritance is a more relaxed version of this - it's just inheriting the behaviour of the parent. It represents a can-be-treated-as relationship.

Interface inheritance is widely and wildly used everywhere; every time you use an interface in Go you're using inheritance. And it's not like implementation inheritance is a crime either: it's an extremely useful pattern that the Linux kernel, for instance, Greenspuns in C with quirky use of pointers. And it's still used, in moderation, in the languages that support it natively; the ignorant worshippers who advocated for building entire systems as meticulous 18th-century taxonomies of types and subtypes just got bored and found some other trend to hype.
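Go's closest native analogue to implementation inheritance is struct embedding, which promotes the embedded type's fields and methods - a hedged sketch with made-up types:

```go
package main

import "fmt"

// Logger carries shared state and behaviour.
type Logger struct{ Prefix string }

func (l Logger) Log(msg string) string {
	return l.Prefix + ": " + msg
}

// Server "inherits" Logger's field and method by embedding it;
// Log is promoted and callable directly on a Server.
type Server struct {
	Logger
	Addr string
}

func main() {
	s := Server{Logger: Logger{Prefix: "srv"}, Addr: ":8080"}
	fmt.Println(s.Log("started")) // prints "srv: started"
}
```

Unlike classical inheritance this is composition under the hood: a Server does not become a subtype of Logger, it just forwards to it.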

>encapsulation, which usually gets thrown out the window at some point anyway in sizeable codebases

I don't understand this, like, at all. Encapsulation is having some state private to a class; only code inside the class can view it and (possibly) mutate it. Do you have a different definition than me? How, by the definition above, does encapsulation not matter in sizable code? It was invented for large code.

>design patterns, which usually lead to loads of boilerplate code

100% agreement here: OOP design patterns are some of the worst and most ignorant things the software industry ever stumbled upon. It reminds me of pseudoscience: "what's new isn't true, and what's true isn't new". The 'patterns' are either useful but completely obvious and trivial, like the Singleton or the Proxy, or non-obvious but entirely ridiculous and convoluted, like the infamous Abstract Factory. This is without even getting into vague atrocities like MVC, of which I swear I can summon 10 conflicting definitions with a single Google search.

The idea of a "design pattern" itself is completely normal and inevitable; the above C code, for example, is a design pattern for generically implementing a (poor man's version of a) dynamic dispatch mechanism in a language that doesn't support one. A design pattern is any repetitive template for doing something that is not immediately obvious in the language; it's given a name for easy reference and recognition among the language community. It doesn't have anything to do with OOP or "clean code" or whatever nonsense clichés are repeated mindlessly and performatively in technical job interviews. OOP hype in the 1990s and early 2000s poisoned the idea with so many falsehoods and so much ignorance that it's hard to discuss the reasonable version of it anymore. But that's not OOP's problem; it's us faulty humans, who see a good solution to a problem and run around like children trying to apply it to every other problem.



