
OO tends to imply things like dynamic dispatch (for method lookup), implicit extra parameters to functions (the `this` pointer and so on), accessors (potentially a lot of extra function calls), and placing variables in memory based on their grouping within objects rather than where lookup might be most efficient. Not to mention possible run-time type info, which takes up more memory. Those are all things that are mostly fast enough to be a non-issue now, or are optimized away by compilers, but they are generally slower.



Seems like you're mentioning a few features that are either optional or inline-able in C++. It took me a long time to appreciate this aspect, but C++ in particular is all about not making you pay costs you haven't explicitly asked for.


Once you start removing features, then you are only speaking of an object oriented-like programming language, not a true object oriented programming language.

Inheritance, abstraction, and encapsulation, for example, can be achieved merely by changing syntax. What requires extra resources at runtime is polymorphism, and that's one of the most important functional aspects of object-oriented programming.


And they say religion is dying out in the civilized world.

Consider, if you will, that the things you are describing as slow - chiefly the indirection of a virtual call - are slow relative to other things because of relatively recent CPU advancements. If a CPU does no branch prediction or caching, the overhead of a virtual call doesn't sound so bad. So, if we're talking about why to avoid OO in the late 80s, I'm not as convinced the cost of a virtual call is the barrier.

That said, my point was that C++ lets you do these things, but you have to ask for them. If you want virtual calls, go nuts. But if you don't, you're not going to suffer from mandatory bloat - your object code will still look good, as if you had written it in a language without such mandatory frivolities.


Well, talking about optimizations and all of that, the reason to be against this is not big sections of code being unoptimized; it's death by a thousand paper cuts. Every small object requires its own (extra) information besides its fields. Even if that's just one double word identifying a virtual function table, on a 16-byte object that's 25% more storage space required. A well-implemented object-oriented scheme can still work nicely, but a naïve or overblown approach pays that tax on every object.

In any case, modern CPUs and compilers can optimize aggressively, sure, and runtime speed is no more important than ease of development and reliability.

On an older system, though, using a liberal OOP design in an operating system would be akin to creating structs that have arrays of function pointers associated with them, and having every function call routed through those pointers. Watching the machine execute this code, you would reliably see indirect jumps through the same few code points between function calls, and a lot of wasted time or space. And I'm not sure anyone has mentioned this, but exception handling in particular can add a lot of overhead.

Obviously, the benefits significantly outweighed the costs, especially in this case (NeXTSTEP and Objective-C), but I think that was helped substantially by the fact that the projects Steve Jobs worked on always had much better computer hardware.



