A lot of the nominal complexity of C++ is the result of the very precise control it offers over software behavior. In many programming languages this level of control is not even expressible. That extra level of control only seems like unnecessary complexity until you are designing software systems that require this level of control. At a minimum, precise control of memory management is absolutely essential for things like servers and database engines if performance matters at all.
That the author thinks C++ is competing with Go betrays a narrow perspective. There are many important things you can do in C++ that are not possible in Go, in part because of the features that make C++ "complex" that simpler languages do not have. As an obvious example, the renaissance of C++ for high-performance server systems is based in part on the fact that garbage collectors are a major drag on performance for these kinds of codes.
Which is not to say that C++ is not excessively complex, just that the author seems to lack an understanding of why some of that complexity is essential functionality.
The author points out the extra complexity that C++'s programmable value semantics have added to the language without also mentioning the important benefits.
First, you can create a lot of objects on the stack, which avoids any dynamic allocation at all and is far, far faster.
Second, you can precisely control the layout in memory of your data structures. For example, you can represent a set of vertices in an OpenGL scene as a contiguous array of structs. This gives you much, much better cache performance and also makes it easy to hand data directly off to hardware components like graphics cards that need input in memory formatted in specific ways. In languages like Java you have to resort to awkward, inefficient hacks like FloatBuffers to do the same thing.
Author of blog post here: It's true, I haven't done a good job of mentioning the benefits that you get in return for the complexity of C++, especially as far as value semantics is concerned. Thanks for setting that right.
I haven't done much C# programming but it looks to me from studying language tutorials that you can have the best of both worlds in many cases in C#. You can have value objects and densely packed arrays of structs if you need them but Java-esque reference semantics everywhere else.
Personally I think the notion that C++ offers "precise control", as you call it, is overrated. For instance, flexible memory allocation is touted as an advantage, but in practice this is limited by the fact that there is only one global implementation of the new operator.
Over the long term I think Go is going to offer a much more sane environment for implementing high performance servers.
C++ does allow overriding the new operator, actually.
It also allows for running just the constructor in an area of already-allocated memory, if you're particularly crazy.
With that said, it's hard to deny that Go makes concurrency easier at this point, which can certainly make it more useful for many of the types of problems programmers are seeing today.
C++ allows both overriding (i.e., replacing) and overloading (i.e., creating multiple versions of) operator new. When overriding, you still have to call some memory allocation routine; and you should also override operator delete, and should probably override operator new[]/operator delete[]. There are some pitfalls in doing this on some platforms ( http://pavlovdotnet.wordpress.com/2008/01/12/jemalloc-builds... ).
The tricky part, to me at least, is when overloading. It's possible to overload new in a way that "new int" calls the original new, and "new int, foo" calls your overloaded allocator (foo could be an object used for the sole purpose of knowing which new to call, or it could actually be used by that new -- maybe it's a memory pool). That's easy enough to do. The surprise comes from the fact that while you can overload operator delete, it's a real pain to call the overloaded version (plus, of course, you have to remember which memory came from which version of new). Assuming you use "foo" to specify your overloaded new and delete, "delete p, foo" is a syntax error. You need to call something like "::operator delete(p, foo)". At least that was the case with C++03; I don't know if C++11 changed anything. Personally, I just overload new, but call a destroy() function instead of an overloaded delete.
I realize this, but you can't provide multiple global implementations of new.
It is difficult to implement multiple allocation strategies in the same application. In another time, I actually did quite a bit of work in this area: http://accu.org/index.php/articles/1308
You can just provide an allocate() static method and destroy() method on your object that use placement new and call the destructor manually. Or don't and just use placement new/manual destructor calling yourself.
Are you trying to argue that C++ programs can't use custom allocators because of language restrictions? That's demonstrably false; Gecko and WebKit, just to name a couple of C++ codebases I'm relatively familiar with, use custom allocators everywhere.
Fair enough. My point was poorly made, and truthfully it has been a long time since I've done significant C++ work. But it is difficult for users of libraries like the STL to mix and match allocation strategies, so for all practical purposes, most applications end up using the default implementation, or a single override of global new.
Actually, you can create a policy for a particular type by implementing operator new at the class level. I think you're not well informed, because C++ is the language that gives you the most flexibility to design memory management schemes.
That's true, but I'm not sure what it would even mean to have multiple implementations of the same function at the same time.
Do you mean something like operator overloading, where a given type has a different implementation of new that is used for it? Or a way to select different implementations of the global operator new at runtime? It seems to me that the latter should be possible, making it switchable at runtime, although the code would be inelegant to say the least.
AFAIK the former is possible as well, but the problem is that I think the type-specific operator new/delete have to be members of the type which means that you can't add it on later.
I'm going to dig deep here, and say that true flexibility in memory allocation would require multiple global implementations of new, and when an object is deleted it would be destroyed with the implementation of delete that corresponds to the implementation of new.
That would be a gigantic change to the language, which I'm sure I don't understand the implications of.
After thinking about this some more, I'm going to stand by my original, even if poorly argued, comment that C++ doesn't really live up to the promise of "precise control" when it comes to memory allocation, and this is one area that can have a big effect on performance. C++ has all the overhead of a low level language, with a promise of control that when you really need it is often difficult to access.
Ok, so let's count... Just off the top of my head, we have global new/delete overloading. At the class level you have new/delete overriding. Then you have allocators that can be used with any STL container, and in your templates you can also accept custom allocators. Then you have new with placement syntax, which means that you can allocate memory with anything, including something like malloc or gmalloc. Of course, you can also use the stack for automatic memory... You can even use a garbage collector such as Boehm. And you really want to argue that there are no options for memory allocation in C++?...
The problem with C++ is exactly the opposite: it has too many options, which makes it difficult to create a global memory management policy unless you define your own complete framework. Quite the opposite of what you said.
I'm not sure whether the previous poster was thinking along these lines, but in some sense C++ makes it harder to control allocations compared to C due to a combination of increased abstraction and a general assumption that objects are fixed size. For example, if a C struct contains an array whose size is known at the time of allocation, there's a good chance the elements will be located after the end of the struct rather than being in a separate allocation. Of course, you could do this in C++, but neither language nor library provides sugar, so it ends up looking unnecessary compared to just using a std::vector.
I won't opine on whether it actually is unnecessary.
Well, at the language level C++ shares more or less the same behavior as C. If you hand the same "plain old data" struct to C++, with an array whose size is known at compile time, it will be laid out exactly where C would place it.
Now you're correct that the standard library provided with C++ does not quite give you this same level of control. If you want node elements placed somewhere specific you either have to pick the right container, write a custom allocator (including rebind support so that any required container-internal elements also can use your allocator), etc.
But this is a complaint about a C++-only feature that C doesn't even provide.
In the very few cases where C does provide a standard library function that will automatically allocate storage from the heap, it is even less customizable than the C++ equivalent. Normally your only option is something allocated with malloc(3), you're told to use free(3) when appropriate, and if you need something different you end up trying to replace malloc/free and friends and then somehow telling your custom malloc when to use custom behavior and when to use stdlib behavior. But you could do this in C++ code too.
Now you can certainly easily write your own library with better support for custom allocation/deallocation. Things like the Apache Portable Runtime, for instance, or glib.
But once you've bought into the idea of simply ignoring the language standard library anyways, you've bought into something else that you could do in C++ if you really felt like it were useful.
The problem with C++ is that the standard library makes it almost straightforward to do what you want and still be able to use standard containers without having to write your own, but we have to remember that we're talking about a problem that essentially can't even be spoken about in C. So it's not so much that C "doesn't have a problem", as that this problem couldn't even be expressed in C.
> Personally I think the notion that C++ offers "precise control", as you call it, is overrated. For instance, flexible memory allocation is touted as an advantage, but in practice this is limited by the fact that there is only one global implementation of the new operator.
I don't quite follow, who's stopping you from using sbrk(), mmap() and all the other low level allocation system calls if you really have to? AFAIK, many high performance applications have their own allocation mechanism. Furthermore, new/malloc is worlds apart from GC when it comes to precise memory management.
Now, to be fair, I have no experience implementing high performance servers. But speaking for games, GC gets in the way a lot. You can make it work by essentially avoiding allocation altogether, at least on systems where you can use tons of memory, but that's just a very crude workaround.
> I don't quite follow, who's stopping you from using sbrk(), mmap() and all the other low level allocation system calls if you really have to?
From a pure language perspective, nothing.
From a standard library perspective... it's probably more difficult. I won't say impossible, as the STL allocator concept certainly seems general enough to allow this, but definitely more difficult.
For more flexible memory allocation in C++, can't one use placement new? That makes destruction more complicated, but maybe that's further support for the thought that complexity in C++ gives one more possibilities for precise control of the computing environment. How often one needs or wants to actually exercise that level of control is another question.
At any rate, Go certainly does offer some nice tradeoffs vs C++.
And assembly language gives even more control than C, with even fewer gotchas.
However, C++ occupies a sweet spot whereby it allows a lot of control, but also a lot of abstraction and structure--much more so than C, in my opinion.
Or, put another way: one doesn't build a skyscraper out of playdough or cinderblock (easy to use materials), but one doesn't mill the entire skyscraper out of a solid block of steel, either (that would be unjustifiably difficult). C++ is the language of skyscrapers.
More "gotchas" actually, from the secure coding perspective.
C is nicer in that its ABI is so simple that it's practically the lingua franca of inter-language linkage, but the fact that pointer types convert automatically to other pointer types is a huge source of errors, and so is the lack of language support for things like destructors.
Additionally there are many C++ concepts that cannot be implemented in C without the use of macros everywhere, such as using a sorting algorithm (which in C requires either a pointer indirection for a function call, or use of macros to generate a new sort function for each appropriate type).
Additionally in C++ you can use things like type expressions to select and compile the optimal version of a matrix-vector multiplication automatically, and I'm not even sure you could do that in C with macros as it involves programmatically generating types at compile-time.
C is certainly nice for some problems but your comment applies as much to going from C to asm as it does going from C++ to C.
As a data point, I would like to see a language much more like C than like C++ (including, somewhat but not entirely idiosyncratically for me, no classes), but with templates and some other improvements.
My ideal language is C but with C++ templates, better type safety (no implicit casting to void*), and with a smarter "include" system (ie: something that isn't just copy/paste one source file into another. Although it would be nice to be able to keep #include as well)
Datomic, an ACID database, has comparable features and more scalable performance compared to SQL-based C++ databases. Datomic is written in Clojure, a functional programming language (the highest class of high-level languages), and runs on the JVM.
In 2013, not a lot of problems require that level of control.
C++, Java and JavaScript each have their own different kinds of complexities.
The complexities of C++ almost always arise out of the extremely high degree of power and flexibility it offers programmers.
The complexities of Java end up having to do with hyper-"architected" class libraries, rife with excessively-used design patterns to the point of being incomprehensible.
JavaScript's complexity arises from core functionality that's missing (such as proper class-based OO, namespaces, and proper support for modularity), core functionality that's limited in practice (like its prototype-based OO), or core functionality that's unjustifiably broken (its comparison operators, semicolon insertion, its scoping, its type system, its awful standard library, among others).
Out of those three, JavaScript's complexities are by far the worst. The flaws are outright stupid to begin with, and there's nothing that can really be done to avoid them in many cases. At least Java programmers can choose not to create and use bloated class hierarchies, for instance. And at least C++'s complexity offers superbly powerful features and excellent performance, and at least it's understandable how and why this complexity thus arises.
C++'s complexity does not arise from its power. With the exception of templates, C is just as powerful but far more orthogonal and simple.
E.g. the difference between references and pointers. E.g. the phenomenally baroque template syntax. E.g. the phenomenally complex rules for multiple inheritance. The slicing problem. A syntax so hard to parse that only a couple of compilers get it right. Lots of features that just don't carry their weight (operator overloading).
It's a one-trick pony. I don't think that use is enough to carry its weight. Also, abusing it immediately in the streams library just encouraged people to find elaborate nonsensical uses for it.
Operator overloading is very useful in the context of generic code though (through templates).
I've programmed C++ for probably more than a decade now and I've been bitten by many of its features at some point, but operator overloading would rank right near the bottom of my burn list.
It can be abused, that's for sure, but so can many C features.
You seem quick to denigrate JavaScript, and programmers who willingly choose it, wherever you can. But with regard to the kind of complexity the OP discusses, which roughly translates to added cognitive load, it's not at all clear that JavaScript is worse than C++ -- and I have real experience with both.
To avoid dismissing C++'s complexity too easily as simply the price of flexibility and performance, let's review what C++'s complexity looks like in practice. For example, read this:
Memory stomping bugs; all the ways to initialize a variable of a primitive type; copy constructors; overloaded assignment operators; move semantics; smart pointers; value versus reference semantics; the list goes on. Against the cognitive load imposed by complete control over memory management, and lack of verifiable memory safety, JavaScript's warts seem quite minor to me. I imagine that many programmers who work in "managed" languages would agree.
Author of blog post here: Thanks for your lucid remarks. JavaScript certainly has its share of annoyances, but likening those to the complexity issues of C++ seems inappropriate to me.
"at least C++'s complexity offers superbly powerful features and excellent performance, and at least it's understandable how and why this complexity thus arises"
There are certain aspects of C++'s complexity that are inexcusable, especially in C++11. Why force programmers to figure out how to break cyclic references in their reference-counted smart pointers instead of just providing a real (but optional) garbage collected pointer type? Why force programmers to figure out how variables should be captured in a lexical closure instead of just always capturing by value (if you want capture by reference, why not just capture a reference by value?)? Why is there still no reliable way to report errors that occur in destructors? Why is error recovery still so problematic in C++, when other, older languages manage to provide useful facilities?
Most of C++'s problems are the result of the attempt to satisfy everyone's needs simultaneously. Rather than doing one thing well, C++ does many things poorly.
> Why force programmers to figure out how variables should be captured in a lexical closure instead of just always capturing by value (if you want capture by reference, why not just capture a reference by value?)?
I agree with most of the points but this one seems suspect to me. This would break code like (forgive the possibly wrong C++11 syntax):
auto sum = 0;
std::for_each(my_vector.begin(), my_vector.end(), [](x){ sum += x; })
We actually made the mistake in Rust of making closures capture by reference or by value depending on what type of closure it is, which confuses newcomers immensely. It's scheduled to be fixed by making all closures capture by reference (and if you want to capture by value, use an object instead).
Hair-splitting point: your example is not very good, because for_each is the wrong way to do what you are trying to do. The STL already has the right way to do it:
It is also worth pointing out that you can always thread state through accumulate/reduce/fold and achieve the effect of capturing references / having mutable objects in iteration constructs like for_each. That is how you see things being done in functional languages, and I find myself doing the same in Lisp with some regularity (even though you are generally dealing with references in Lisp; pure functions are usually more readable and less error-prone, at least in my experience).
In C++ there is another reason that capture-by-reference is a sensible default: there is no garbage collector. It is up to the programmer to ensure that the lexical environment remains valid for the lifetime of the closure, and so you can only capture locals by value if you return a closure. C++ programmers wind up having to capture smart pointer types by value in that situation anyway, which is basically what I said: if you want to capture by reference, create a reference (or pointer or smart pointer or whatever) and capture the value of the reference itself.
> In C++ there is another reason that capture-by-reference is a sensible default: there is no garbage collector. It is up to the programmer to ensure that the lexical environment remains valid for the lifetime of the closure, and so you can only capture locals by value if you return a closure.
That's true, and it's why we applied the same reasoning to Rust. But we ended up with a very confusing situation. The definition of "closure" in an imperative language really implies capturing by reference; virtually all imperative languages that aren't C++ (or Rust) work this way (in Java it's a little muddy because of the final restriction, which is much maligned because it violates this intuition).
A closure that captures by value just isn't a closure by most programmers' definitions, and I think that the language would be better off treating it as something syntactically different from a closure.
This is the working code (you have to declare your intention to capture according to the g++ and clang++ error output, and x requires a type in the parameter list):
auto sum = 0;
std::for_each(my_vector.begin(), my_vector.end(),
[&sum](unsigned x) { sum += x; });
That code is wrong. It's obvious why it's wrong when you think about the closure representation. Forcing a developer to understand the implementation is not inconsistent with the C++ mentality (see: everything having to do with inheritance and parameter passing).
> That code is wrong. It's obvious why it's wrong when you think about the closure representation.
I don't follow you. There's nothing inconsistent about allowing the closure representation to capture by reference. Indeed C++ allows this, if you write the appropriate capture clause.
Author of blog post here: I believe JavaScript is another example where the problems (not sure if I'd call it complexity, but problems they are) can be understood by looking at the history of the language. Brendan Eich himself has said: "I created JavaScript in 10 days in May 1995, under duress and conflicting management imperatives."
True, that could very well explain its horrid state in mid-1995. But that was 18 years ago, and things really haven't gotten any better since then.
The Harmony work is somewhat of a step in the right direction, but its impact will still be quite limited, assuming it ever does become a standard.
C++, on the other hand, has seen significant improvement over the past two decades. Much of its complexity can now be avoided when using a "Modern C++" approach, all without losing access to its powerful functionality in those cases when it truly is needed.
C++ has evolved in a way that makes it more usable. JavaScript has merely stagnated, without the community showing any real interest in cleaning up what's a very unjustifiably bad situation.
Being a recovering C++ programmer who wrote some very clever code while younger and very dumb code while older, if you think Java gets out of your way, perhaps you should try Scala or Clojure. Talk about getting out of the way: these languages are something of a wonderment. Every day I get to code in them is a good day. It's true, you won't have the "power" of C++, but if you need that there's C.
Author of blog post here: I haven't found the time to do much with Scala or Clojure, but my impression so far agrees with what you say. Thanks for reminding me: I need to seriously try out Scala and Clojure!
I've seen some programmers get fancy and try to overengineer what they are doing. It's the temptation of premature optimization. They just need to be forced into doing some maintenance and refactoring work for a while to realize how much fancy code really sucks.
To be fair, the C++ programmer can also use reference semantics via pointers. Granted, garbage collection is not built into the language, but if desired, we can always write Java-style code by using pointers for everything and using the Boehm collector. I don't know why we don't do this more often; most C++ codebases seem to have large sections that would benefit from such a style.
But when we want to use value semantics, for whatever reason, the more complicated value copy is, to some extent, just intrinsically complicated. In Java, Object.clone basically has all of the same problems -- you sort of know what you'd like it to do for any given object in any given context, but you've got to read code to figure out what is actually going to happen.
> I don't know why we don't do this more often; most C++ codebases seem to have large sections that would benefit from such a style.
Because you will sacrifice performance over Java's GC. For maximum performance you really want precise garbage collection on both stack and heap, with generational concurrent operation and a two space copying collector in the nursery. Boehm can't provide this, because the language is not designed for GC.
I don't mean a garbage collected stack per se (though since you asked, SML/NJ has this, as do several Schemes). What I mean is that precise stack maps or register tagging is used so that the roots can precisely be found on the stack.
Given what we know today, yes a new language is warranted.
The reason I've stuck with C/C++ is that everything I've ever written still works. Meanwhile my friends are playing with new languages every year (and all the books, classes, and conferences to go along with them). All the new syntaxes seem driven by corporations vying for mindshare (money), not the betterment of our industry.
Give us a real alternative that is community driven and standardized. It needs to compile everywhere, have the option of a great IDE, and have libraries that make sense. We don't need another cute or toy language (JavaScript's 10 day incubation that I'm stuck living with now). Until then ... C++ is what we have.
Rust has the most potential at being a true successor to C++. But it's not very usable at the moment (it suffers from far too much change within the language and standard libraries), and it will still take a lot of work before it is a production-grade language.
Author of blog post here: Interesting that you say that. I came really close to saying in the blog post that Rust could be the alternative to C++ that I am looking for. But then I decided that I know way too little about Rust at this point to make that statement.
The one thing with Rust is that it seems to me to offer so much fine control over memory ownership that it would start to develop its own set of "gotchas".
With that said I'm looking forward to seeing progress on the language as I think we've been needing a systems programming language where you can express ownership semantics as part of the language itself, and have the compiler check those for you.
D seems like a pretty good candidate as well (and even has some development effort from Andrei Alexandrescu, who has authored many good C++ programming volumes).
Totally agree. C++11 was a huge disappointment to me - rather than taking the opportunity to make changes that would help attract new C++ programmers, the committee decided to let the "C++ nerds" pile even more complexity onto their already overwrought libraries (cough boost cough).
C++ is a tool for power users, not a popularity contest.
Changes to the language should not be done just to attract more users. This is especially true if these new users would be today's PHP, JavaScript and Ruby users. The C++ community is much better off without these kinds of programmers.
It's much better for people to use C++ when they realize that they need the powerful functionality it offers, rather than dumbing down C++ in a way that'll make it attractive to less-skilled developers.
"It's much better for people to use C++ when they realize that they need the powerful functionality it offers"
Relevant anecdote: I know a number of C++ programmers who said they would use Haskell / Python / etc. to "prototype" a system, then switch to C++ when they needed what C++ has to offer. None of them made the switch back to C++ and their "prototypes" became finished products.
> None of them made the switch back to C++ and their "prototypes" became finished products.
This is because the vast majority of the time people's claims of performance requirements are complete BS. If you're writing true system software (an OS kernel, a database, an MQ system, etc) then shaving microseconds is going to matter because 10s, 100s, 1000s or even more applications built atop or using your code will be leveraging the minor gains you are creating. If you're building "Misc App X" then it's probably not going to matter.
Author of blog post here: That has been my experience as well. People tend to overestimate their performance requirements. Then, in the name of improving performance, they do all kinds of things that bring down their productivity and increase the complexity of their code (choosing C++ as the programming language is one of those things), but they never verify if any of those things really made enough of a difference, if any, to create value.
Another thing I noticed is that of those people who do have serious performance requirements (OS kernel etc.), many choose C over C++. We all know what the most famous example is. I have no experience in this area. I wonder if there is any data on the use of C vs. C++ vs. other languages like Go in that realm.
Another relevant anecdote: I personally prototyped a complex data transformation in Clojure in about a week. I then had to re-implement it in C++, which took about three more weeks, but the result was actually able to perform near the speed of the super-fast RAID HDD (~1 GB/s). The Clojure code (even after a lot of type hints etc.) never got past 60 MB/sec throughput, which would be a showstopper for a tool that has to process a lot of data online.
Languages like Haskell and Python being suitable for many applications in no way changes the fact that there are many other situations where they would not be suitable, and a language like C++ would be needed.
Keep in mind that there are also many cases that are the opposite of what you describe. I've worked with many systems that were initially implemented in a non-C++ high-level language, yet the developers had to go back and start incorporating C or C++ code at some point for various reasons, if not re-writing the entire system in C++.
The hybrid approach that Python allows for is extremely powerful, given how it makes calling down to C and C++ code from Python code quite trivial, or embedding a Python interpreter within existing C or C++ code.
That quote by Bjarne Stroustrup:
"I--and my colleagues--needed a language that could express program organization as could be done in Simula...but also write efficient low-level code, as could be done in C."
in Masterminds of Programming, p.2 (Interview with Bjarne Stroustrup) O'Reilly Media, Inc. (2009)
This is one of those articles that really resonated with me although I feel a little uncomfortable admitting it. I think C++ is a great language but what Becker says rings true to me. Of course I've always been of the "safe-subset" mindset, e.g. the project and team should dictate how fancy your C++ should be allowed to get.
I'd like to see a language with the power of C++ without some of its complexity. As noted elsewhere in the thread [1], there are a lot of features that just aren't necessary for the combination of performance and expressiveness C++ offers; on the other hand, while C is simple and can usually offer equal or better performance, its lack of templates hurts its expressiveness badly.
But Java or JavaScript? If your performance needs are satisfied by those languages, great, but you're not the target audience - not today, anyway, now that those languages exist and are reasonably fast. But if not, good luck trying to beat the JVM into doing what you want. [2]
There are plenty of use cases where Java is too complex and performance is even less of an issue, so Python and Ruby become viable. There are plenty of points up and down the perf/productivity gradient.
I didn't think anyone used JavaScript unless they had to because the code needed to run in the browser.
I've heard of it, but the only reason you would bother with server-side JavaScript is if you were writing for the browser and had a lot of browser-side JavaScript code to work with also. Does anyone choose JavaScript just because it is the best language from a productivity standpoint...for an application completely unrelated to the browser?
I've not used Node.js, but given that it runs on the other side of the HTTP network barrier (where you could have run Python or Go just as easily) it seems that the answer is yes, people really do use Node.js for non-browser applications.
C++ is hard and complicated, but it's very useful if you're strict with it. I agree with the author's post, though: there's a lot of inherent complexity that's non-trivial to solve and often leads developers to try to be clever rather than building strong software.
The problem with C++ isn't complexity but unnecessary complexity. As the author points out this is a cultural and not an 'inherent' problem (people who claim that a language like C++ must be complex are simply wrong).
C++ did not start out as a complex language, though. Quite the contrary. Just take a look at early C++ books. The turning point happened in 1995, when Stroustrup switched the C++ paradigm from object-oriented to the functional-inspired 'STL paradigm' (remember 'multi-paradigm'?). Afterwards, a clique of 'Boosters' (nomen est omen) took over the language development and bloated C++ into the current mess.
In evolutionary terms: C++, a versatile mammal that developed into a dinosaur.
> Trying to impress people with your code rather than with your software is a sign of immaturity and bad engineering.
That depends on what you think is impressive. A project with clear and simple code is impressive; needlessly complex, difficult-to-understand code is not.
Author of blog post here: Thanks, good point! What I meant was, "Trying to impress with the complexity of your code..." I'll change that in the blog post.
The wow effect in C++ — 'yeah, I can do this cool stuff I read about in another language also in C++; I just have to use this fancy lib, and it's really cool what you can reach with templates' — I always found stunning. Anyway, it is no good for efficiency, but having worked with C++ for some years makes you very efficient in simpler languages.
I see the following drawbacks of C++:
- The default is very often not what you should do. This is the price of backwards compatibility.
It makes things hard for the novice, but you get used to it.
- The incredibly long build times, especially when working with modern libs. How many hours did I spend waiting for the compiler, and how many optimizing my precompiled headers? This is a real productivity killer.
It also has historical reasons, but I don't understand why no proper module system gets developed.
- The lack of support for runtime errors. For 7 years I have not seen a crashing Java program that didn't give me a hint where it crashed, and I have actually seen far fewer total crashes. In C++ you do one small thing wrong, and the whole program ends with an abnormal program termination, and you don't have any clue what happened.