C++ and the Culture of Complexity (2013) (greaterthanzero.com)
211 points by contrarian_ on Jan 31, 2018 | 279 comments



The bullet list near the end closely matches my own experience with C++. There's an inordinate number of features creating an even worse profusion of edge cases where they interact. Then the "solutions" for those edge cases usually add even more complexity. Worse, they force programmers to coddle their compilers. The ratio between what a C++ compiler will accept and what it will produce sane code for is huge. That's why every C++ codebase I've ever seen is full of code to do things that shouldn't be necessary, and goes through all sorts of contortions to avoid doing things that should be OK, lest the compiler spew out an un-debuggable mess.

I'm perfectly happy managing my own complexity in C, or avoiding it entirely in Python. C++, on the other hand, seems to have been designed by compiler writers for their own enjoyment and/or job security. Every other "systems programming language" from D to Objective-C to Go to Rust to Nim presents a more coherent face to the programmer.


> C++, on the other hand, seems to have been designed by compiler writers for their own enjoyment and/or job security.

Being a C++ compiler writer (Zortech C++, Symantec C++, Digital Mars C++) I can assure you this is not true at all.

As to why C++ is so complex, my opinion is it is because it was designed a long time ago, what is considered better practice in designing languages has moved on, and C++ is unwilling to let go of the old decisions.

(There's always some major and vocal user who has built their entire store around some ancient feature.)

For example, why does C++ still support EBCDIC?


> Being a C++ compiler writer (Zortech C++, Symantec C++, Digital Mars C++) I can assure you this is not true at all.

Yeah, after I wrote that I realized it wasn't quite right. C++ is designed by compiler-writer wannabes. Architecture astronauts[1] on standards committees. They think they understand how compilers should work, and that adding support for this or that should be easy. "You just need to..." is their favorite opening. I see plenty of this in distributed storage, too. "It's so simple, I'd do it myself, but it's not worth my time, you go do what I said." The C++ designers seem hung up on an abstract model of machines and compilers that's a poor match for any real machine or compiler ever, and the actual compiler writers have to bridge the gap. Thank you for your efforts, which are Herculean in an Augean-stables kind of way.

[1] https://www.joelonsoftware.com/2001/04/21/dont-let-architect...


You can assure us based on what? If you have insider knowledge or particular credentials please share.

So far it looks like you're ranting.


http://www.walterbright.com/

> Walter Bright is the creator and first implementer of the D programming language and has implemented compilers for several other languages. He's an expert in all areas of compiler technology, including front ends, optimizers, code generation, interpreter engines and runtime libraries. Walter regularly writes articles about compilers and programming, is known for engaging and informative presentations, and provides training in compiler development techniques. Many are surprised to discover that Walter is also the creator of the wargame Empire, which is still popular today over 30 years after its debut.

Granted it's his own site, but uh, seems legit..?


Thanks, but my reply was for notacoward. Unfortunately I can't edit or delete it any more.


Oh, er, sorry about that.


  "I can assure you this is not true at all."
That was quoted, not written by the user you are replying to.


He assures us based on him being Walter Bright. That’s good enough for me.


Walter Bright's response is the parent of the comment in question. This one seems aimed at the top-level commenter's response, which advanced from the pained venting in the first comment (which I understand and can sympathize with) to a much more assertive tone.


From context, the comment I was responding to seemed attached to the wrong parent and seemed aimed at Walter's comment instead.


Yes, sorry for the confusion. On mobile the final sentence in the quote looked like it actually belonged to notacoward.

My comment was aimed at notacoward.


> why does C++ still support

Stop right there! There is plenty of evidence that removing features from a language is fatal to adoption. Both Perl and Python have suffered from this.

Specifically for trigraphs (apart from these, EBCDIC support doesn't affect compilers on other systems): IBM has a vote, and they voted against removing them: https://isocpp.org/files/papers/N4210.pdf


How has Python suffered from this? They broke things going from 2 to 3... and it was still the fastest-growing language in 2017. It clearly wasn't popular just because they removed features, but you can't say that removing things is fatal to adoption.


IIRC they did lose the final vote though, and trigraphs is one of the few features ever removed from C++.


I miss them - I enjoyed being able to write ??=include at the top of my files.


Hey, digraphs are still there, they are almost as much fun as trigraphs!
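
For anyone who never ran into them, here's a rough sketch of what these look like (my own example; the trigraph line only works pre-C++17 or with a vendor switch like GCC's -trigraphs):

    ??=include <iostream>    // trigraph spelling of #include (removed in C++17)
    %:include <cstdio>       // digraph spelling of #include (still legal today)

    int main() <%            // digraphs <% and %> stand for { and }
        std::cout << "still C++\n";
        return 0;
    %>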


THIS is the reason IMO too. C++ has taken on the very difficult task of remaining broadly compatible with C and with legacy features while at the same time continuously evolving over the decades, incorporating whatever was the state of the art at the time, without new features breaking old code. That is not an easy task without increasing complexity.


The book "Design and Evolution of C++" is quite interesting in that regard.

For all its warts, C++ only got adopted inside AT&T and later by almost every C compiler vendor, because it just fitted on their existing toolchains.

Even the lack of modules is related to that: C++ object files needed to look just like C ones.

Now that C++ is grown up and can live on its own, it needs to pay for the crazy days of its parties going out with C. :)


>The book "Design and Evolution of C++" is quite interesting in that regard.

I found that book very interesting in many regards. I had bought and read it several years ago (out of interest, though I have not worked on C++ professionally).

Stroustrup goes into a lot of details about the reasons for many design decisions in the language. While I'm aware that C++ has some issues, I was really impressed by the level of thought etc., that he shows in that book, when he talks about all the reasons for doing various things the way he did them.


Some might say the party never ended. :-)


> incorporating whatever was the state of the art at that time

State of the art or flavor of the month? For instance, the features from functional programming that C++ and Java recently (in the last decade) added weren't anything new. When functional programming started to become more popular was when their features started showing up in C++ and Java.

If people are concerned that your language is already too large, then adding elements from other programming paradigms because they're suddenly what's hot doesn't seem like a great idea. It feels like some languages are chasing the crowd, which can lead to a messy language ("OOP is all the rage now? Our language is all about OOP! Oh, functional is all the rage now? Well, we just nailed on some functional features!").


Eh, C++ can hardly be criticized for pandering to the flavor of the month.

For example, lambdas were only added in C++11, even though the STL has had a functional flavor since the late '90s and sorely needed lambda expressions. Only after people went out of their way to build lambdas on top of macros, expression templates and whatnot were they finally added to the language.
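
To illustrate (my own sketch, not from the article): the kind of thing that needed a hand-rolled function object before C++11 versus a lambda today:

    #include <algorithm>
    #include <vector>

    // Pre-C++11: a named function object just to pass one predicate
    struct GreaterThan {
        int threshold;
        explicit GreaterThan(int t) : threshold(t) {}
        bool operator()(int x) const { return x > threshold; }
    };

    int count_above_98(const std::vector<int>& v, int t) {
        return static_cast<int>(std::count_if(v.begin(), v.end(), GreaterThan(t)));
    }

    // C++11: the same thing inline with a lambda
    int count_above_11(const std::vector<int>& v, int t) {
        return static_cast<int>(std::count_if(v.begin(), v.end(),
                                              [t](int x) { return x > t; }));
    }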


It's true that isn't easy.

It's also true that it may not be necessary.

What makes a good language? One that randomly accumulates state of the art ideas about programming without breaking old code, or one that gets out of the way and allows requirements to be expressed reliably and relatively simply?

Of course C++ is used because it's fast. It's fine in limited domains like DSP.

But what is the rationale for a language that whimsically accumulates new features every couple of years, while failing to deal with basic challenges like memory management?

It's not as if it's ever going to reach a critical mass and turn into Haskell.


> For example, why does C++ still support EBCDIC?

Because there are companies like Unisys and IBM that want to sell C++ compilers to their mainframe customers.


IBM can support it as an extension in their C++ compiler. No need to burden the rest of the community with it. It's not like C++ compiler vendors are shy about adding extensions :-)

Despite C++ supporting EBCDIC, I seriously doubt the overwhelming majority of string processing C++ code will work with EBCDIC anyway, because the programmers probably never heard of it, let alone tested the software with it.


> No need to burden the rest of the community with it.

yeah, the problem is when IBM goes with big checks to national standards bodies and complains "muuuuhh we won't be able to assure that the systems we sold you in 1970 will still work in 20 years if the C++ standard removes support for EBCDIC", and then these standards bodies write strongly-worded letters to the ISO committee: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2009/n291...


I think that implying bribery here is dishonest. Six of the seven authors of the paper are IBM employees, and they are just voting according to their employer's will.


The Committee recently got rid of trigraphs (required for EBCDIC compatibility). IBM was strongly against the proposal but was finally outvoted. They do keep the functionality as a conforming extension in their compiler, but now that they have been removed from the standard, the language might evolve in ways that might make the extension non-conforming.


Trigraphs can be supported with the simple expedient of putting a filter in front of the compiler that converts the trigraph sequences to the corresponding character. It doesn't have to be in the compiler itself.

In fact, trigraphs were designed to operate this way.

That is, until the addition of raw string literals broke that.
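
A hypothetical illustration of the conflict (my sketch): a conforming compiler must leave the characters inside a raw string literal alone, but a textual pre-filter can't know that:

    // The programmer wants the literal three characters ? ? = inside the string:
    const char* pattern = R"(??=)";
    // A naive pre-pass that blindly rewrites trigraph sequences turns this into R"(#)",
    // silently changing the program's meaning, so trigraph handling can no longer be
    // a simple filter bolted on in front of the compiler.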


> For example, why does C++ still support EBCDIC?

You might imagine EBCDIC was a thing of the distant past, and you'd be wrong. As of at least 2013 there were still production systems using EBCDIC being actively developed. In COBOL. And not just at IBM.


What support does C++ have for EBCDIC?


https://en.wikipedia.org/wiki/EBCDIC#Compatibility_with_ASCI...

Most programmers manipulate characters as if they were ASCII, and that code will break if presented with EBCDIC. The C++ Standard is carefully worded to not assume ASCII encoding.
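
A small sketch of the kind of code I mean (my own example): the first version bakes in the ASCII assumption, the second goes through the standard library, which respects the execution character set:

    #include <cctype>

    // Common but wrong: assumes lowercase letters occupy one contiguous range, as in ASCII.
    // In EBCDIC 'a'..'z' are not contiguous (there are gaps after 'i' and 'r'),
    // so this misclassifies several non-letter code points as letters.
    bool is_lower_ascii_assumption(char c) {
        return c >= 'a' && c <= 'z';
    }

    // Portable: works regardless of the execution character set.
    bool is_lower_portable(char c) {
        return std::islower(static_cast<unsigned char>(c)) != 0;
    }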

C++ presumably supports other encodings, but I've never heard of another one that will work. There's RADIX50, but that will never work with C++, for the simple reason that there aren't enough characters in it.

https://en.wikipedia.org/wiki/DEC_Radix-50

You can also see the EBCDIC support in oddities like the digraph support.


Sure, it's a large and complex language that takes time to master. But I'm interested to hear examples of what you call 'profusion of edge cases'.

> The ratio between what a C++ compiler will accept and what it will produce sane code for is huge.

As is the case for any programming language.

> C++, on the other hand, seems to have been designed by compiler writers for their own enjoyment and/or job security.

C++ is designed by its standards committee... If you know anything about the struggles compiler writers have had with implementing the standard, you'd know the standards committee definitely does not consist of compiler writers! It's really cheap to summarize their efforts as motivated by advancing their own job security, if you ask me... I recommend attending a meeting or reading some proceedings to convince yourself otherwise.


> I'm interested to hear examples of what you call 'profusion of edge cases'.

For one example, look at the profusion of cases in type deduction, depending on whether one is dealing with a value, an array, a pointer, or one of two different references, and whether qualified as const or volatile.

One might argue that these cases are too prevalent to be called 'corner' cases, but that doesn't exactly help! In C++11 and C++14 there was the indisputable corner case where auto and template type deduction differed with regard to braced initializer lists, though in a rare case of backwards-compatibility-breaking, it has now been fixed [1].

Scott Meyers, for one, has given examples of particular cases in the use of braced initialization, especially in the context of templates, that can be considered corner cases in that they are probably not likely to arise very often in most of the C++ code that is being written for applications.

[1] https://isocpp.org/blog/2014/03/n3922

[2] Scott Meyers, 'Effective Modern C++', pp 52-58.
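
A quick sketch of the auto/braced-initializer corner case (my own example; pre-N3922 behavior per [1]):

    #include <initializer_list>

    template <typename T>
    void f(T param);

    int main() {
        auto a = 27;    // int
        auto b{27};     // C++11/14 as published: std::initializer_list<int>; after N3922: int
        auto c = {27};  // std::initializer_list<int> either way
        // f({27});     // error: template type deduction does not work for braced init lists
    }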


One fairly common example of a bad C++ design edge case is the “Most Vexing Parse” problem [0].

I frequently find that constructing objects using the () syntax produces parse errors because the compiler is expecting a function declaration. Then replacing the () with {} just fixes it. It's really frustrating that bad design like this is just maintained as a stumbling block for new users, instead of being fixed.

[0] https://en.wikipedia.org/wiki/Most_vexing_parse
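
A minimal sketch of what I mean (hypothetical Widget type):

    #include <string>

    struct Widget {
        Widget() {}
        explicit Widget(std::string name) { (void)name; }
    };

    int main() {
        Widget w1();               // most vexing parse: declares a function returning Widget
        Widget w2(std::string());  // also a function declaration, not an object
        Widget w3{};               // braces: unambiguously default-constructs an object
        Widget w4{std::string()};  // braces: constructs a Widget from a temporary string
    }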


> C++ is designed by its standards committee...

When it comes to design, C++ is a good example of why having a benevolent dictator is better than a committee. I still think it's a huge mistake to not have a standard ABI and rely on C's ABI.


Definitely agree about the ABI. In C# it's a pleasure to write libraries for others and use other libraries whereas in C++ it's almost always a pain.


Standard ABI only makes sense if you are targeting a single system (.NET for C#). That's not the case for C++.


It would definitely make interop between C# and C++ much easier. It would make interop between C++ and any language much easier. The only way to do it today is to extern "C" everything. It's ugly.
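
For what it's worth, the extern "C" dance looks something like this (a rough sketch with made-up names):

    // C++ implementation hidden behind a C-compatible surface,
    // so any language with a C FFI can call it.
    #include <vector>

    struct Accumulator {                 // real logic lives in C++
        std::vector<double> values;
    };

    extern "C" {                         // names below get C linkage: no mangling, plain C ABI
        Accumulator* acc_create() { return new Accumulator(); }
        void acc_add(Accumulator* a, double v) { a->values.push_back(v); }
        void acc_destroy(Accumulator* a) { delete a; }
    }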


How exactly would a library compiled for 32-bit big-endian POWER interoperate with a program written for 64-bit little-endian ARM?


This doesn't work with C either, or does it? C has a reasonable ABI, so maybe we should get to the point where exchanging C++ libraries is as easy as doing it with C.


The point is there is no standard C++ (or even C) ABI because it is not possible to standardize one. At best you can standardize a platform-specific ABI. Which is in fact what happens in practice on most platforms, but it is up to the platform maintainers, and the C++ standard itself has nothing to say about it.

Regarding the C ABIs, they are the lingua franca for interchange between languages because, a) as C is semantically poor, it is the minimum common denominator, and b) it is often the OS ABI.


> I still think it's a huge mistake to not have a standard ABI and rely on C's ABI.

And rust is going down the same path. Looks like c will reign supreme for shared libraries for the foreseeable future.


Disclaimer: I don't program in C++ day to day, so maybe my experience is atypical.

Moves and rvalue references (and whatever a prvalue is) and even RVO scare me. They make me want to pass pointers around, because at least I know for sure what'll happen then. (And, funnily enough, C++ seems worse than dynamic languages for this -- more magic around function calls and returns than C or Python or JavaScript.)
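
To make concrete the kind of magic I mean (a rough sketch, as far as I understand the rules):

    #include <string>
    #include <utility>
    #include <vector>

    std::vector<std::string> make_names() {
        std::vector<std::string> names;
        names.push_back("alice");
        return names;   // copy? move? elided entirely (NRVO)? the rules deciding which are subtle
    }

    int main() {
        std::vector<std::string> a = make_names();   // no copy in practice, but "why" takes pages of spec
        std::vector<std::string> b = std::move(a);   // a is left in a valid but unspecified state
    }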


> scare me ... because at least I know for sure what'll happen then.

Which is just because you've already taken the time to learn how pointers behave. Similarly, if you'd just take the time to learn the basics of rvalues, moving, RVO, ..., you won't be scared by them anymore. Might take longer than pointers, sure, but it's worth it.


Maybe this is just my experience, but it took me far longer to understand the subtleties of move semantics, rvalues, and RVO than to understand pointers and references in C++. And this is not even getting into “universal references” (which I don’t have a comfortable understanding of either)
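
The canonical example, as far as I understand it (a minimal sketch; names are made up):

    #include <utility>

    void target(int&)  {}   // called for lvalues
    void target(int&&) {}   // called for rvalues

    template <typename T>
    void wrapper(T&& arg) {            // T&& on a deduced T is a forwarding ("universal") reference
        target(std::forward<T>(arg));  // preserves the lvalue/rvalue-ness of the caller's argument
    }

    int main() {
        int x = 0;
        wrapper(x);   // T = int&,  arg collapses to int&  -> calls target(int&)
        wrapper(42);  // T = int,   arg is int&&           -> calls target(int&&)
    }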


There's a YouTube video [1] of a great talk by Scott Meyers: Effective Modern C++ (part 1). He covers universal references and how they relate to other kinds of references in either that one or Part 2 [2]. I found the videos very helpful.

Edit - I may have the wrong videos linked. It could be his talk on Universal References [3] that I'm thinking of. It's been a while.

[1] https://www.youtube.com/watch?v=fhM24zs1MFA

[2] https://www.youtube.com/watch?v=-7qwpuA3EpU

[3] https://channel9.msdn.com/Shows/Going+Deep/Cpp-and-Beyond-20...


They really aren't that bad... in fact, they're pretty useful. Source: your everyday C++ programmer


They’re great, really. Yes, it takes effort to master, but then you can generate faster, safer, generic code with much less effort.

Source: Another everyday C++ programmer. There are dozens of us. Dozens!


It's unfortunate when your codebase is C++03 only and intends to remain that way for compatibility with other vendors. Yay for consumer electronics!


Out of curiosity what breaks compatibility?


My understanding is that there are partner vendors who have old toolchains set up only for C++03 and thus would have to upgrade their toolchains to build our API if we upgraded.


C++ is a mess but it's one of the few languages that gives you low-level control of memory and fast code with zero or near zero cost abstractions. So for a certain class of application it's still the best choice. Music software, for example, is pretty much exclusively written in C++. I don't enjoy C++ the language very much but you can build some very cool things with it.

Personally I'm hoping Rust displaces it from most of these remaining niches but even if it does it will probably happen slowly.


> it's one of the few languages that gives you low-level control of memory and fast code with zero or near zero cost abstractions. So for a certain class of application it's still the best choice.

For a certain class of program, you mean. For applications specifically, the advantages you mention are barely relevant. Usually only small parts of a whole application need low-level control of memory etc. Those can be written in C, with the rest written in a cleaner higher-level language that interfaces easily with C (there are many such).

C++ is proof that a single language can't satisfy all needs. It tries to do the low-level stuff as well as C, and fails because it can't produce libraries that are as easy to link to. Then it tries to do all the high-level stuff as well as other languages, and utterly fails because it can't get away from its low-level roots. D, Rust, and Nim all make better compromises that suit just about all C++ use cases. Go and Pony do a better job for some but not others. I won't say there's no room for C++ in a sane computing world, but its legitimate niche is very small indeed.


>For a certain class of program, you mean. For applications specifically, the advantages you mention are barely relevant. Usually only small parts of a whole application need low-level control of memory etc.

I'd say Rust has a similar level of control. I just rewrote our longest build step (37 minutes on a normal build) in Rust. By having control over when things are allocated I could get it down to about 20 seconds. The previous software is written in Java.

If you want speed you need to choose C, C++ or Rust. If you want it safely, then Rust. I'd argue in my case that Rust was probably the fastest choice. As I probably would have copied more in C/C++. In Rust I can trust that my memory is intact.

I'd also choose C over C++ though. I find it's a much more manageable language. I never found it hard to make the right abstractions in it, except for maybe a lack of generics (which C11 kind of solves).


We are starting to see that trend on GUI frameworks and game engines.

C++ is still there, doing what it does best, driving pixels around with maximum performance.

But the upper layer, exposed to app developers and game designers, is done in another language.


Very very slowly.

Only now is embedded development starting to accept C++, and C still rules there anyway.

Which means it took about 20 years to reach this point.

And still Rust will need to go through the same certification processes that C, C++, Ada and Java enjoy for such scenarios.


I used C++ on an embedded 68332 CPU (25 MHz) with 4 MB of SRAM in ~1996 for a DNA sequencer machine.

~100+ classes, single inheritance, 1-, 2-, and 3-axis motor controls, CCD camera, laser, serial com channel, scripting engines, etc.

No templates, no virtual functions. Worked very well at that time.

The compiler setup at that time was AT&T cfront generating C from the C++ code, running on a Mac, with an embedded C cross-compiler generating the target code.

The classes were shared within the company across different machines (biotech robots) to maximize code reuse.


Very interesting, thanks.

I got introduced to C++ via Turbo C++ 1.0 for MS-DOS, in 1993.

So if it was good enough for 640 KB max, with 64 KB executables, it shouldn't be an issue on most microcontrollers, but the biggest issue is the existing dev culture.


Forgot to mention a couple of other design decisions:

No new/delete operators in any of the C++ code. ISR code was also in C++.

All objects were statically allocated; with 4 MB of SRAM, one can easily see why. It lets the developer tightly control memory usage.

All regression tests were automated. There were test scripts for all functional HW/SW components. We found one bug that only triggered after a 24.9-day time frame (a 31-bit timer counter wrapping around on a 10-millisecond timer call); from that point on, all firmware releases had to pass three months of continuous testing on multiple systems before release.


Agree with your point: dev culture matters a lot. This was a Mac (PowerPC Mac) based dev house. C++ was the big thing on the SW (Mac) side of the dev team.


In my career, I have worked on 15+ projects, most of them embedded systems projects. Only two were C++, and on the other one only a small subset was C++. On this project, 90% of the code base running on the target was C++, with a full OO design, and 80% of the classes were reused from other projects.


Thanks for sharing.

I eventually moved into Java/.NET stacks, but I still follow C++, as my go-to tool when they need some unmanaged help.


Very interesting. Was the build environment all inside MPW?


I don't remember. Long time ago.... Likely just steps inside makefile.

Not really a big fan of the Mac at that time - it was before Steve Jobs came back and merged the OS with NeXT? The Mac was very unstable. I remember it crashing 3-5 times a day during my daily tasks - editing, cross-compiling.


The C++ you wrote in 1996 is basically a different language from the C++ that you're encouraged to write today.


> The C++ you wrote in 1996 is basically a different language from the C++ that you're encouraged to write today.

We can differentiate between (a) what the language spec says, and (b) what various individuals advocate.

The code we write is generally constrained by (a), but we can usually substitute our own best judgment for (b).*

* Except when the people mentioned in group (b) have sway over the C++ standard.


Sounds like a sweet spot kind of project for C++.


There have been recent pushes to use Rust in embedded systems, but I agree it'll take a long time. Check out this blog [1].

[1] - http://blog.japaric.io/


Thanks.


What do you mean by embedded systems?

Here's an article from 1998 that gives an example of C++ being used in military avionics systems:

http://www.cs.wustl.edu/~schmidt/TAO-boeing.html

It's been used in civilian avionics for a long time, too. Not that it's necessarily the best choice in those environments, but "starting to accept" seems like a mischaracterization.


I mean the embedded systems where Assembly and C89 still rule, and that is very hard to change, because the problem of adopting anything else is cultural.

Basically, while there are projects being done in C++, and many companies are finally migrating from C to C++, the large majority is sticking with C.

If you prefer to listen to someone actually relevant in the C++ embedded community, here is what Dan Saks has to say about it.

https://youtu.be/D7Sd8A6_fYU

http://cppcast.com/2016/10/dan-saks/


The embedded world has accepted C++ for all but the smallest microcontrollers. I was using C++ for embedded development 6 years ago and I was late to the party. All that prevented adoption by me before that was the cost of RAM.


Depends on the embedded system. I've worked on embedded systems running embedded Linux, with megabytes to gigabytes of storage, for the last dozen years, using C++.

If you're still dealing with an 8051, I'll agree that you're less likely to have moved to C++.

One other thought: Embedded toolchains are typically set at the start of the (original) project. I don't remember ever seeing a compiler upgrade happen in an existing embedded code base. And some embedded code bases live for decades.


I fully agree with you, and given my preference for type-safe languages, I find it interesting that other languages are also an alternative, depending on the use-case constraints.

However, from a few CppCon and Meeting C++ talks, it appears that in such scenarios moving beyond C is more of a cultural issue than a technological one.


>> Only now is embedded development starting to accept C++

Well, largely on Raspberry Pi kind of platforms, which aren't even embedded systems. More like miniaturized desktops.

Then there is just C and only C. C's dominance there isn't going to be displaced anytime soon, if ever.


During 2017, BMW (among other car companies) and Sony migrated from C to C++ as their main language for embedded development.

https://archive.fosdem.org/2017/schedule/event/succes_failur...

https://www.autosar.org/

"Developing Audio Products with Cortex-M3/NuttX/C++11"

https://www.youtube.com/watch?v=T8fLjWyI5nI

Unless you consider their devices Raspberry Pi kind of platforms.

Of course, with companies like Microchip still focusing on Assembly and C89, C is going to stay around for a very long time.

What is done in C with macros can be done more safely, and with better optimization, in C++ with constexpr and templates; the problem is changing the culture of those companies.
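
A tiny sketch of what I mean (my own example, not from the talks linked above):

    // C style: no type checking, and the argument may be evaluated twice
    #define SQUARE(x) ((x) * (x))

    // C++ style: type-checked, evaluated at compile time where possible
    constexpr int square(int x) { return x * x; }

    static_assert(square(4) == 16, "computed at compile time");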


> Well largely on RaspberryPi kind of platforms, which aren't even even embedded systems

no, for instance Arduino uses C++ as a main language. And an arduino pico has 2.5kb of ram... that's firmly in the "embedded" scale of things.


I feel like embedded development should just avoid C++ and go with a managed language. I was hopeful about Go, but they kinda wrecked it for embedded.

The thing with embedded is you have two cases: hard real-time, and just don't care.


Embedded devs always care. Managed languages with non-deterministic GC will never be popular there.

What embedded devs end up needing are:

- ability to shove bits directly in and out of a particular memory address

- ability to write interrupt handlers with low and bounded latency (i.e. emit a different function pre/postamble)

- global mutable state


They go with Java in very specific cases, but you need big bucks to pay for something like PTC Perc Ultra, JamaicaVM or WebSphere Real Time.

Then there is microEJ as well, but their model is that performance-critical code is written in C anyway and exposed to Java.

Also, depending on how embedded one considers mobile phone hardware to be, there are Android and Android Things.

Then there are some OEMs still selling Basic, Pascal and Oberon compilers.

But in what concerns embedded domains where only Assembly and C get used, currently C++ seems to be the only alternative that they care to adopt, especially given that it already enjoys the same kind of safety regulations (MISRA, DOJ and similar).

And not everyone, mind you; C++ conferences keep having sessions on how to advocate C++ to those devs.


I've seen some people using MicroPython combined with C in embedded devices. Or plain Python on embedded Linux doohickeys. Company I work for has a few embedded Linux devices running Python.


Robotics is pretty much exclusively in C++ as well.


I agree that C++ is bad, but I actually find C much worse for projects bigger than a few thousand lines. Reasons for this:

* lack of namespaces - all names with long prefix look the same to me

* just text based macros

* no generics

* error handling usually based on int constants and output function parameters - in big projects it is hard to use them consistently without any type checking

* no polymorphism

* ton of undefined behavior (almost the same as C++)


All your complaints about C are valid, except I'd say defined but stupid behavior buried somewhere in a gargantuan language spec is effectively the same as undefined behavior.

The difference is, C lets you control how much baggage you carry along and C++ doesn't. If I want a higher-level abstraction in C, I can usually implement it pretty well using inlines and macros, void and function pointers. Will it be pretty? Hell no. Will it have all of the syntactic sugar that the C++ version does? Nope. But it will work and be usable and most importantly the complexity/risk that it carries along will be exactly what I asked for (because I know how to manage it). Using a feature in C++ means carrying along all of the baggage from its dependencies and interactions with every other feature.

If programming languages were cars, it's like the dealer saying I can't add one option to the base model. No, I have to buy the top-of-the-line model with dozens of features I don't actually care about and will never use, costing far more and oh by the way that model has door handles that will cut you. That's about when I go to the Python dealership down the street, or just stick with good old C.


> Using a feature in C++ means carrying along all of the baggage from its dependencies and interactions with every other feature.

Only if you use every other feature. Don't do that. Use the features you need (and understand well), not every feature. It actually becomes much like you say C is, except that you don't have to write the features.


> Only if you use every other feature.

Not true at all. A lot of the features have either an interface or an implementation driven by the possibility of combination with some other feature. The cognitive and/or performance costs remain even if that other feature isn't used. For example, it's easy to get mired in writing extra constructors/destructors and virtual hooha just because someone using your class might also use some feature besides the one you used yourself. I've seen that happen on many projects. The only way to avoid it seems to be to abandon most of what makes C++ different than C, at which point it would usually make more sense to start with C and add what you need.


> For example, it's easy to get mired in writing extra constructors/destructors and virtual hooha just because someone using your class might also use some feature besides the one you used yourself.

I don't think I've ever seen that. (In a library, sure. In application code, no.) If you know it's going to be needed (or you know it's very likely), sure. If not, every place I've worked in added the extra constructors when needed, and only when needed.

Extra destructors? Other than empty destructors, I don't think it's possible to create an "extra" destructor, because each class can only have one. (And, in the case of virtual destructors, adding them is good practice. But that's not "combination of features", it's part of the deal you sign up for when you start using polymorphism. (Though I guess you could describe it as the combination of destructors and polymorphism, which is true, but it's simple enough I have a hard time regarding it as out-of-control complexity explosion.))


> In a library, sure. In application code, no.

It has been a long time since I worked on an application so trivial that parts of it weren't broken out into separate libraries. Most often, those libraries end up being maintained by people other than their users, and the users always end up using the library in unexpected ways or contexts so these protective measures always become necessary.

Perhaps more to the point, I don't think there should even be multiple constructors, each called in different situations that it takes several pages to describe. This complexity mostly exists because the people who defined C++ don't seem to understand the significance of value vs. reference semantics, as the OP also noted. Move semantics and rvalue references represent a tacit admission that the previous semantics were broken, but they just introduce even more non-intuitive syntax and another few pages to describe what happens in which situations. That's exactly the kind of spurious complexity that makes compiler writers want to go on killing sprees and folks like me want to avoid the whole morass.


Never really had a problem with the lack of namespaces, actually (and not due to insanely prefixed names). If everything is properly separated there just don't seem to be enough chances for name clashes: headers should have only things which really are publicly needed, sources should only include headers they need, and one of the design targets should be high cohesion/low coupling. Though it probably depends on the project.


> no polymorphism

C has plenty of polymorphism. The compiler just doesn't do it for you. In fact, C++ started out as C-with-classes since it was a pain to keep recreating the OOP-in-C boilerplate over and over. Besides, there are more kinds of polymorphism than virtual methods. You can be polymorphic with an array of function pointers.
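
A minimal sketch of that style (a hand-rolled vtable via function pointers; this particular snippet happens to compile as both C and C++):

    #include <stdio.h>

    struct shape_ops { double (*area)(const void *self); };
    struct circle { const struct shape_ops *ops; double r; };

    static double circle_area(const void *self) {
        const struct circle *c = (const struct circle *)self;
        return 3.14159265358979 * c->r * c->r;
    }

    static const struct shape_ops circle_ops = { circle_area };

    int main(void) {
        struct circle c = { &circle_ops, 2.0 };
        printf("%f\n", c.ops->area(&c));   // dispatch through the function pointer table
        return 0;
    }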


>"C++, on the other hand, seems to have been designed by compiler writers for their own enjoyment and/or job security."

It's not what a compiler writer would want by any stretch. Having helped write a C++ compiler, I can attest to that. I will agree that C is a nice language. It does exactly as you tell it.

The complexity, I would say, is what you get when you "design by committee".

Web standards have a similar problem: they keep growing and getting more complex. Try to write a browser from the ground up these days.


Strongly agree on the inordinate number of features point. Which other mainstream OO language supports dynamic & static dispatch, single & multiple inheritance, interface inheritance via abstract classes, generic types and functions via templates, lambdas, type inference, and operator overloading? Roll that up with C compatibility, a 30 year history of evolution, and the complexity of commonly adopted libraries like STL & Boost, and there's simply no avoiding it. There's a lot of C++ out there, and I'm confident that will pay the bills for years to come for this 50 something 25 year C++ veteran!


I think that's spot on. I coded quite a lot of C++ in the early 2000s. Now I'm considering jumping on board again. C++11 and its successors are almost a different language.

More importantly, for my niche I don't see anything that can readily replace C++. Rust has very little support for scientific computing. Julia is great, and will replace my high level statistical inference code, but it's not designed to let me design low-level close-to-the-metal data structures. Scala has memory issues, which will be hopefully less problematic once value types are implemented in the JVM. OCaml and F# look interesting, I haven't evaluated these carefully.


If you are doing scientific computing, have you considered Fortran? The gfortran (GCC Fortran) compiler supports Fortran 2003, which has object-oriented features, and since Fortran 90 the language has had array operations and syntax similar to Matlab or Python with numpy.


I'm not a big fan of Fortran for my particular domain, which includes lots of strings (biological sequences). Here Fortran is not as quick. See for example the k-nucleotide benchmark:

https://benchmarksgame.alioth.debian.org/u64q/performance.ph...


I worked with a customer that was using .NET for DNA sequencing.

Not sure what tricks they pulled off regarding unsafe code and parallelism, but it was fully done in C#.


What are Scala's memory issues?


C++ is a language with a lot of features, and not all of them should be used in every code base. It's quite possible to write simple, portable, and relatively clean C++ code if you use only those features you need.

It's not the prettiest language by any stretch, but it's quite capable and fast and has excellent support across just about every platform.


I saw Stroustrup give a talk and he said exactly this - C++ is a toolbox of different paradigms that aren't meant to be combined. If you avoid frameworks that impose a particular paradigm and/or shield the parts of the codebase with different paradigms (for example, when using Qt you really should have your "engine" running alongside the Qt code rather than embedding the code in the UI - extreme example, I know, but the same idea), you'll have a grab bag of different approaches for the specific problem you're having, for zero cost. When people talk of "sticking to a subset" they are unknowingly doing exactly that (especially since everyone's subset is slightly different).

There might be a "culture of complexity" in the community, but to remove the conflicting paradigms from C++ is to destroy what makes C++ useful. I don't believe C++ is complex in its DNA, but it is highly experimental, overwhelming to newcomers and experienced developers alike (since you have to truly understand any feature before using it), and easily misunderstood. It requires more strictness in design and implementation than other languages, and it isn't my first choice for anything that doesn't require high performance. But since I'm in game development and audio synthesis, it's often my only choice, since nothing else hits that sweet spot of abstraction and performance.


> If you avoid frameworks that impose a particular paradigm and/or shield the parts of the codebase with different paradigms...you'll have a grab bag of different approaches for the specific problem you're having for zero cost.

Avoiding frameworks and libraries which use unwanted language features and paradigms is very hard. Once these libraries are integrated, it is nearly impossible to restrict a team from using said features elsewhere in the project. Every C++ developer has pet features and features they hate and will never use, but these sets are rarely compatible between developers.


just a contrary opinion. I read the c++ book in the very early 90s. someone told me it was the future of programming.

every single c++ shop I've worked at since has said, 'well, yes the language is a mess, but if you stick to a well controlled subset, its really pretty good'

and all of those shops, without exception, have dragged in every last weird and contradictory feature of what is a really enormous language. so I guess the 'sane subset' argument is ok in theory, but really not in practice.

i've actually seen some really clever mixin/metaprogramming with templates. it was a total disaster, and in a different environment it could be a really great approach. i could never understand it in complete detail, but if C is a .38 that you can use to blow your foot off, C++ is a 20ga shotgun with a pound of c4 strapped to your head.


> and all of those shops, without exception, have dragged in every last weird and contradictory feature of what is a really enormous language. so I guess the 'sane subset' argument is ok in theory, but really not in practice.

To be fair, this happens with every language I've been associated with, even C. Just look at those people who do metaprogramming with the C preprocessor. It's madness!

I used to do that too, as a young programmer. It took about 10 years to grind that out of me. One advantage of us older programmers is we show how clever we are by writing amazingly simple and understandable code. :-)


> To be fair, this happens with every language I've been associated with, even C. Just look at those people who do metaprogramming with the C preprocessor. It's madness!

To be fair, C++ metaprogramming with templates makes C preprocessor look like an insignificant ant in front of a truck.


"dragged in every last weird and contradictory feature of what is a really enormous language"

This happens with every language that has a lot of features. You have to try them before you can form an opinion.


> This happens with every language that has a lot of features.

Seems like the key is to not have a lot of features, then.


But languages with not a lot of features trap you unless you never need that feature (or unless you're fine with implementing that feature yourself in what the language gives you).


Then we should be using FORTRAN 77 forever?


>C++ is a language with a lot of features, and not all of them should be used in every code base.

That's not up to the individual coder coming later to a codebase. Or wanting to use a library that enforces those features, etc.

And the design of the features can impact how other features are implemented, even if one doesn't use them.


"Sticking to the features you need" is pretty hard to implement. Either you end up with a stale code base that doesn't use new and useful features. Or you start using new features and only later realize the headaches they may cause. Neither situation is desirable.


Regarding D, it may look that way, but any big code base ends up leaning on metaprogramming (mixins) and templates.

They have a ton of warts regarding annotations, usually worked around by using templates, because in that case they are inferred.

The semantics of shared are still being worked on.

The way const/immutable works makes some devs just give up and remove them from their code.

I can equally tell some Objective-C issues.

Yes, in general they are better than C++, but not without their own warts.


> The way const/immuatable works, makes some devs just give up and remove them from their code.

That's true. The thing with D const/immutable is they are transitive, and the compiler means it. It's not a suggestion.

The advantage of enforced transitive const/immutable is, of course, that it's foundational if you want to do functional-style programming.


Not to do the c vs c++ thing, I too prefer c. For over 20 years actually I have preferred c to c++; all the while learning and enjoying other languages like rust. c still is missing some crucial stuff in my opinion-- there are times where I wish generics existed, and when you tie that to a trait system like in rust it's really a pity to not have something like that in c. Couple that with really great package managers for libraries in other languages, and a coherent way to use the libraries, makes one long for more when working in c; it's really missing great and easy code sharing that so many other languages have at their front and center.


> there are times where I wish generics existed

They kinda do...


> C++, on the other hand, seems to have been designed by compiler writers for their own enjoyment and/or job security.

If that were the case, the Edison Design Group (https://en.wikipedia.org/wiki/Edison_Design_Group) wouldn't exist. It exists because compiler writers don't want to have to deal with parsing C++.

(Then there's Dinkumware, which serves the same purpose for library functions.)


How is Rust less complicated than C++? I don't use it, but from what I read it seems to be even more complicated, and getting even more so with the myriad of features they are adding each release.


> How is Rust less complicated than C++?

In pretty much all senses of the word?

> getting even more so with the myriad of features they are adding each release.

It's far from adding a "myriad of features" with each release, and most of those it adds are library stuff, see for 1.23: https://github.com/rust-lang/rust/blob/master/RELEASES.md#ve...

And with respect to non-library features currently in-flight, by and large they are "playing catch-up" to C/C++ for compatibility or to fulfil use cases for which the language is not currently convenient or sufficient e.g. generics-over-values, const functions, allocators.

Rust has an upfront feeling of complexity in lifetimes and the borrow checker, but here's the ugly truth: pointer lifetime issues don't exist any less in C++, the only difference is the compiler doesn't help you with them.
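
A small illustration of what I mean (my own sketch; newer compilers may warn, but historically this sails through):

    #include <string>

    const std::string& shortest(const std::string& a, const std::string& b) {
        return a.size() < b.size() ? a : b;
    }

    int main() {
        const std::string& s = shortest("hi", "hello");
        // Both arguments were temporaries that died at the end of the previous statement,
        // so s is already dangling; using it would be undefined behaviour. This typically
        // compiles without a diagnostic, whereas the equivalent borrow is a hard error in Rust.
        (void)s;
    }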


> pointer lifetime issues don't exist any less in C++, the only difference is the compiler doesn't help you with them.

There's also the case though where the Rust compiler isn't helping you, but is just wrong.

Most of them are going to be fixed by non-lexical lifetimes soon, though.


> There's also the case though where the Rust compiler isn't helping you, but is just wrong.

Bugs aside, compilers are not wrong, they may be too limited for what you're trying to do. Which is a different situation.


Being too limited for the intended purpose is still being wrong on some level. Not a bug, but still.

Granted, sometimes you don't care: stuff like `true ? 1 : "foo"` is conceptually of type int and has value 1, but the compiler will reject it because of a type mismatch in the conditional - and you don't care, because the code smell is too big and obvious to ignore.

Sometimes however you do care, if only a little: the value restriction in ML, for instance, prevents some functions from being polymorphic because some side effect might break the type system. This forces you to eta-expand the function definition (no partial application, you need to write the arguments explicitly), which is a bit of a hassle in some cases.


> Being too limited for the intended purpose is still being wrong on some level. Not a bug, but still.

By that criterion, every single statically typed language is "wrong on some level"; you'll always find something you want to do/express which a specific language's compiler will not accept. That's not very helpful.

> Sometimes however you do care

I'm not saying people should not care, caring about lexical lifetimes being a pain in the ass is perfectly sensible (there's a reason why non-lexical lifetimes are being implemented after all). I'm saying there's a gulf between "the compiler does not allow X" and "the compiler is wrong", and lexical lifetimes are the former.


Well, most compiler bugs are cases where the compiler "is wrong", and "being too limited for what I'm trying to do" also sounds like a bug. So this reads to me as "aside from when they are wrong, compilers are not wrong" ;)


> Well, most compiler bugs are cases where the compile "is wrong"

Most compiler bugs are situations where the compiler allows stuff it should not or generates incorrect code.

> "being too limited for what I'm trying to do" also sounds like a bug. So this reads to me as "aside from when they are wrong, compilers are not wrong" ;)

Most every compiler is "too limited" to do some things, that is a big reason why dynamically typed languages are still a thing. For instance I can't tell Java to just take any object with an attribute "foo" despite it being the only thing I want to use (and I don't even care about its type) — bypassing the compiler via reflection aside. Do you think that's a bug in the compiler?


Hm ... I would say that's not a limitation of the compiler but of the language itself (no duck typing). Definitely not a bug though.


Here's one example:

In C++, creating an instance of a class is fantastically complicated. The class must be initialized, and there is a zoo of different initialization forms: value, direct, aggregate, default, list, copy, etc. Which one foo {} invokes has been the subject of a spec bug. Some of these invoke a constructor, which is like a function, but isn't a function, multiplying the number of concepts involved further. Constructors suffer from bizarre syntactic limitations. They need a special syntax for handling exceptions. The form foo f(); famously declares a function instead of default initializing f. The [dcl.init] section of the spec is about 16 pages long and there are about another dozen about how constructors work in the special member functions section.
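
A small sketch of the zoo (my own example):

    #include <vector>

    struct Foo { int a; int b; };

    int main() {
        Foo a;             // default-initialization: a.a and a.b are indeterminate
        Foo b{};           // value-initialization: zeroed
        Foo c{1, 2};       // aggregate / list-initialization
        Foo d = {1, 2};    // copy-list-initialization
        Foo e();           // most vexing parse: declares a function, not an object
        std::vector<int> v1(3, 7);   // three elements, each equal to 7
        std::vector<int> v2{3, 7};   // two elements: 3 and 7
    }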

In Rust, there is exactly one way to create an instance of a struct: you provide a value for each of its fields.


> In Rust, there is exactly one way to create an instance of a struct: you provide a value for each of its fields.

As a user of both Rust and C++, I certainly wouldn't consider the lack of default constructors in Rust to be a good thing. Especially for std types like Vec it is really annoying to have to initialize it explicitly.

> The [dcl.init] section of the spec is about 16 pages long and there are about another dozen about how constructors work in the special member functions section.

That is a bad argument if you're comparing to a language that doesn't have a spec. Maybe a complete, exhaustive Rust spec would be much longer than the C++ standard?


> As a user of both rust and and C++, I certainly wouldn't consider the the lack of default constructors in rust to be a good thing. Especially for std types like Vec it is really annoying to have to initialize it explicitly.

That is fair but orthogonal to the issue at hand, namely:

> How is Rust less complicated than C++?


A good but difficult question - it's difficult to quantify complexity. Both languages aim to give their users as much control as possible, even if that entails complexity in the language design.

Perhaps one answer could be that much of Rust's complexity arises from the concepts of 'borrowing' and 'lifetimes' and how they are encoded in the language.

In C++, much of the complexity arises from the copy/reference semantics and the possibility of overriding standard behaviour. You need wider context to understand local code. So you need to understand how C++ works at quite a low level, and you might need to know more specifics about your C++ codebase than would be the case in Rust.


It seems to have many fewer language-level features:

- No lvalue/rvalue distinction.

- Pointers are not elevated to a language-level feature (only references).

- Only a very limited exception mechanism (panic) that doesn't try to generalize to support general validation; rather, general validation is done with Result which is a plain old datatype written in the language, though admittedly it relies on a macro for use (more generally this approach is also used for things like I/O: a lot more is done with plain old library functions written in the language rather than added to the language standard).

- The macro mechanism is much less of a special case than the C++ preprocessor.

- No "const" and associated language-level complexity (e.g. "mutable").

- Smaller and more consistent syntax (no comma operator, no sequence point rules).

- More unifying things from the start (e.g. traits as values) rather than ad-hoc conversion rules added to the language later on.


FWIW

> Result which is a plain old datatype written in the language, though admittedly it relies on a macro for use

It does not. It relies on a special construct (!) or macro (try!) for convenience, but these are not necessary (a popular alternative is to use the various HoFs instead) and desugar to pretty trivial (if possibly repetitive) code:

    macro_rules! try {
        ($expr:expr) => (match $expr {
            $crate::result::Result::Ok(val) => val,
            $crate::result::Result::Err(err) => {
                return $crate::result::Result::Err($crate::convert::From::from(err))
            }
        })
    }


I don't think Result would be seen as an acceptable replacement for exceptions without the macro/special construct. But I should've been clearer.


(It's ? not !)


(You're right of course, not sure what I was thinking)


For one thing, Rust has had the benefit of learning from the evolutionary lessons that languages like C++ went through. Another point is that really arcane stuff that's applicable only to very few legacy systems need not be supported, and aren't, while C and C++ don't have the luxury of dropping such support.


Rust mostly avoided accidental complexity. The complexity it has is mainly from tackling a complex problem of providing memory safety and preventing data races at compile time.

In C++ a large chunk of complexity comes from legacy of being a C superset and having to preserve backwards compatibility with all of its old features and syntax quirks, and often surprising edge cases arising from interactions between different features.


Oh give me a break. Following this logic, why aren't you writing your code in English?

The real world is complex. Don't confuse hiding complexity with minimizing complexity.


>Following this logic, why aren't you writing your code in English?

Because slippery slope arguments never shone light on any situation.

>The real world is complex.

Which is neither here nor there. We're talking about programming languages -- where one can be 10x as complex as another, but as long as both are Turing complete, the more complex one can't really do anything substantially more.

So, this is not about more power to handle "the real world", but about ergonomics. Which nobody ever said were C++ strong point.

>Don't confuse hiding complexity with minimizing complexity.

Well, we should also not confuse adding complexity with adding expressiveness.


See Brooks's "No Silver Bullet". C++'s complexity is accidental.


I thought of Brooks when I saw the title; as I mentioned in my other comment, I see a kind of amusing pendant to Conway's law here: a complex programming language will produce a complex codebase.


I agree that C++ is complex, but remember that while updating the language, the standards committee is trying not to make breaking changes. If you can do a greenfield implementation like Rust, you don't have the baggage of an installed base, yet. If you compare C++ to a language that has also been around for more than 25 years, it starts to make more sense why it is complex. Add to that that it started as an extension to C, which is the reason why it became popular in the beginning. Also, zero-cost run-time abstractions.

It is about what you give priority to design-wise, and if you look at what is important in C++ (zero-cost runtime abstractions, control of the resulting code), then ending up with something as complex as the C++ language is hard to prevent.


Yeah by keeping backwards compatibility they avoided something like the Python 2/3 mess which I would say is worth the effort and cruft.


Looking at the current programming language space, it honestly doesn't look like the Python 2/3 issue hit them that hard in the long run. And while it wasn't popular, it might have been the right way forward. Python is one of, if not the, most rapidly growing languages, in large part due to the data-science movement, which for everyone I've talked to is based on Python 3. I still occasionally talk to 2.7 proponents, who explain to me that they need this and that to handle all the complicated edge cases and problems in dealing with bytes vs. strings... issues that you just don't encounter at all in Python 3.

If I had the choice of having a C++2020 with absolutely no backwards compatibility but a greatly cleaned-up language, or having yet another step down the path it's heading now, I'd choose the former even if it breaks backwards compatibility. But the language is going to follow the developers, and they want to continue down the path where they get to explain again and again and again the intricacies and delicacies of rvalue semantics, exotic template metaprogramming, and whatever the next big thing is going to be. All the while we seemingly still don't even have a clear road map for providing simple modules.

I was enthusiastic back when C++11 hit and it felt like there was a great big push going on in the language's development; I had the feeling that things were really going to take off... but now it feels like it just died out, and there's nothing that really gets me excited about the future of C++. I'm worried it's going to get more and more complex and less and less used.


I know I'm in dangerous territory here, but what if the C++ community, as in committee, compilers, etc., pull a Javascript strict mode?

Basically subsequent C++ compilers provide the possibility of some subset of C++ which is considered modern. Then files can be marked with it, at which point the C++ compilers reject deprecated idioms. Over time this should take over, I think most of the JS in the wild is now using strict mode.

I can't be the first person to think of this. There must be a reason something like this hasn't gotten traction.


The header inclusion model (where you will be textually including hundreds of thousands of lines of random code in a translation unit) doesn't make it easy to have version flags.

Also many of the issues are intrinsic to the way the language has evolved (the template compilation model for example), it is not just a matter of deprecating a few features here and there (although that might help)


Headers are what need fixing first. They are why I would never consider using C++ for any new project and avoid jobs that require maintaining C++ code. C++ is the steam power of computing and keeping legacy monoliths running doesn't do anyone any favors. That shit should have been piece-by-piece rewritten by now.


You're forgoing all of the sweet cash that C++ legacy maintainers will make in 20-odd years. Like the COBOL people right now.


Hey! That's my retirement plan!


I also would like to drop backwards compatibility in favor of a clean, simple language, because I happen to write completely new functionality. But if you ask yourself the question "Do I want to break backwards compatibility in 2020?", then that same question could have been asked in 2003. That would have meant the language losing traction (at least that is my opinion), because I could not recompile my old code and therefore would keep using the old pre-C++03 compiler. So the reason that it is popular _is_ because it takes care of the installed base and does not break existing code (or at least minimizes breakage).


Exactly.

There have been plenty of from-scratch languages that are designed as a better C++. Many have died; of those left, none has yet gained significant traction [1].

It is not clear why yet another incompatible language would be better just because it happens to be called C++. The effort would be better spent on improving one of the existing alternatives.

[1] Plenty of newer languages have had breakaway success, but no language that is meant to be a full C++ replacement has yet. D has passed peak hype without a breakout. Rust might still make it, we will see.


> If I had the choice of having a C++2020

Would be nice to just do a fork where you either pass -std=c++11 or -std=c++20.


On the other hand, I find Python 3 much more predictable/less astonishing than 2. The jump was definitely a mess, but judging by the continuous spread of the language it seems to have paid off in the long term.


Both extremes (preserving backwards compatibility at the cost of maintainability/complexity, and abandoning back compat at the cost of existing code breakage) are bad. One may be worse than the other, but they're both bad options. The minimum-pain solution is somewhere in the middle.


Lots of posts here frame it like "Yes, it's a bit of a complex language, but the C++ committee has the difficult job of keeping everything backwards compatible, so it's understandable." But that's not the reality. The reality is that they (or Stroustrup) made that decision, and that decision was a mistake.

The correct way to deal with backwards compatibility issues is the way Go does it, namely to write tools to automatically upgrade code to a newer API [1]. And as that smug Google guy explained here [2], they have something like this for C++, too. They just don't bother sharing it with us peasants.

The fundamental mistake of C++ is to religiously keep backwards-compatibility with everything for all eternity. It's so easy to say that backwards-compatibility is a good thing. It's obvious! But ultimately that decision is the reason for all the problems that C++ has.

[1] https://golang.org/cmd/fix/

[2] https://www.youtube.com/watch?v=tISy7EJQPzI


Apple can't write a tool to properly update automatically between Swift versions today, but according to you, this could have been done in the 80s or 90s for C++ which is notoriously hard to parse...


I didn't say that.

It's not the 80s anymore, and the reason that C++ is still and will always remain hard to parse is... backwards compatibility.


Backward compatibility with C semantics and tooling was fundamental for C++ success. We wouldn't be having this discussion otherwise as nobody would have used C++.

Tools are great for minor upgrades of APIs and syntax, but key to C++ was compiling with existing system headers (no, realistically you do not want to maintain your own 'fixed' version), and most importantly OS and library ABI.


AT&T had a Java-like variant of C, before Java was even a thing, called C+@.

This Dr. Dobb's article, in a 1993 issue dedicated to possible successors to C, is the only proof that it ever existed.

http://www.drdobbs.com/article/print?articleId=184409085&sit...

Print view is the only way to read it properly.


I agree. I said that it was a mistake, but in the beginning it really wasn't. It's just not a viable long term strategy in my view.


This comment is far overestimating the capabilities of gofix. And this isn't a denigration of gofix (which is a good idea in general that more languages should copy), it's simply that Go actually really didn't change very much between 2009 (when Go was publicly announced) and 2011 (when Go 1.0 arrived and things stopped changing). It certainly didn't change nearly as much as "idiomatic" C++ has between 1990 and today.

The lesson here is that languages should restrict their backwards-incompatible changes to only those that are amenable to automatic migration via limited gofix-like tools, but that runs counter to your argument that C++ ought to be casting off its C heritage (which I quite sympathize with).


While my pet peeves might be different than the author's I agree that C++ makes you work very hard to keep things simple and produce something that is beautiful. Just "avoid the complexity you don't absolutely need" doesn't really work: some basic things are so fundamentally convoluted that I rather just write idiomatic C because it's simple and doesn't get in my way. I want the language to originate from simple axioms so that I can concentrate on the complexity of the problem I'm solving instead of wasting brain cycles on battling the complexity of the language. I would like to like C++ but its benefits barely zero out the extra hoops.


Author of the blog post here. I think you just made me aware of the true and deeper reason for my discomfort with C++: it's the fact that the convolutions are in the basic things. I am perfectly willing to put up with convolutions, annoyances, bugs, weaknesses etc. in anything I use, be it a programming language, a piece of software, a gadget, whatever. Nothing is perfect, and who am I to criticize others for the imperfections in their work. What annoys me is when the annoyances are in the basic things, in the most commonly used features. Make the basic things reasonably good, and keep the crap off to the side.


I think that one reason the convolutions are in the basic things was the principle that C++ should be a superset of C, which is nominally a value-semantics language but which, with the use of pointers, very commonly (and intentionally) leads to code with aspects of reference semantics. One example of this is that C++ adopted the C practice in which arrays decay to pointers.

In addition, C++ not only inherited C's static-typing rules, but strengthened them, and complicated them with inheritance (multiple, and polymorphic or not, according to your needs), function overloads and operator overloading. Now add generic types through templates, and the cases to be handled explode to the point where you need a Turing-complete parser.

A good case can be made for each of C++'s features individually, which leads to justifications of the form "As we want both X and Y (and without sacrificing run-time efficiency), then there really isn't any alternative to [insert counter-intuitive issue here]..."

That it works at all is a testament to the skill of the people involved. The complications that remain seem qualitatively different than the sort of gratuitous inconsistencies that you find in PHP, for example.


I'd be very careful with the claim that you are willing to put up with bugs.

Bugs in the language or compiler will make the language useless for any project which is expected to work.

I'm gonna be bold and say that neither you nor anyone else would put up with it. There are enough bugs in one's own code; there is rarely any space for unreliable infrastructure.


This reminds me of a goal Larry Wall had for Perl when he was designing it: "make the easy things easy, and the hard things possible".


Could you give an example of "some basic things are so fundamentally convoluted"?

Also note that almost all C code is legal C++, so when you "just write idiomatic C", you're still writing C++. (Perhaps not idiomatic C++...)


A simple case is hiding the implementation and exposing a public API. Let's use objects to make it something C++ ought to be good at and C ought to be bad at. In C, you write a header file adder.h:

    struct Adder;
    struct Adder *adder_create(void);
    void adder_setup(struct Adder *, int, int);
    int adder_operate(struct Adder *);
    void adder_delete(struct Adder *);
You can easily imagine how one would trivially implement these functions in adder.c, allocating an object, mutating its state and deleting it. The caller doesn't know anything about struct Adder, nor does it know how the adding of two ints is implemented. It only needs to know that struct Adder can be pointed to. The header file defines a clear interface that can be compiled against. The implementation can be changed without having to recompile all callers. Tried-and-true and boring, but it works: this is the way C has done it for decades.
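
For concreteness, adder.c might look something like this (a minimal sketch; the field names are hypothetical, since nothing in the header dictates them):

    /* adder.c -- hypothetical implementation of the opaque-struct API above */
    #include <stdlib.h>

    struct Adder {
        int lhs;
        int rhs;
    };

    struct Adder *adder_create(void)
    {
        /* error handling omitted for brevity */
        return (struct Adder *)calloc(1, sizeof(struct Adder));
    }

    void adder_setup(struct Adder *a, int lhs, int rhs)
    {
        a->lhs = lhs;
        a->rhs = rhs;
    }

    int adder_operate(struct Adder *a)
    {
        return a->lhs + a->rhs;
    }

    void adder_delete(struct Adder *a)
    {
        free(a);
    }

Callers only ever see the pointer, so this file can change freely without touching adder.h or recompiling anything that includes it.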

Now, C++. You create a class Adder, declare public constructor and destructor as well as setup() and operate(). Then you declare a couple of private ints to hold state and maybe some private helper methods, and then you realise that 1) you've just exposed parts of the private implementation in the public header and 2) you can potentially break compiled binaries even if you only change the private/protected parts of the class. Yes, that's a textbook example of how to define a class that completely sucks for any real-life encapsulation purposes. You see how things are getting complex quick? This is where people began to think of more novel applications of C++ to fix the language itself.

So you define an abstract class IAdder in adder.h with pure virtual methods to act as a truly public interface, and derive an implementation class AdderImpl in adder.cpp. Great. Except you can't instantiate the private implementation. You'll need a public factory function outside the class, or a static method such as IAdder::create(), to construct an AdderImpl and return it as an IAdder. This isn't very clean and beautiful anymore, and this was a simple example. There are more branches to be explored in the solution space, but at this point we've basically had to create an ugly reimplementation of something that we thought would come for free in a language that nominally supports object-oriented programming, one of whose fundamental selling points is easy encapsulation. And all that while the C counterpart is actually easy, understandable and simple, and requires no re-engineering to even get it to work.
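
For concreteness, a minimal sketch of the interface-plus-factory shape just described (names hypothetical):

    // adder.h -- abstract interface plus factory
    #include <memory>

    class IAdder {
    public:
        virtual ~IAdder() = default;
        virtual void setup(int x, int y) = 0;
        virtual int operate() = 0;

        static std::unique_ptr<IAdder> create();  // factory, defined in adder.cpp
    };

    // adder.cpp -- the hidden implementation
    class AdderImpl : public IAdder {
    public:
        void setup(int x, int y) override { x_ = x; y_ = y; }
        int operate() override { return x_ + y_; }

    private:
        int x_ = 0, y_ = 0;
    };

    std::unique_ptr<IAdder> IAdder::create() {
        return std::make_unique<AdderImpl>();
    }

Note that every caller now pays a heap allocation and a virtual dispatch just to keep the layout hidden, which is part of the ugliness being complained about here.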


About your two realizations:

1. I've "exposed" parts of the private implementation in that they were in the header file, yes. They were also labeled "private". That means that someone can read that and gain more information about my implementation then they could from just public information, I suppose. It also means that nobody can actually use them in code, because they're private. So you can think of that as "exposing" if you want, but it's not something that I've ever recognized as any kind of a problem, let alone one worth solving.

2. If you change only the private parts of the class and try to link other code to it without recompiling the other code, yes, you can get a broken compiled binary. That is certainly true. But I could do that in C almost as easily (if everything that uses the struct layout isn't in the same source file). And makefiles that keep track of dependencies aren't exactly rocket science. Neither are clean builds.

Maybe I just don't have your problems, but I'm still not seeing this as much of an issue at all.


Maybe I just don't have your problems, but I'm still not seeing this as much of an issue at all.

Surely problems are always personal, this all did start with the term "pet peeve".

For me, one of the peeves that comes back over and over is indeed that it seems nearly impossible to write a separation between the API and the implementation in C++ that is clean and beautiful. That's one of the first things in any project, drafting out the interfaces. And things get ugly quick there.

There are others but I drafted this particular case as an example that you requested. I don't want to go too deep into the discussion here but I'll reply to the two points below:

1) Exposing fields and functions under the private label is just a matter of principle. It doesn't matter that the private parts cannot technically be used outside the class: it still means there is information in the public API that shouldn't have to be there in the first place. That's just a stupid restriction of a quirky language. A header file is mainly about the interface and its documentation: the last thing I want in it is cluttering bits of internal crap that are only there to be skipped over.

I could also cram all the code in a single source file and use global variables because hey, it can be made to work and it's easier to write a linker that way. The same applies with the public interface issue here.

2) Check back my C header file again. The struct layout is only visible to the implementation. As long as the ABI stays the same we can even use a different compiler to build the private implementation into a binary and the interface still works with all existing code.

Yet this is still about the fact that the private bits should not have to be visible to the public, just to the implementation. So a language that forces me to do just that for the sake of the convenience of the language designer and compiler writers is just plain stupid.


> So the language that forces me to do just that for the sake of convenience for the language designer and compiler writers is just plain stupid.

Nothing prevents you from getting the same ABI hiding in C++, but the users of your code will greatly benefit: no possibility of memory leaks, ownership is enforced since copy & move are disabled, etc:

    /// Adder.hpp ///
    #include <memory>

    struct Adder {
      public:
        Adder(int, int);
        ~Adder();

        int operator()();

      private:
        struct Impl;                  // declared here, defined in Adder.cpp
        std::unique_ptr<Impl> impl;
    };

    /// Adder.cpp ///
    struct Adder::Impl {
        // your private stuff, e.g.:
        int x, y;
        Impl(int x_, int y_) : x(x_), y(y_) {}
    };

    Adder::Adder(int x, int y)
      : impl{std::make_unique<Impl>(x, y)} {

    }

    Adder::~Adder() = default;

    int Adder::operator()() {
        return impl->x + impl->y;
    }

    /// main.cpp ///

    int main() {
        Adder adder{1, 2};
        return adder();        
    }

    /// your main.c ///

    int main(int argc, char** argv) {
        struct Adder* adder = adder_create(); // must not forget
        adder_setup(adder, 1, 2); // must not forget
        int res = adder_operate(adder); // note how all lines are longer
        adder_delete(adder); // must delete
        return res; // we had to introduce a specific variable just for the sake of keeping the return value else we wouldn't be able to delete
    }

Of course, your code wouldn't pass code review in either case: introducing indirections through implementation hiding like this kills performance, in C and C++ alike. And it's not like hiding your stuff in another object file will prevent anyone from knowing your impl... disassembly & code-recreation tools are extremely powerful nowadays.


It's convoluted because the idea of forcing an object to always be allocated on the heap is convoluted. The real fix for this problem is the modules system, where the compiled module can expose an object's size without exposing its contents. Your example shows why header files suck more than anything about the core semantics of C++ as a language.


> The real fix for this problem is the modules system, where the compiled module can expose an object's size without exposing its contents.

Exposing the object size is already too much; it's part of the ABI. The truth is: we can't have our cake (maximum perf due to stack allocation) and eat it too (hide all implementation details).


What's your point? Yes, those two goals are in conflict. Modules are still an improvement to the current situation with no downsides.


> What's your point?

My point is that if you want to be entirely safe from the ABI point of view, modules don't help you at all


Well, most of his points regarding complexity are not C++ specific.

The thing with equals applies to most languages that allow redefining operators.

Even Java has gotten quite complex; I bet he would have quite a hard time succeeding at one of those quiz-question certifications (SCJP and friends).

And simplicity can also turn into more complexity: manual memory management in C vs. C++, "templates/generics" in Go, libraries to work around language limitations in JavaScript (npm, yeoman, gulp, grunt, webpacker all at the same time).


In my experience this disease afflicts more than just some C++ programmers, the immaturity is boundless. I’ve noticed some colleagues even think there must be something wrong if code looks simple.


Yes, C++ is a complex language. Yes, it has a ton of warts if you know where to look. But, you can write beautiful software with it, and it can even look beautiful too.

Yes, there are alternatives, but when it comes to performance, expressiveness and actual deployability, C++ is pretty awesome.


Meh. The performance is almost never worth the huge complexity jump. Instead, profile an app in a higher level language and if any part's too slow, implement that in C/C++. That'll quite often be a low, single digit percentage of the app. Get the performance for substantially less engineering/maintenance cost.


C++ is a complex language no doubt. But software written in C++ need not be complex.

Java, on the other hand, is a simple language. But the kind of unnecessary complexity I have seen in Java-land (EJBs, Spring, etc.) has no parallel in the C++-land.

So going by your argument, I would choose C++ over Java to avoid the complexity jump, then profile the app, and if any part's too slow, improve that again in C++.


Oh boy, you must be really young.

No parallel in C++?!!!??

That is where EJBs and Spring come from.

The enterprise architects that created those kinds of designs were the same ones that, a decade earlier, were doing them with C++.

Apparently micro-services are now a thing; well, in the late 90's we had Sun RPC, CORBA, DCOM, all tied together with clustered distributed transaction management.

Sprinkled with code generation tools, based either on UML or Booch diagrams.

It was lovely, then came Java with CORBA support out of the box, RMI and GC. So they moved camps.

And yes, I am also to blame for a few CORBA objects, maybe still running on a couple of HP-UX systems.


We should all be thankful that the Java and C# communities have offered a welcoming home to all the architecture astronauts :)


In the C++ community, that OO centric coding style is called the Java style, which is completely unfair, as it really originated in C++; it was just wholeheartedly embraced (and made more usable mostly thanks to GC) by Java.


It seems to me there is a minority of the community of any language that wants to turn it into Java.


*cough Zope


They are now busy building micro-services architectures in more trendy languages. :)


I suspect (but I am not certain) that any language that wants to work in that space is going to turn into Java EJB or C++ with CORBA and/or DCOM. I think it's the space, not the language, that produces such appalling monstrosities.

(And, if you needed that kind of thing, even EJB or CORBA was better than implementing that same functionality by hand...)


Yep.

Somehow I am starting to feel that micro-services are the new EJBs.


Java just pushes the complexity into the user's code. For example, a friend of mine once gushed to me about Java IDEs - with one click of a button, a hundred lines of boilerplate are automatically added!

I replied a good language shouldn't need boilerplate code automatically inserted.


This is called 'Waterbed Theory' and is ascribed to Larry Wall.

https://en.wikipedia.org/wiki/Waterbed_theory


On the other hand the refactoring tools for Java/C# are miles ahead of anything for C/C++.


Boilerplate bureaucracy usually happens when configurations are not iterable and/or a language does not enforce the provision of meaningful defaults. I find both sides guilty as charged here...


Metaprogramming enables insertion of custom boilerplate without need of an IDE.


>>language does not enforce the provision of meaningful defaults.

There is more to this. It can also happen if the language doesn't have features to do the heavy lifting for complicated code patterns well. You then have to use massive walls of code to make the same thing happen.

Part of the reason why C-based languages seem to die all the time is that sooner or later you have to add features to catch up with the complexity of the software being written around them. That either causes enormous amounts of ugly, unusable bloat, or you have to go through decades of backward-compatibility breakage. The languages are just too brittle to work with change.

To give you an example: the best innovation that has come out of Python as a language is changing print from a keyword to a function. This is the biggest innovation they could manage in decades. And even this required breaking backwards compatibility and putting the entire world's Python code bases through years of upgrade cycles.

You see all this and just move on to the next new language, like Go. And then the cycle starts again.


I find Spring to be great. I grant that I've used it since 2.0, but really it does everything quickly, easily, and is well documented. Especially with Spring Boot.

I'm writing a core service in Go because Java 9 broke a Maven plug-in (that I don't need now). Holy crap is it painful. I know that I'm learning, but it's hard to layer the application. Most tutorials show passing the database connection through all of the functions, or use closures that define all of your routes in such a way as to make the db visible.

Even if you get past the difficulty of hiding everything, it's oddly not a good language for web services. The handler interface in mux doesn't allow you to return an error! No error in Go! So you're either going to have a tonne of boilerplate for the routes, or use panic, which by all the reading I've seen is a terrible thing to do. Regardless of which you choose, Go appears to lack the nicety of prebinding the JSON into a struct. Boilerplate. Oh, and you have to do the same to write the JSON out!

Now Java allows for annotations, and types with constructors in the input parameters of the route handler. Spring will inject interface implementations by name or type. All of this and Transactions! It will automatically bind the JSON to the input variable and automatically create the JSON format on the return.

Ever tried to use transactions with Go? You have to personally keep track of it. Every SQL call has to peg against the transaction because the connection pool won't manage it for you. Also the DB and Transaction structures have the same general interface, but DON'T implement a COMMON INTERFACE! That idea, in 2018, is an experimental feature that MIGHT NOT GET PICKED UP. The maintainers of the standard SQL library said figure it out for yourself.

Really, I don't think that Java brings that much complexity now. It used to. But you can get a new developer up and running in Java within 2 months. It will be better structurally, easily testable, and safer than Go.

Go is like a tricycle. It's simple and it will get you there, but doing anything complex will require a lot of effort. Java is like a 10-speed Schwinn. Fast, moderately complex, but easily understood when kept within the wheelhouse of Spring + Core Language.


I don't use go but some of your more general complaints sound like good things to me.

> Regardless of which you choice, Go appears to lack the niceties of prebinding the JSON into a struct. Boilerplate. Oh, and you have to do the same to write the JSON out!

The more time I spend doing maintenance the more I've learned to love this sort of boilerplate. It makes it much easier to trace where things are being used across the system, no reflection magic that causes the trail to run cold and forces you to use a debugger. I know programmers hate writing boilerplate, but it really isn't so bad and makes maintenance that much easier.

> Ever tried to use transactions with Go? You have to personally keep track of it. Every SQL call has to peg against the transaction because the connection pool won't manage it for you.

Again I like this explicitness and don't want this hidden. Whether I'm operating inside a transaction scope or not in a given piece of code should not be a mystery.

> Also the DB and Transaction structures have the same general interface, but DON'T implement a COMMON INTERFACE!

I agree that a common interface might be nice, but given that there is seldom a good reason to operate without some kind of transaction is it really that big a deal? Just always use the transaction interface.


YES. 100x yes to this. I started a new job some time ago where Java / Scala were mainly used (prior C++ background).

I can logically step through flow control and figure most things out. Except the amount of "magic" for the sake of reducing boiler plate made debugging some issues really tricky.

Anytime I'm about to reduce boiler plate somehow, I try to ask myself whether it's going to introduce some type of tribal knowledge. That is, will someone without any knowledge of e.g. this codegen be able to step through and understand what's going on.

I feel that people reduce boiler plate without constraints far too often


Also, if you really want to write code that's agnostic to whether you're working directly on the DB object or a transaction, you can just declare that interface yourself; it's not like in java where the implementing class has to opt in.

The absence of the interface being predeclared for you does mean more boilerplate, but I agree that transactional vs. not doesn't strike me as a good thing to be hiding; this particular case seems like maybe a very reasonable omission. I try to avoid abstractions that leak.


>>Really, I don't think that Java brings that much complexity now. It use to.

That's because these days you don't learn Java, you learn things that manage Java madness. Like Eclipse, IntelliJ or Maven etc.

You don't really do much using Java these days. You do things using tools, which write bulk of the Java code. In essence learning Java today is learning Java ecosystem tools.

>>But you can get a new developer up and running in Java within 2 months.

That's because it doesn't take much time to learn the IntelliJ UI :)

>>Java is like a 10-speed Schwinn. Fast, moderately complex, but easily understood when kept within the wheelhouse of Spring + Core Language.

This is true for most languages today. Java owes much of its success and power to two things. Libraries and Marketing. Marketing money is gone as Oracle doesn't spend a dime on anything that doesn't bring in two in return. And other language ecosystems have caught up with libraries.

So you could use anything instead of Java and it would all still work, in fact in most shops it already does. Haven't heard any major project being started in Java in most places in a long time.

Legacy projects will carry the Java carcass for a long time. But if you want to work on projects worth working on, Java isn't the language you should be with right now.


What you do see is people abandoning the "prototype languages". At least twice a year HN has a story where some company dropped Python, Ruby, etc. in favor of Java. They didn't just port to the VM with Jython or JRuby. They re-made the whole thing in Java.

Why not skip this step and just use Java? Yes, there's great tooling that makes Java easy to use. That's the point. You don't have to use JavaBeans for JSON. Expose the properties as public. Jackson's got your back.

Fundamentally, a good design has bounded contexts. The web API should not directly expose your domain objects to the Internet. There should be a mapping between the service boundary and at least the use-case boundary. Exposed properties are fine for DTOs like JSON bodies. Map those to your domain objects via a translator and you're good. A lot of bulk drops with this approach.

As a language, yes, Java is bulky. But it's wonderfully readable. I know what type a variable is without having to dumpster-dive into the method invocation (looking at you, Python and Go). I can know directly through a type hierarchy which components implement an interface. In Go you have to explicitly assign the supposed implementation to a throwaway variable just to have the compiler tell you if it works. Yes, checked exceptions are a pain. We know that. When it came out, it was a theoretical good. Unlike Go, at least you can wrap a checked exception in a runtime exception and move on with your code.

To this day Java is the balanced language between performance and usability (along with C#). It is showing its age. It does need some renovation, but on the whole it's a fine language to do new things.


> Haven't heard any major project being started in Java in most places in a long time.

Come to European enterprise shops, Java and .NET rule with no signs of changing.

Including greenfield projects.

Even alternative JVM and CLR languages are hardly an option.


A big part of learning Go, once you've done a lot of Java, is to unlearn the things that you had to do as a successful Java programmer. You're used to looking at problems through the lens imposed by the imposing bag of tools and abstractions that is Java, as well as obsolete language design decisions (I'm looking at you, inheritance).

Resolve to look at things and think about things differently.

I have to give credit to Spring for saving Java from itself for a while, but it jumped the shark at some point. I've talked to many devs who've come from teams where only a small number of people understood the magic going on, and they never felt like they could really contribute because they couldn't scale the complexity hurdle of the modern Java environment.


This doesn't seem a Java-oriented criticism:

> I know that I'm learning, but it's hard to layer the application. Most tutorials show passing the database connection through all of the functions, or use closures that define all of your routes in such a way as to make the db visible.


Either you are confusing verbosity with complexity or you have never actually written "complex" software in C++. There are too many things to think about while writing a piece of code in C++; more often than not, you are bound to get things wrong.


I've written and shipped plenty of software with modern C++. Ownership and move semantics mean that after writing a few helper and utility functions, things go pretty smoothly. The most concise and direct software I've written has all been in C++. I can organize and transform data directly instead of jumping through hoops or suffering from enormous amounts of overhead and indirection in a scripting language.

Not to mention I can create standalone executables that have no dependencies and wind up smaller than most scripting languages' interpreters, for entire OpenGL-based programs with GUIs.

One thing that doesn't often seem to get considered here is what your user wishes you wrote it in. I don't want my software to come as an assortment of scripts that depend on interpreters that themselves might have dependencies. One file is all it has to be.


There are plenty of directly compiled languages with good performance, linking to C APIs, and higher level abstractions.

Rust, Haskell, Go, and Erlang come to mind.


I am not confusing verbosity with complexity and I have written all kinds of C++ software over a career of 15 years ranging from legacy core banking systems to hardware drivers. In fact, the simple C++ code often tends to be verbose.

When you are writing complex software, you are bound to get things wrong in any language, not just C++.

What I am arguing for is that C++ code need not be complex even if the C++ language itself is complex or the software is complex.

If you have an argument against this, please make it substantively, without resorting to insinuation about the kind of work I do or do not do.


I apologize if I sounded condescending; coming from a BFSI background myself, it was such a relief to write software systems in Java after years of C++. I agree "C++ code need not be complex", but the language itself is so designed that more often than not you end up heading down a slippery slope.


Neither the latest EJB nor Spring is complex.


This is not the same for all cases. For example, in scientific and high-performance computing, image processing, or other tasks which need short run times, or applications where a single-digit-percent optimization results in hours of run-time difference, it is always worth it.

Also, since C++ is native, very memory-efficient data structures can be written. This property can be leveraged to write very small, very efficient and very resilient loops which can run for months at a time, and since you have absolute control over memory management, you can prevent unwanted bloating or leaks pretty easily.

Last, but not the least; CPUs weren't that powerful 10-15 years ago. My desktop computer was as fast as the first Raspberry Pi. Python, JS, even Java were very unpleasant experiences back then.


Well, you can spend 5 months writing your software in C++, and a month fixing your bugs and doing a bit of optimization. Or you can spend 3 months writing C# or other modern language and have another 3 months to optimize, all the while enjoying substantially faster compile times and super easy refactoring.


This is not an apples to apples comparison. I've given examples from runtime, not from development time.

OTOH, refactoring in C++ is not hard, at least in my experience. With some knowledge of the language, everyone can write a reasonably bug-free C++ (or any other language) application in one go.

Compilation in C++ is a different story. C# is not native. It's compiled to CIL (was that the name? I haven't used Win32 for ~15 years), then JIT-compiled to machine code, and optimized over and over in every execution.

Yes, C# is reasonably fast, but it's not native, not cross-platform (Mono doesn't count), and not designed to be a systems programming language.

People tend to think C++ is overly verbose; no. The Win32 programming model forces simple operations to be long and time-consuming for a programmer to implement. Opening a file, a serial port, or anything else is a single line, because of the (while not everyone likes it) "everything is a file" philosophy, which makes UNIX and C++ much easier, less verbose, and more accessible.


C# is surely native as well.

It can be compiled to native code just like C++ via NGEN, .NET Native, CoreRT, IL2CPP, Bartok, Mono AOT.

Or if you prefer, C++ is not native. It's converted to LLVM bitcode, OS/400 TIMI, WASM,....

Let's not mix languages with implementations.


> It can be compiled to native code...

In that sense, you are absolutely right.

However, for simplicity I only thought about the out of the box experience, or by the de-facto implementation of the said languages, since it’s the most popular usage scenario as well.


Well, since Windows 8 .NET store apps don't support JIT, they only run AOT compiled to native.

And NGEN is a standard component of the .NET SDK since the beta days.

So AOT compilation has been part of the default tooling since ever.


I guess native needs to be qualified. I would say that Native means it speaks the ABI of and fits into a specific platform.

I would say that the C family languages are traditionally native of most operating systems [1] as they are polyglot and can directly speak whatever C ABI dialect is used there; they also work with the native platform tools.

C# is really native of the .Net platform and although it can be AoT compiled it either can't talk directly with Win32 or the interfacing is not seamless. Also it has its own set of tools.

Android and iOS are interesting because although they are clearly unix derivatives, the unix interface is not the primary interface; so arguably C and C++ are not really native there while Java and ObjectiveC/Swift respectively are.

[1] Or to be fair, of unix and any operating system that can be made to look like unix (Windows).


> Win32 or the interfacing is not seamless

Well then Rust, D, Swift, Go, Haskell, and OCaml are not native by that measure.

C# integration with Win32, COM and UWP (aka COM reborn) is painless compared with any of them.


I think Rust and D have seamless integration with C-style ABIs. Swift is native on iOS. Haskell and OCaml have, as far as I understand, painful FFIs and a relatively heavy runtime, so I wouldn't call them native.

edit: to be clear, using 'compiles to object code' as the definition of native is both a misuse of the word and meaningless, as it is an implementation detail that can change easily and is bound to be obsolete quickly.


Try to write a GUI Windows app, COM server, or service daemon in Rust and C#, then let me know which one is more native.

Bonus points if it is a UWP one.


I wouldn't want to be caught dead writing a GUI Windows app (I might do Qt if forced at gunpoint though), so I'm probably not the best person to test this :)


Well, I’m not very knowledgeable about Windows world, as I said before, I’m not using Windows for 15 years or so. So, my bad, sorry.


If you, or others want to know a bit more about it,

NGEN

https://docs.microsoft.com/en-us/dotnet/framework/tools/ngen...

AOT Compilation for Windows 8.x store apps, based on Singularity's Bartok compiler

https://channel9.msdn.com/Events/Build/2012/3-005

https://channel9.msdn.com/Shows/Going+Deep/Mani-Ramaswamy-an...

AOT Compilation for Windows 10 store apps, using Visual C++ backend

https://channel9.msdn.com/Shows/Going+Deep/Inside-NET-Native

https://docs.microsoft.com/en-us/dotnet/framework/net-native...


Thanks a lot. I will read them as soon as possible.


Also known as the credo of the software industry: why build something performant, robust, secure and usable when you can ship an Electron app that's barely fast enough and patch the bugs later?


I used to argue the same thing, but passing things across language barriers makes debugging painful and is usually pretty error prone, tools can't understand your codebase as well, and there are performance costs to all the conversions.


Depends on the languages involved.


That only works if the surface area to volume ratio of the heavy lifting parts is low. If it isn't, you'll spend all your machine cycles crossing the bridge and human cycles maintaining the bridge rather than getting work done on either side.


> Yes, it has a ton of warts if you know where to look.

I've never had to go looking to find warts in C++.


Part of the problem here is that OO sits nicely with references, but C++ (like C) is a value based language. This can be seen clearly in most of the design of the standard library -- it's all value semantics, and as such relatively simple and obvious to use (so long as you're not trying to cram OO into it).

A line like

    if ( a == b )
in C++ is pretty clear about what it does. It'll always be a value comparison, and if a and b are pointers to objects, you are comparing the object identities and not their values. The meaning is exactly the same in Java of course, but the fact that you're dealing with pointers is hidden, so you generate a lot of confusion about the correct use of `==` and `.equals`.

The author certainly isn't wrong about a culture of complexity in certain elements but, to be honest, I see that everywhere else too, and it needs to be fought wherever it occurs.
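
A small illustration of that point, using std::vector for concreteness:

    #include <vector>

    int main() {
        std::vector<int> a{1, 2, 3};
        std::vector<int> b{1, 2, 3};

        bool same_value    = (a == b);    // true: element-wise value comparison
        bool same_identity = (&a == &b);  // false: comparing addresses compares identity

        std::vector<int>* p = &a;
        std::vector<int>* q = &b;
        bool pointers_equal = (p == q);   // false: comparing pointers compares identity,
                                          // which is what Java's == does for objects

        return (same_value && !same_identity && !pointers_equal) ? 0 : 1;
    }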


Object Pascal, Eiffel, Ada, Mesa/Cedar, Modula-2, Modula-3, Oberon, Oberon-2, Oberon-07, Active Oberon, Component Pascal, Sather, Swift, D, Go and Rust are OO languages[1] and value based as well.

[1] - As usual, there are many ways of doing OO, not just C++/Java style.


There are, but they all have at their core an idea of an object which has identity, which values don't have. You can easily model objects in value languages, but sometimes it isn't so easy to do the other way around (like Java's strings -- Smalltalk had similar problems).


> There are, but they all have at their core an idea of an object which has identity, which values don't have.

Not at all, objects need to be explicitly allocated on heap, otherwise they are value based.

I took care not to list any language that only allows objects as references.


Argh, I misread what you wrote. It's very late here now :)


Interestingly half of those are Wirth or Wirth derived languages. He seems to like value based programming.


I just remembered the quip about "a PASCAL programmer knowing the value of everything and the Wirth of nothing".


Object Pascal is definitely reference based, not value-based, at least not when I used it back in Delphi 7.


You used Delphi, not Object Pascal.

Object Pascal was created by Apple for Lisa and Mac OS, with collaboration from Niklaus Wirth.

Borland then adapted it for Turbo Pascal 5.5 on MS-DOS.

Turbo Pascal 6.0 and 7.0 for MS-DOS, followed by Turbo Pascal 1.0 and 1.5 for Windows 3.x took up ideas from C++.

Borland then rebooted Turbo Pascal with Delphi, but to avoid creating too much confusion among Pascal developers, they kept calling their dialect Object Pascal, even though they completely rebooted the object model.

This is how you declare objects in Object Pascal.

    type
        PTPoint = ^TPoint;
        TPoint = object
           x: Integer;
           y: Integer;
        end;
        
    var
      valuePointOnGlobalMemory: TPoint;
      valuePointOnTheHeap: PTPoint;

    begin
      {Can use it right away}
      valuePointOnGlobalMemory.x := 10;
      valuePointOnGlobalMemory.y := 20;
      
      {Need to heap allocate first}
      New(valuePointOnTheHeap);
      valuePointOnTheHeap^.x := 10;
      valuePointOnTheHeap^.y := 20;
      Dispose(valuePointOnTheHeap);
      
      {....}
    end.
Easy to test with Free Pascal using Turbo Pascal compatibility mode.


Nitpick: Borland didn't reboot the object model, they extended it to add the 'class' types, but the 'object' types remained unchanged. The code you wrote works in both Delphi and Free Pascal :-). Object types are often used when you want to allocate objects on the stack or when you want to "embed" objects in other objects/classes (I don't know about Delphi, but Free Pascal also extends object types to support newer stuff like properties).


Thanks for the correction, always welcome.

Delphi 1.0 was about the time I started focusing on C++, so I am not that knowledgeable about its features. Just remembered that was one of the key changes.


I don't really know about the others but Rust isn't really OO. Neither is Go.


Sure they are, just not in the classical C++/Java style, hence the footnote, as I was already expecting this kind of comment.

There isn't "The ONE true OOP way", just like there isn't "The ONE true FP way" or "The ONE true LP way".

Each paradigm has a set of concepts of what they might mean, and many languages cherry pick from there.

Rust and Go implement polymorphism via traits and interfaces respectively.

Rust and Go implement encapsulation via struct methods and modules.

Rust and Go implement containment.

Rust and Go allow for delegation.

Rust allows for type extensions via a mix of generics and traits.

Go allows for struct embedding as a kind of type extension.

Class based inheritance is not a requirement for a language to be OO.

CS literature is full of languages that explored other kinds of OO.


On that topic, Joe Armstrong famously said that Erlang (Erlang!) might be the only "real" OOP language out there: after all, you can have encapsulation, polymorphism, delegation, and so on, by just designing appropriate exchanges of messages between processes.


I suggested many many years ago that Erlang was an OO language and of course the community ripped me to pieces for it :) I was very pleased to see him eventually come out on my side.


Are you sure? I can make

(a==1 && a==1 && a==3)

Evaluate to true.

It is reasonable to assume what you say, but don't say "always", when it's not.
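
True -- with operator overloading and enough bad taste, that is easy to arrange. A deliberately contrived sketch (the type is entirely hypothetical):

    #include <iostream>

    // operator== mutates hidden state, so the "same" comparison
    // gives different answers each time it is evaluated.
    struct Weird {
        mutable int calls = 0;
        bool operator==(int rhs) const {
            ++calls;
            return calls < 3 ? rhs == 1 : rhs == 3;
        }
    };

    int main() {
        Weird a;
        std::cout << std::boolalpha
                  << (a == 1 && a == 1 && a == 3) << "\n";  // prints: true
    }

Of course nobody writes types like this on purpose, but it shows that "always" only holds when operator== is sanely defined.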


Me too, but not only in C++.

I can start with Python as first one.


It seems like this article is essentially complaining about the language giving you too much control (e.g. in his copy assignment operator example), when that's exactly what its goal is.


> What does that do? It makes the variable x refer to the object that y is referring to

In Java/C#, it doesn't always:

int x = 1; int y = 2; x = y;

The variable `x` does not refer to the object that `y` is referring to (as there is no reference involved at all).

Assignment is a procedure that makes `x` equal to `y` without modifying `y`. However, if `x` merely came to refer to the same object as `y`, then modifying that object through `x` after the assignment would also change what `y` sees. This destroys the ability to reason locally about the code, because each object essentially becomes as good as a global variable.

Even though C++ has areas where things get complicated, this is the one thing that C++ keeps very simple and consistent. There is only one definition of copy, assignment, and equality, whereas in Java there are multiple definitions (deep copy vs. shallow copy, `==` vs. `.equals`).

> That’s called value semantics, although I would prefer the term state semantics: objects do not have a value, they have state, and that’s what’s being transferred here.

No. It's value semantics, as these objects represent some entity. The interpretation of the state (or datum, to be more precise) is known as the object's value. For example, when copying a `std::vector`, the internal state of the vector will be different in the copy, as it will point to a new piece of memory; however, its value will still be the same.

> But experience shows that making a copy of an object is hard.

The compiler already generates a copy constructor and assignment for you. It's only necessary to write one when you are dealing with low-level pointers. Using the standard built-in types and containers, writing a copy constructor is never needed.
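
A short sketch of that last point: as long as the members manage their own resources, the compiler-generated copy operations already do the right thing (the Person type is hypothetical):

    #include <cassert>
    #include <string>
    #include <vector>

    struct Person {
        std::string name;
        std::vector<int> scores;
        // no user-written copy constructor or copy assignment needed
    };

    int main() {
        Person a{"Ada", {1, 2, 3}};
        Person b = a;                                // compiler-generated memberwise copy

        assert(a.scores == b.scores);                // same value...
        assert(a.scores.data() != b.scores.data());  // ...held in distinct storage (state)
    }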


There are two kinds of developments:

1. Application level: Typically manipulating lots of strings and data massaging. I prefer Java or Python for this. The IDEs and ecosystem are just so much faster to start with.

2. Systems level: Typically a high-performance system like a DB manager or a fast processing library like a message producer etc. These things are time critical and need performance.

I used to really love C++ but I agree, it takes far too long just to start making things run. Sigh!


When I was young, I thought I understood C++, because I had been taught it in school. I did not. Having interviewed several recent graduates who thought they knew C++, I believe this is fairly common, particularly among people with advanced degrees in engineering and the sciences. It's very easy, in C++, not to know the scope of one's ignorance.


This is very true. I've seen people who are very ignorant of C++ say that they don't find C++ complex at all, but very simple.

Most of the time this only shows the lack of deep knowledge on the language.


You can do the same with someone that says C is easy to master.

Just pick a copy of ANSI C and randomly ask a couple of UB questions.

You have a catalog of about 200 cases to choose from.


The author is relating a personal experience. As a former C++ dev I saw some wonderful code bases written in C++ and some horrible code bases, as in any other language. The only specificity C++ has, in my opinion, is that some features have a high learning curve and are not necessarily shared with other languages. Once you have mastered C++ it's easy to move to other languages, but it's hard to do the reverse path.


It takes balls to imply that one may use Java in lieu of C++ so as to reduce complexity.


Author here. Quote from the first paragraph of my blog post: "The decision to implement the math backend of GreaterThanZero in Java was driven by other considerations, primarily the appeal of Google App Engine as a hosting platform."


C++ is the only language that is compiled to binary, compatible with many compilers and OS, and can do both low and high level constructs.

It is true that it is a complex language, but this is true for every tool that allows you to do so many different things.

The complexity is not a goal in itself, it's just that it can do all of those things if you need them.

Simple tools are great and will save you a lot of time, but the truth is that you won't always be able to do everything with them. C++ allows you to do everything, and it is true that the cost can be very high and requires a lot of thorough knowledge and expert learning of what happens, but there are projects and cases where you just need to use C++ because that's the only choice you have left.

I agree that alternatives like Rust or D would be great as replacements, but the problem remains: if compilers are not mature on most platforms, and if you don't have a large programmer base because the basics of the language are not simple enough, the language won't grow.


Rust uses LLVM, so we share the same characteristics here as clang does.


The author states:

>Neither the performance issue that move semantics address nor the perfect forwarding problem exist in classic OO languages that use reference semantics and garbage collection for user-defined types."

I understand reference semantics but what are move semantics?

Also what is the "perfect forwarding problem"?


Author here. A few years back, I wrote an article about the issues that you're asking about. Since the article still comes up as the #1 result of Google search for "C++ rvalue references", I believe it's ok for me to recommend it here:

http://thbecker.net/articles/rvalue_references/section_01.ht...

Warning: Read only if you have a serious interest in C++.


Thanks, I thought this was well-written.


One word: Boost.

C++ went off into template la-la land some years back. C++ templates were not supposed to be a programming language. But they turned into one, and not a good one.

Look up the implementation of "max" and "min" in C++ templates.

Now there's a push for "move semantics", to keep up with Rust and the cool kids. But not with language support and a borrow checker. With templates and cute tricks with the type system.
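
For readers who don't want to dig through a standard-library header: the basic two-argument shape is tiny; the bulk the parent alludes to comes from the extra comparator and initializer_list overloads, constexpr, and so on layered on top. A rough sketch (my_min is a simplified stand-in, not the real library code):

    #include <algorithm>
    #include <string>

    template <class T>
    const T& my_min(const T& a, const T& b) {
        return (b < a) ? b : a;                            // works for any T with operator<
    }

    int main() {
        int i = my_min(4, 3);                              // T deduced as int
        std::string s = std::min({std::string("b"),        // the real std::min also has
                                  std::string("a")});      // initializer_list and comparator
        return (i == 3 && s == "a") ? 0 : 1;               // overloads
    }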


>Rvalue references provide move semantics, and they solve the perfect forwarding problem. Neither the performance issue that move semantics address nor the perfect forwarding problem exist in classic OO languages that use reference semantics and garbage collection for user-defined types.

I don't get this. The performance issue move semantics solve doesn't exist if you just use automatic garbage collection? Is this what the article is saying?


Those specific issues do not obviously exist if the language lacks value semantics. Other issues of course do exist, namely allocation overhead and pointer chasing and the difficulty of doing escape analysis.
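
Roughly: because C++ objects are values, copying a large one duplicates its storage, and move semantics exist to skip that duplication when the source is about to be thrown away. A minimal sketch:

    #include <utility>
    #include <vector>

    std::vector<int> make_big() {
        return std::vector<int>(1000000, 42);  // returned by value
    }

    int main() {
        std::vector<int> a = make_big();       // moved (or elided): no element-wise copy
        std::vector<int> b = a;                // copy: allocates and copies a million ints
        std::vector<int> c = std::move(a);     // move: O(1) handoff of the internal buffer;
                                               // a is left valid but unspecified
        return (b == c) ? 0 : 1;
    }

In a reference-semantics, garbage-collected language all three of those lines would just copy a reference, which is why the issue doesn't come up there.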


I think it depends on where you sit on the stack. A library can take a lot of the complexity away from the higher levels in C++ code. The user code can look fluent and understandable. The implementation side of the library being used (depending on where it sits in the abstraction) is where some ugly complexity shows. But this also generally reflects the competence of the authors.


On the other hand, it's hard to grab a half dozen C++ libraries off the shelf and just use them together without issues.


True. That is getting better though, and lots of people are putting time into solving that. It's tricky because no one is talking about the same thing either. Like, my project structure isn't yours.

It's not byte code, so distributing binaries is a crappy problem too. I am of the mind that a dependency system above the build system(s) is probably the best bet. Not as low level as the binary interface, but "I need library X >= version n.m.o"...


I think there's a "sweet spot" in complexity (at least for software): too complex gets rejected, but being too simple lacks traction. I think artifacts like C++ memetically infect the brain better than a more elegant PL might, exactly because they require you to think about them a lot, but you can still get things done and not be (totally) overwhelmed.


Isn't there anyway something of a counterpart to Conway's law in this? I mean, C++'s standard library is - or at least used to be - rather complex, so the people that would go and use C++ for their project would - more likely than not - carry at least some of that complexity over into the project...


I like the fact that in C++, the line

x = y;

behaves the same if the types are int and vector<int> (unlike e.g. Java and Python).


Except you are not always sure what = does when reading other people's code. :)


Well, you can define a function in Java called setFoo, but that instead sets bar. Similar.

You can intentionally confuse people in most languages if you want to.


Does anybody know a good tutorial or book for learning modern C++ 17, focused on best practices (new language paradigms, TDD/BDD, etc.) ? Something like Michael Hartl's Ruby on Rails Tutorial


Well, if you haven't encountered any brain-bending Javascript, you haven't gotten very far into Javascript.


C++ is not more complex than its competitors like Haskell or Rust.


There are different dimensions of "complex":

1. How cumbersome a language is to use (as a beginner, as a confident developer, etc).

- C++ is rather easy to start with, but takes ages to master and surprises even experienced users daily.

- Rust is hard to start, it states complexity of systems upfront in a tightly packed knot that should be handled all at once, but once you're past that, it's rather consistent.

- Haskell is very hard to start with, it is basically unlearning every imperative habit, but again, after that it's a powerful and mostly consistent tool (not without its warts, but drastically less than C++). There is some tolerance of complexity in the ecosystem, but it is clearly encapsulated in libraries and justified by papers and research interests.

2. How complex is the abstract core of a language.

- Haskell has had an amazingly simple, elegant and consistent core for decades; on the other hand, modern Haskell has accumulated a lot of research-y stuff (which is still optional to use), which may be nice/cumbersome depending on the situation. Run-time behavior feels woefully underspecified though, and can be flaky and hard to reason about.

- the mental model of Rust is bigger and more complex than in Haskell (traits, generics, lifetimes/ownership, the underlying C-like memory model), but it's definitely practical (if somewhat spartan) and consistent.

- C++ does not have a coherent vision at all: it is a pile of organically grown features with interesting (and sometimes useful) interactions. This pile is outright impossible to reason about formally.

The only definition in which "C++ is not more complex" is the definition of being easy to use for a beginner in the language.


Having used c++ for many years, this does not come across as a ringing endorsement for looking into Haskell or Rust.

I'm quite happy with my current gig using Go. Looking back, the culture of complexity surrounding c++ is obvious, but talking with my peers who have only ever done c++ - it's like they have Stockholm Syndrome.


Any language (or tool for that matter) that you have to invest heavily in and use for a while will produce symptoms of "Stockholm Syndrome". Moreover, once you are hard pushed to switch from it to something else, the withdrawal symptoms are quite pronounced too.

Go has its own share of idiosyncrasies, and it drives me nuts sometimes even more than C++ did, but these bursts of mental grind are less common and much shorter in comparison :). The arcane complexity of C++ (and tooling around it) is something that I don't miss at all.


I sometimes like to dabble in C++, late at night, while sipping some whisky. For short periods of time.

Last night, I decided to "practice" writing iterators. Specifically, I wanted to write a class that i could use in a new-style C++ range loop that would give me all files in a directory. Useless but fun practice, I thought.

I had Bjarne's latest book by my side, and the final draft of the C++17 spec open in a PDF, and it took me a good 2-3 hours of trying before I "got" it.

I'm still not sure I'm doing it correctly, btw. It does compile without warnings, and it works, but I can tweak the iterator function signatures in seemingly incompatible ways and it still doesn't complain and still works, even when I swear it shouldn't. "Ok, surely by doing this I'll break it!" - nope, still works.

I don't love everything about Go, and once in a while I'll wish for more expressiveness, but I was never, ever, even 10% as confused with it as I am whenever I attempt to do something seemingly trivial in C++.
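
For what it's worth, the surface a range-for loop actually requires is small -- begin()/end() plus operator!=, operator++ and operator* on whatever they return -- which is also why loosely related signatures can still happen to compile. A minimal sketch, counting integers instead of directory entries to stay self-contained:

    #include <iostream>

    struct CountRange {
        int n;

        struct Iter {
            int i;
            bool operator!=(const Iter& other) const { return i != other.i; }
            void operator++() { ++i; }
            int  operator*() const { return i; }
        };

        Iter begin() const { return Iter{0}; }
        Iter end()   const { return Iter{n}; }
    };

    int main() {
        for (int x : CountRange{5})
            std::cout << x << ' ';   // prints: 0 1 2 3 4
    }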


C++ is not more complex than its competitors like <insert two languages that have little in common yet C++ can be considered competition to both of them because it is such a complex beast>


What they have in common is a complex compile-time static typing system.

The source of complexity isn't a mythical "culture of complexity", the complexity is there because it's inevitable if you want to implement powerful compile-time type reasoning. (And you definitely want compile-time reasoning because it's the only way to guarantee performance and correctness of programs.)

The case of Haskell and Rust proves that the issue isn't cultural, it's inherent to the problem domain.


What exactly do you mean by "a complex compile-time static typing system"? I'm not familiar with Rust, but C++'s and Haskell's typing systems don't look very similar, or even of similar complexity.

I do agree there's an inherent complexity in the problem domain. It's just that some languages are more helpful than others in dealing with this complexity :)

PS: my own bias: Haskell's seems both easier (in general) and more helpful than C++'s.


> C++'s and Haskell's typing systems don't look very similar, or even of similar complexity.

yet they are. The difference between the Haskell type system and the C++ type system is that generic constraints on types are explicitly specified with typeclasses in Haskell, while they are implicitly specified with templates in C++ (though this changes in C++20 with concepts).

To make a "conceptual leap" beyond the C++ and Haskell type systems, you have to use a language with dependent types such as Idris or Coq (and actually, dependent types can be simulated in C++ ! http://pfultz2.com/blog/2015/01/24/dependent-typing/ but like all simulations, it will be slower than if it was implemented directly by the compiler).


That is not the only difference between both type systems. It's very different to use one or the other. Especially in practice.



