Hacker News
Why should I have written ZeroMQ in C, not C++ (2012) (250bpm.com)
223 points by marcobambini on April 21, 2020 | 163 comments



1. That "C equivalent" for error handling is not an equivalent at all. If one could handle the error in the same function then there is no reason to throw. The "C equivalent" is returning an error code, which has other problems.

2. One can handle errors in initialization without exceptions and half-initialized objects: Make your constructor private and expose a static member function that returns an optional<T> (where T is the given class).

3. Throwing in destructors is not a C++ problem, it's a general semantic problem around resources that aren't guaranteed to be freed up successfully. They are a pain in any language and you can only do best-effort approaches for not leaking them.
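Point 2 can be sketched roughly as follows. This is a hypothetical Rust analogue for concreteness (the C++ version would be a private constructor plus a static member function returning optional<T>); `Socket`, `open`, and the port-zero failure condition are all invented for illustration:

```rust
// Construction either fully succeeds or fails up front, so no
// half-initialized object can ever exist.
pub struct Socket {
    fd: i32, // hypothetical resource handle
}

impl Socket {
    // The only way to obtain a Socket. A real implementation would
    // actually acquire the resource here; we just simulate failure.
    pub fn open(port: u16) -> Option<Socket> {
        if port == 0 {
            None // construction failed; nothing to clean up
        } else {
            Some(Socket { fd: port as i32 })
        }
    }
}
```

Callers then match on the `Option` (or `Result`) and never see a partially built object.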


> 2. One can handle errors in initialization without exceptions and half-initialized objects: Make your constructor private and expose a static member function that returns an optional<T> (where T is the given class).

This comment criticizes an article written in 2012 about a project developed starting in 2007 for not using a technique that debuted in the 2017 edition of ISO C++. For projects that could force all consumers to use C++11 or later, it'd be straightforward to add your own.

The idea of doing so in a cross-compiler way when you need to support the original ISO standard or C++0x in the late 2000s sounds very problematic.


It's not hard to write an optional<T> in C++03, and it can likely be done in C++98.

If you don't like that, return a T*, null if not created correctly.


I didn't mean to suggest the language spec would have made it hard so much as that I'd have really not wanted to approach it in a way that was going to work with VC6/2002/2005/gcc 2.95/sunpro/ibm/hp compilers of the era. (I think it was 2009 before I got to push VC6 off the raft.) ISO support in C++ compilers was really bad for a long time.

Our typical pattern was to return T*. But that fits well with the "I should have just written it in C" argument IMO.


Completely agree. I was about to write something along these lines but you did a great job. :-)

In a decade using C++, I haven't encountered these problems. I never use exceptions and I do exactly what you suggested in #2: if construction can fail, have a factory method and a private constructor (I use a simple ValueOrError<T> type, rather than optional<T>, to be able to communicate information about the error, though: https://github.com/alefore/edge/blob/876c4328610262b11fda553...).

Cheers!



> If one could handle the error in the same function then there is no reason to throw.

Thank you, this made absolutely no sense to me. I use exceptions for "panic" situations -- when you can't recover from an error. If you can recover...why throw an exception?


> 3. Throwing in destructors is not a C++ problem, it's a general semantic problem around resources that aren't guaranteed to be freed up successfully. They are a pain in any language and you can only do best-effort approaches for not leaking them.

C++ is quite good here, arguably better than Rust, since in C++ you can add a `noexcept(true)` specifier to your destructor, and if it throws, your program terminates.

In Rust, you cannot really do that (e.g. catch_unwind won't catch exceptions thrown by sub-object destructors).

And well, C++ tries to completely ban throwing destructors by default, while Rust tries to support them as much as possible, with the most common consequence being memory leaks.


I am not sure what you mean here. "throwing in a destructor", aka "panic during Drop" in Rust terms, runs the risk of seeing "thread panicked while panicking. aborting." In general, Rust treats panics as being something that terminates execution, not something for error handling. Where are you seeing these memory leaks in Rust code?


In Rust, this destructor is perfectly fine:

    impl Drop for Foo {
        fn drop(&mut self) {
            self.cleanup();
            self.free_memory();
        }
    }
even if `self.cleanup()` panics. When that happens, `self.free_memory` will never be called, and memory will be leaked if `panic=unwind`, which is the default behavior. If there is a `catch_unwind` somewhere, these leaks will grow over time.

The same code in C++:

    ~foo() {
        this->cleanup();
        this->free_memory();
    }
does not have this issue, because if `this->cleanup()` unwinds, that unwind terminates the program, since `~foo` is `noexcept(true)` by default.

In Rust, one cannot make `Drop::drop` be "nounwind", so to emulate C++'s behavior, one needs to write:

    impl Drop for Foo {
        fn drop(&mut self) {
            if catch_unwind(AssertUnwindSafe(|| {
                self.cleanup();
                self.free_memory();
            })).is_err() {
                abort();
            }
        }
    }
or use a `DropGuard` to make sure that `self.free_memory()` is called if `self.cleanup` panics, etc.
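The `DropGuard` approach mentioned here can be sketched like this. The names (`FreeOnDrop`, `cleanup_then_free`) are illustrative, not a standard library type:

```rust
// Wrap the must-run cleanup in its own type: its Drop impl executes
// during unwinding, even if an earlier step panicked.
struct FreeOnDrop<F: FnMut()>(F);

impl<F: FnMut()> Drop for FreeOnDrop<F> {
    fn drop(&mut self) {
        (self.0)(); // runs on both the normal and the panicking path
    }
}

fn cleanup_then_free(cleanup: impl FnOnce(), mut free_memory: impl FnMut()) {
    let _guard = FreeOnDrop(|| free_memory());
    cleanup(); // even if this panics, _guard still frees the memory
}
```

This way the memory is reclaimed whether `cleanup` panics or not, without aborting the process.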

Fallible destructors are a very weird thing. In C++, destructors are infallible (at least by default). Rust supports fallible destructors, with all the cost that implies (every time an object is dropped, the drop can fail, adding another "return" point to your function), yet minimal or zero value.

At least, I couldn't find any examples in the Rust book / reference / rust-by-example that show in which situations unwinding from a destructor is a good thing to do.


> In Rust, this destructor is perfectly fine:

Sort of; it's very rare. You don't really write free_memory, unless you're doing some very specific unsafe things.

> When that happens, `self.free_memory` will never be called, and memory will be leaked if `panic=unwind`, which is the default behavior.

It will be leaked if panic=unwind and if you use catch_panic. catch_panic is not used very much. Partially because panic=abort is also a valid semantic, and partially because if this Drop impl is called when panic happens, regardless of panic settings, you'll get an abort if self.cleanup() panics.

> Rust supports fallible destructors,

This isn't really true. Fallibility in Rust is spelled "Result<T, E>", and drop does not return one. You're talking about a non-recoverable error, which, it is true, Rust does not really give you tools to prevent. That's because it's very, very rarely used, because the whole intention is to end the current thread of execution.

> At least, I couldn't find any examples in the Rust book / reference / rust-by-example that show in which situations unwinding from a destructor is a good thing to do.

There isn't, which is why it isn't done, which is why your point confused me :)


Well, rarely done isn't the same as never done. docs.rs has exactly the tradeoffs the parent comment is talking about: we have a long-running daemon thread that uses `catch_unwind` and builder threads that occasionally panic. We've had [issues with memory leaks](https://github.com/rust-lang/docs.rs/issues/656) in the past - they weren't related to unwinding that I know of, but it's still possible that they were.

However I'm not in favor of the proposed solution - if the docs.rs server aborted every time a thread panicked we would have a lot of outages!


> Well, rarely done isn't the same as never done.

Absolutely. But I do think the difference between "idiomatic" and "rarely done" is valuable, here. And as you said yourself, it's not clear that these leaks are caused by this kind of thing.

If someone saw Rust code leaking memory all the time due to catching panics, I'd want to know about it, because it's very contradictory to my own experience, and I think that's interesting.


Correct me if I'm wrong, but panics themselves are fairly rare.

That's not the case in C++ where "throw" is a keyword you are expected to use for error handling.

If you take this to a Java example, a panic is more like an "Error" and less like an "Exception". That is to say, in rust, if something panics it is a sign of a major bug that shouldn't have any option for recovery.

With all that said, the same concept exists in C++. Try doing a 1/0 in C++ and see what happens. You don't get some nice exception to catch and you can't add a "noexcept" clause to stop it from happening.

In the same original example, if that 1/0 error happens in the underlying method and you handle it instead of crashing, by tying into the OS-specific "div by zero" garbage, you too can create a memory leak in code thought to be safe.

It goes to show that you can't (and shouldn't) expect to recover from everything. Crashing, IMO, is usually far safer than trying to fix things up and move forward.


Conceptually, panics should be rare, because one firing means that some sort of unexpected problem has occurred.

However, the real world is not "conceptually." I don't think there's any real data about how often they happen, but at least my experience is that tools I write in Rust rarely end up showing me panic output.

> That's not the case in C++ where "throw" is a keyword you are expected to use for error handling.

Yes, that's correct. The intended semantics of the two features are very, very different.


A fun example here is rust-analyzer: we implement cancellation via unwinding. This is not technically a panic, but the mechanism is the same, and it more or less is invoked every time a user types something in a file.


Where and why does rust-analyzer unwind from destructors ?


> However I'm not in favor of the proposed solution - if the docs.rs server aborted every time a thread panicked we would have a lot of outages!

Where is the proposed solution?

Where and why does docs.rs unwind from destructors?


The solution for docs.rs is handling errors correctly and not panicking.


I was going to talk about some panics we've had recently, but those will hopefully be fixed soon so I don't think that quite fits my message. Instead I want to talk about why 'handling errors correctly and not panicking' wouldn't work in general.

In order for that to work, we'd have to have 0 panics - not just a few, but none. That requires none of our code to panic, none of our dependencies to panic, and none of our uses of the standard library to panic. If you gave me a limited time frame to run the server - say a week - I think it's possible to make docs.rs that robust. However, for a server that's meant to run 24/7 for weeks on end, I just don't think that's realistic.

What `catch_unwind` lets us do is localize panics to a single web request or crate build instead of it affecting the whole server. Of course we don't want to have 500s for any user, but we _especially_ don't want the whole server to be down until a team member has time to ssh in and restart it manually.
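A minimal sketch of that localization pattern, with a hypothetical `handle` function standing in for a request handler or crate build (the "request 13 is buggy" condition is invented):

```rust
use std::panic::{catch_unwind, AssertUnwindSafe};

fn handle(request: u32) -> String {
    if request == 13 {
        panic!("bug while handling request {}", request); // simulated bug
    }
    format!("ok: {}", request)
}

// Each unit of work runs under catch_unwind, so one panicking request
// produces a 500 instead of taking down the whole loop.
fn serve(requests: &[u32]) -> Vec<String> {
    requests
        .iter()
        .map(|&r| {
            catch_unwind(AssertUnwindSafe(|| handle(r)))
                .unwrap_or_else(|_| "500 Internal Server Error".to_string())
        })
        .collect()
}
```

A real server would do this per thread or per task, but the containment idea is the same.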

Of course, after that there's whole question of whether making the server that robust is a good use of time in the first place. Is it better to fix a few 500s every week or to fix long-standing bugs? I don't think it's clear that the 500s are more important if they don't affect many users.


Look at how Erlang handles this. Crashing on error is the encouraged policy. A managing process will notice a crashed process and restart it.

Basically crashing is the safe way to release all resources in a problematic situation, at the cost of terminating the process. It's easy when you don't have shared resources at all (Erlang's case), and harder with threads: one thread crashes and frees its resources, another tries to use a shared resource already deallocated. If this can be avoided, life becomes vastly easier, at the cost of higher resource consumption.
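The supervision idea translates to threads too. A tiny Rust sketch (names and the simulated failures are illustrative) of a managing loop that notices crashed workers and restarts them:

```rust
use std::thread;

// Spawns a worker per attempt; counts how many crashed and were
// "restarted" by the supervising loop.
fn supervise(mut attempts_left: u32) -> u32 {
    let mut restarts = 0;
    while attempts_left > 0 {
        let handle = thread::spawn(move || {
            if attempts_left > 1 {
                panic!("worker crashed"); // simulated failure
            }
        });
        if handle.join().is_err() {
            restarts += 1; // crash noticed: loop spawns a replacement
        }
        attempts_left -= 1;
    }
    restarts
}
```

Unlike Erlang processes, these workers share an address space, so this only works if crashed workers can't leave shared state corrupted behind them.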


I think we're saying the same thing from two different perspectives :) The 'managing process' is the daemon thread. The 'crashed process' is a worker thread. There's no need to worry about corrupted memory since all state is shared through the database.


The main practical difference is that if you use a process for isolation, the OS will garbage collect all process resources on termination (memory, file descriptors, threads, mutexes, etc.).

If you are using a thread, you better not be leaking anything. Otherwise you are one ulimit's worth of resources away from putting your "main" process into a crashing loop; e.g., if you leak one file descriptor per crash, you can only crash your thread fewer than ~1000 times before hitting a typical fd limit.

Performance-wise, you are probably worse off with threads as well. On Linux, you can initialize a web server in your main process and spawn new processes by forking it. Not only is forking pretty much instantaneous, the child process also starts with the same state as the parent, so you instantly get a fully initialized web server (e.g. with multiple threads already started in your task pool, etc.).


Hmm, this is an interesting point about the OS cleaning up resources. I'll see if it's feasible to switch from threads to processes. Thanks!


> There isn't, which is why it isn't done, which is why your point confused me :)

This shows why it was a bad decision. It allows doing something in the language (unwinding from destructors), that a lot of code needs to protect against (e.g. all the standard library collections, all collections in general, all types that own a resource, etc.), for absolutely no added value (doing that isn't a useful thing to do).

Aborting the process if `Drop::drop` unwinds would have simplified both the language and the programs written in the language without downsides.

That doing that was a good idea was known (CERT C++ requires non-throwing destructors), and C++ actually made a backward incompatible change in C++11 to fix that (making destructors `noexcept(true)` by default, e.g., see http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2010/n316...).


> for absolutely no added value (doing that isn't a useful thing to do).

You yourself said that web servers are a place where this is a good idea.

> That doing that was a good idea was known (CERT C++ requires non-throwing destructors), and C++ actually made a backward incompatible change in C++11 to fix that (making destructors `noexcept(true)` by default, e.g., see http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2010/n316...).

Again, throwing in C++ and panicking in Rust are used very differently idiomatically, and in practice, due to the double panic issue, as I mentioned, this is effectively the same for Rust. Yes in theory you can panic in Drop but it will often end in an abort, so doing it for some kind of recoverable problem makes nearly no sense.


> You yourself said that web servers are a place where this is a good idea.

I said that "panicking from Drop::drop is a bad idea, but that panicking is a good idea in general (e.g. in web servers)".

We are in agreement that panicking is a good idea in general, but you seem to be turning that around, arguing that "Panicking is a good idea in general, therefore panicking from Drop::drop is a good idea". One does not follow from the other.

If you believe that unwinding from Drop::drop is a good idea, enumerate the value this feature adds, its costs, and make a case about why this trade-off is worth it.

All languages with unwinding in the same space as Rust do not allow unwinding from destructors, because it adds no value, and adds significant costs. For example, C++ and D do not support it. C++ used to support it, like Rust, but considered the value it added as "negative" (that code deserved to be broken), and changed its semantics to forbid this by default. AFAICT, the same arguments that apply there, apply 1:1 to Rust.

In Rust, the cost of this feature is real, e.g., from basic impls like the impl of `Drop` for slices, reused by most collections, to pretty much every Drop impl of every type guarding an important resource in a module that uses unsafe (e.g. the many Iterator drop guards, etc.).

So I stand by my original claim: panicking from Drop::drop is a bad idea. There are no worthy use-cases for it, and its costs are real, pervasive, and unnecessary: in practice nobody does this, yet every writer of unsafe code must write code to defend against it, which makes tricky code even trickier to write, read, and properly test.

C++ is the sane language here. Rust is not.


> If you believe that unwinding from Drop::drop is a good idea, enumerate the value this feature adds, its costs, and make a case about why this trade-off is worth it.

I don't believe it's a good idea, which I have repeated in this thread numerous times. You seem to keep implying that it's an often used and good idiom, and I keep saying that it's not, which is why it's not used. Most of your points are about unwinding generally, not unwinding from Drop. I think it's really confusing the issue here.

I don't really think this is productive. It seems we're in agreement about one thing, though I am very confused as to how. Let's just leave it at that.


> You seem to keep implying that it's an often used and good idiom,

I said that this is code that safe Rust allows you to write [0] - it's actually safe code that beginners could write by accident [1] - and therefore all unsafe Rust programmers need to actually write Rust code that defends against this happening: otherwise their safe Rust APIs over unsafe Rust code are unsound, because if someone were to write safe Rust programs like [1], their unsafe Rust code would exhibit undefined behavior (the standard library and the language itself being the prime examples of this [2]).

The claim that panicking from Drop::drop will often cause a double-drop or terminate the program assumes that all unsafe code is written by people that properly defend all their unsafe code against it happening. We agree that no reasonable programmer would deliberately write such Drop impls, so I doubt that. It suffices for execution to reach a path that isn't properly guarded for the program to exhibit UB, and at that point, there aren't any guarantees about double-drops panicking or aborts doing anything meaningful.

That is, we have a feature we agree on does not do anything useful, yet adds a cost to all unsafe code.

[0] As opposed to C++ or D, which do not allow you to write Drop::drop implementations that unwind/fail, and therefore their programmers do not need to write code to protect themselves from something that nobody does, just in case somebody actually ends up doing it by accident.

[1] As easy as:

    struct S;

    impl Drop for S {
        fn drop(&mut self) { unimplemented!() }
    }
[2] These safe code examples are actually used by the standard library test suite to test, e.g., the collections. So it isn't just the cost of having to read/write unsafe code that protects against something that does not make sense doing, but also the cost of then adding and maintaining the tests for those code paths.


> Fallability in Rust is spelled "Result<T, E>", and drop does not return one.

This isn't really true. All Rust functions that can panic are fallible, independently of whether their return type is `Result` or not.

There are idioms to use `Result` for recoverable failures and panics for "harder-to-recover" failures, but that's about it, and people do use `catch_unwind` on `main` to make their web-servers live forever, and they are doing it right, IMO (this is one of the many justified usecases of `catch_unwind`).

If the errors would actually be non-recoverable, `catch_unwind` wouldn't need to exist.

Also, even if you think of panics as "errors that will kill this thread", your program has multiple threads, and resources leaked by one thread continue to be leaked (e.g. memory is not reclaimed by the OS until the whole process exits), so you still have a leak.


> This isn't really true. All Rust functions that can panic are fallible,

Yes, in the abstract sense of fallible, but not in the terminology of Rust, or how the features are used generally. Defaults and language matter, and we've seen the actual usage follow. I don't ever remember seeing a crate suggest catch_unwind for error handling.

> your program has multiple threads, and resources leaked by one thread continue to be leaked

Yep, my bad, I simply made a mistake here.


> It will be leaked if panic=unwind and if you use catch_panic.

This isn't fully accurate. It is leaked if you use `panic=unwind` and the panic unwinds the function. At that point, the program is still running, but the memory is unreachable through a pointer in the program, and therefore leaked.

Whether you actually catch the panic afterwards doesn't matter. Not catching it will terminate the program, but then the program will be terminated with "leaked" memory, and tools like valgrind will report it as such.


(This comment is basically a duplicate of one of your other ones)


> It will be leaked if panic=unwind and if you use catch_panic.

An uncaught panic terminates the thread. If that's the main thread the process terminates, but on other threads it continues with leaked memory, even without `catch_unwind`.
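That first point is easy to check. A small sketch, with an invented worker bug: the spawned thread panics, the parent observes the failure through `join`, and the process keeps running (with anything the dead thread failed to release still held until the process exits):

```rust
use std::thread;

// Returns true if the worker thread panicked while the current
// (spawning) thread survived to observe it.
fn spawn_and_survive() -> bool {
    let handle = thread::spawn(|| {
        panic!("worker bug"); // simulated bug; ends only this thread
    });
    // join() returns Err when the thread panicked; no catch_unwind needed.
    handle.join().is_err()
}
```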

> catch_panic is not used very much.

Are you sure about that? I'd expect most multi-threaded (web)servers to continue after a panic. Otherwise any bug causing a panic (common in my experience) would force a potentially expensive restart of the server.


> If that's the main thread the process terminates, but on other threads it continues with leaked memory

Yep, my bad, I simply made a mistake here.

> Are you sure about that? I'd expect most multi-threaded (web)servers to continue after a panic.

The two main use-cases for panics seem to be:

1. Web servers

2. FFI, since a panic across the boundary is UB

On 1, well, this is actually a contentious point. In general, the web world has moved more and more to disposable web servers. You have to be resilient to something killing your web server process, so you need that infrastructure anyway. Servers restarting isn't very expensive because you have a bunch of them already, and they don't start serving requests until they're up. Of course, "defense in depth" is a good idea, so doing both makes more sense than just one, but you can't get away with just catching panics if you want a robust service. And a lot of web servers are single-threaded these days...

Regardless, while web services are a big market for Rust, they're only one thing that it does, so I still think of this as "not that popular." Maybe that's wrong :)


A reproducible panic can easily take down your whole fleet of servers, so you'd end up with a one process per request model without planning for this.

Plus if a webserver handles more than one request in parallel (affects both multi-threaded and node.js style async designs), you want to finish handling all the other requests before shutting down gracefully.

In my experience with C#, even exceptions caused by bugs/failed assertions (which would map to panics in Rust) almost never cause issues requiring a server restart. It's a bit worse in Rust, since a GC reclaims memory more reliably in case of errors than Rust does (though even in Rust, leaks should rarely happen on panic).


It seems like they're trying to solve a problem (manually calling destructors to free memory) that is important in other languages but not Rust, on account of borrowing. Even in the author's examples I feel like it's a problem directly solved by lifetimes, since all the reasoning about freeing that memory is already there.

Interestingly, they picked the best possible case to showcase Rust's strengths, where the free happens on drop. If it were a case where you needed strict control over where that memory came from and _when_ it gets freed, C would seem like a nice fit, but Rust is great when you only care that the memory is gone once your variable's out of scope.


Unless I'm misunderstanding your comment, you can make panics abort rather than unwind.

https://doc.rust-lang.org/edition-guide/rust-2018/error-hand...
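For reference, that's a one-line Cargo profile setting; with it, a panic calls abort instead of unwinding, so destructors don't run at all:

```toml
[profile.release]
panic = "abort"
```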


You are misunderstanding the comment, because it's specifically talking about the case when panic is set to unwind and there is catch_unwind handler.


if `self.cleanup()` panics, the program crashes, though? How is a memory leak even a relevant concept in that case?


Only if someone specifically catches the panic, averting the crash. That's what the catch_unwind stuff was about.


Huh, I didn't know about catch_unwind. Very cool


Note that noexcept(true) has been the default for destructors for a while (since C++11). It was a breaking change, but so little code can actually handle throwing destructors that in practice it had little consequence.


The author asserts that "The decoupling between raising of the exception and handling it, that makes avoiding failures so easy in C++, makes it virtually impossible to guarantee that the program never runs into undefined behaviour." (and all the woes he encounters afterwards stem from trying to avoid exceptions due to that assertion). But that is not my experience at all. Exceptions are much more reliable than error codes - there will always be a case where you forget to propagate an error you got, but you cannot "forget" to propagate an exception... (and if you do, you can always run a debugger post-mortem to get a nice stacktrace of where the exception was thrown - `coredumpctl gdb` is a super nice tool for that on Linux systems with systemd). And languages without exceptions just end up with a galore of `if err != nil { panic(err); }` or similar horrors...

Also, regarding "compiler is likely to produce more efficient code", benchmarks have shown that exceptions are generally faster: http://nibblestew.blogspot.com/2017/01/measuring-execution-p...


> you cannot "forget" to propagate an exception

Not only that, but you don't have to write a ton of boilerplate to manually propagate errors back up the stack since the compiler will do that for you. And it will do it in a consistent, well-defined manner.


Furthermore, properly structured C++ applications with RAII use the same path for normal cleanup for exceptional cleanup meaning that you're always covering that code. You don't have an entirely separate error handling path that only gets executed for errors.


ZeroMQ is a library to be called from other languages. You do not want to ever accidentally throw an exception from C++ to a function that is called from outside C++.


That’s fine, the library can catch exceptions at the boundaries of the C API. That’s still usually going to be a lot less tedious than manually propagating error codes all over the internals of the library.


> there will always be a case where you forget to propagate an error you got

Not if the ecosystem supports Result types a la Rust (and others). You don't get to access the return value unless you deal with handling the error first.


> You don't get to access the return value unless you deal with handling the error first.

Well, that's exactly what I was referring to. 80% of the time you don't / can't know what to do with the error at a given call site, so from what I could see, people either end up doing nothing and "abandoning" the error (bad) or panicking (worse). It's better to just let the error go transparently, so that someone higher up the call stack who can actually do something meaningful with it has the chance to.

source : https://github.com/search?l=Rust&q=panic&type=Code


> Well, that's exactly what I was referring to. 80% of the time you don't / can't know what to do with the error at a given call site

If you're working with a `Result` type, you can pass the wrapped value up to wherever you want to/can handle it. Either way, you have to handle it. And the type checker ensures that you do.

Typed error checking as available in Rust, Scala, OCaml, ReasonML, Swift, Haskell etc, etc is a far, far better solution than exceptions. I've worked on large code bases with both. There's no comparison.

Even Java's checked exceptions are far, far better than unchecked exceptions like C++'s. People complain, but it's shitty coding practice not to properly handle a function that can throw (and I speak as someone who's done this and gotten badly bit by it a dozen times before I wised up).

If there's absolutely no recovery possible from an error, then yes, an exception may be acceptable. Otherwise, error values, which can be type checked, are the way to go if offered in your language.


People may do it wrong in rust but I personally like that rust gives me the tools to handle errors in my preferred way: explicitly passing them up the stack with minimal verbosity (using ?) or explicitly having to handle them in some way.

I don't like verbose error checking on every function call, which is ugly and which I might forget to pass up (C-style), and I don't like exceptions, which aren't explicit and which I might forget to handle locally if I wanted to.
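For example, the `?` operator propagates the error up the stack explicitly but with a single character (a minimal sketch; the function name is invented):

```rust
use std::num::ParseIntError;

fn parse_and_double(s: &str) -> Result<i32, ParseIntError> {
    // On Err, `?` returns early from this function with that error;
    // on Ok, it unwraps the value.
    let n: i32 = s.parse()?;
    Ok(n * 2)
}
```

The caller sees `Result` in the signature and the type checker forces them to deal with it, unlike an invisible exception path.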


Hm, we must read very different Rust code. ? is even easier than a panic. And that search has a ton of false positives: both in general, as well as "unwrap" being the more common way to get a Result to panic.


c'mon...

    $ cd servo/components
    $ rg -F 'panic!' | wc -l
276

Likely a fair amount is justifiable, but what about things like this: https://github.com/servo/servo/blob/master/components/script... or this: https://github.com/servo/servo/blob/master/components/gfx/pl... ?

that's a really strong code smell (or at least doing a similar thing in a C++ program, e.g. calling abort() / SIGTRAP, would be, even with a signal handler, a really really really long discussion in code review)


loc in that same directory reports 289,958 lines of Rust code; that is a ratio of about 0.00095, which is pretty small. There are 175 unwraps, which I am actually shocked is a smaller count than the panics.

And yes, I do think that both of these are pretty justifiable: both of these are effectively assertions, which is not unheard of in C either.


> benchmarks have shown that exceptions are generally faster

The blog post you linked to says "Immediately we see that once the stack depth grows above a certain size (here 200/3 = 66), exceptions are always faster. This is not very interesting, because call stacks are usually not this deep (enterprise Java notwithstanding). For lower depths there is a lot of noise, especially for GCC ..." So ... not exactly "generally faster".

Also, the test is only for Linux. The same test on Windows/VC++ will probably run a lot slower ... again not "generally faster"


Look at all the results. Even before that exceptions are more often a win than a loss.

> The same test on Windows/VC++ will probably run a lot slower ...

By default on 32-bit, with SJLJ exceptions, that's likely. But on 64-bit Windows, the default exception handling (SEH) uses a similar mechanism to the one on Linux and should have comparable performance.


> But on 64-bit windows the default exception handling (SEH) uses a similar mechanism than Linux and should have comparable performance.

AFAIK SEH in Windows calls RaiseException() which in turn causes a user/kernel mode transition, probes for exception/termination handlers, and vectored exception handlers depending on the severity. It's been a while but I'm not sure the code GCC generates in Linux is quite like this.


I missed this in my previous reply ...

> Look at all the results. Even before that exceptions are more often a win than a loss.

The blog post you linked to effectively says "it depends"


This article is simultaneously both obsolete and completely current.

Obsolete in that the C++ committee has addressed almost all of the issues raised (e.g. constructor semantics, exception semantics, et al.), though the discussion of site-of-error/handling-of-error continues unabated.

Later versions of C++ (17 and 20) are powerful and expressive systems programming languages that aren’t like the object-oriented messes of old.

The reason that this article remains completely current is that I suspect the majority, and likely vast majority, of C++ is still written in C++03/C++11 and full of dreadful issues like the ones in the article.


Feel like the adoption of new versions of C++ is largely determined by sector and behaves like a bimodal distribution. Most Big Tech / Internet companies I've worked at have already transitioned to at least C++11 (and a significant number have transitioned to C++17), but for others it's really dreadful because they are stuck with random binary blobs, legacy compiler toolchains, or reliance on UB or hacks, and unless a unicorn appears the situation just continues.


I haven’t written anything nontrivial in C++ for many years and so far have only learned and used a couple minor features newer than C++11. Would you be kind enough to point to some resources on why C++17/20 are “powerful and expressive systems programming languages that aren’t like the object-oriented messes of old”?


I wrote a reply to this question in a comment parallel to yours.

In addition:

I don't write many classes in the sense of the Gang of Four book. I do write classes that are really just ways to speak with the type system, e.g. a "class" that has one instance variable that is an integer, so takes up only as much space as a native int, yet perhaps acts a special kind of table index.
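A minimal sketch of that kind of class (the names here are purely illustrative): a wrapper around a single integer that costs nothing at runtime but makes misuse a compile error.

```cpp
#include <cassert>
#include <cstdint>

// A "strong index" type: same size and cost as a raw integer, but a
// distinct type, so a RowIndex can't be accidentally passed where some
// other integer is expected.
class RowIndex {
public:
    explicit RowIndex(std::uint32_t v) : value_(v) {}
    std::uint32_t get() const { return value_; }
private:
    std::uint32_t value_;
};

// No space overhead over the underlying integer.
static_assert(sizeof(RowIndex) == sizeof(std::uint32_t),
              "wrapper adds no storage");
```

Because the constructor is `explicit`, `RowIndex r = 5;` won't compile; you have to say `RowIndex r{5};`, which is the point.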

Sequence manipulation, auto, and destructuring bind provide clearer ways to express algorithms without getting hung up in towers of class abstraction (an evil fetish), yet without the low-level manipulation and boilerplate you'd see in C.

In C++20 generics and templating are almost unified, which makes expression much clearer (most of the old templating syntax is unnecessary in most cases). It lets you manipulate the type system much more clearly. You can do a lot more at compile time -- you'll never get the expressive power of the Lisp macro system, but it's a lot closer, with a lot of the old syntactic junk discarded.

Auto declaration has transformed the use of the type system and genericity.

In terms of resources to learn this stuff: I haven't found any single good source of info for these revisions.

I once wrote a comment on this but it took a while and of course vanished into the whirlwind of YC comments. So instead I'll suggest a web search for "new in c++17" and "new in c++20". I know that's kind of lame, but that's the best high level answer I can suggest.

I will say www.cppreference.com is excellent, but when it comes to newer features you often need something a little higher level to really grok what it says. It might answer your specific question though.

I follow the trip reports from committee meetings and read proposals that happen to interest me which often give context to a design decision.

I started a new project (blank editor buffer) using C++17 in 2016. I hadn't written C++ since around 2000 (though I'd previously been involved in g++ development since around '87).* So I treated it as if it were a brand new language I'd never seen before. The only book I bought was Stroustrup's "A Tour of C++", which introduced C++14; then I used searches like the ones above to keep up with developments in C++.

If you're in the same boat I'd say choose C++20, even though the full language isn't available yet, which is what I did with C++17. Some features I was able to bring in from Boost (which is where many new features begin) and some I just couldn't use until it was time to refactor something anyway. Still, this really made a difference.

* yes before g++ was released by the FSF -- tiemann and I both worked at MCC at the time and used to have dinner at 3AM to discuss what each of us was working on, which is how we later came to start Cygnus.


Thanks!


Yes, please.


Look for some parallel replies.


True, I recently returned to C++ after many years' absence and I'm thrilled with all the good stuff that has been added, BUT it took me quite a while to figure out how to unlearn/relearn my old practices.


I decided to treat it as if it was a language I had never seen before. That helped a lot.


Likewise, but also I now realise, after a long break I really like programming with "system languages". I get a certain amount of satisfaction that I don't get with higher-level languages, where I spend so much time plugging together half-built crappy modules.

Partly this is because I have realised I like knowing the nuts and bolts of what is going on in a computer and solving problems with consideration of how they will run. It also has fewer magic and crappy layers going on.

C++ has evolved a lot from what I used many years ago, and it looks mostly for the better, thankfully we have escaped the 'thou must OO' age.


Totally agree. I enjoy the productivity in Python but when you look what's behind numpy, scipy, etc. there's a lot of C, C++ and even Fortran. It's good to know both.


Had a brief look at boost-python (not actually done anything with it), and that looks quite straight forward for exposing C++ to Python.


I'm not very familiar with the C++17 and 20 changes, how do they address the errors-during-constructor/destructor issues that the article mentions?


Note: I also wrote a parallel reply with some other commentary.

On the specific issue of constructors, a slight change to the semantics of error signalling in the constructors and the addition of the ":" syntax to constructors made a lot of difference.


I don't get it. Exceptions are bad -> don't use exceptions. Constructors can fail with exceptions -> use a factory pattern and handle errors explicitly. Destructors can't fail: right, so don't, and yes, if you need to do finalization that could fail then that has to be explicit (as in C) and you can handle errors there, and yes, that leaves you with half-destructed objects, but so what.

No, I think high-level static data typing is very important. Today Rust would be a better language than C++ for this, but C++ is still better than C.


It seems like the author's main complaints are avoided if one follows the Google C++ style guide, which says:

- Don't use C++ exceptions

- Don't do work in constructors (prefer "Init" or factory functions instead)

https://google.github.io/styleguide/cppguide.html

There may be other reasons to prefer C over C++, but if you don't like exceptions, you don't have to use them.


> Don't use C++ exceptions

That's not what Google's C++ style guide says at all. They only argue that Google's old C++ code base is not exception-tolerant; thus, as they don't want to waste time and don't want to risk adding bugs by refactoring their legacy code, they simply decided not to use exceptions.

That's it.


You're right. My point wasn't that one should blindly follow Google's style guide and avoid using C++ exceptions, only that there is an alternative to the C/C++ dichotomy that the author presents. It's possible to write millions of lines of C++ without using exceptions. There's a trade-off, which is described in more detail in the style guide, and the choice depends on the specifics of one's project.


The Google style guide says “On their face, the benefits of using exceptions outweigh the costs, especially in new projects. However, for existing code...” and they use a lot of existing code.


> they use a lot of existing code.

Doesn’t everyone? Lots of C++ code is not exception safe, including large libraries such as Qt.

If I’m writing code that uses such a library, would it be better to take Google’s approach and avoid using exceptions myself, or try to “wrap” the non-exception-safe library somehow?


There is no single answer to that — depends on the library and how you use it. But if it’s not exception safe it’s not exception safe.

There is no way to tell if a constructor failed without catching an exception.


“prefer "Init" or factory functions instead“

I vote for factory functions. They bring you pretty close to how C would do it.


Regardless of what the style guide says, multiple studies of real-world exception handling correctness have been done, and the results are clear: Very few (if any) real-world codebases correctly handle exceptions.

As the article points out, correct exception handlers would grow exponentially with program size. People can’t reason about that, so they don’t handle things like multiple error paths, or they conflate one error code with another, and so on.

The results hold across programming languages.


I've always felt that object-oriented programming as a concept (that is, define structures and then define functions to operate on them together) is useful: object-oriented programming as a language design is not. Once you understand what encapsulation, inheritance and polymorphism are, you're much better off using them as guidance to structure your program in an otherwise procedural language like C than wrestling with a few dozen new keywords and an opaque specification.


C++ is a multi-paradigm language, and programming in it does not force you into the object orientation. Sure, there is a lot of encapsulation going on in the standard library, but it does not mean that you have to follow the same approach in code you write (although I find that it is the ability to specify destructors that separates C++ from the plain old C, and you cannot do that if you do not write classes).


Well, using C++ as a C with objects has always been a choice. One I personally believe in. That means you basically write a C program, then when it makes sense you create c++ classes/templates as needed.

At first this sounds like the C++ features are infrequently used, but the last major project I wrote like this, probably 90% of the code was encapsulated. But there was very little class->class communication outside of a few global classes due to the use of a dbus like abstraction allowing all the individual classes to communicate out of band. (sort of like a collection of micro services if you will all running in the same process as different threads using a global message broker).


> I've always felt that object-oriented programming as a concept (that is, define structures and then define functions to operate on them together) is useful: object-oriented programming as a language design is not.

What you describe here are abstract data types. Unless it involves inheritance and/or dynamic binding I wouldn't call it object-oriented programming.


I've used C++ for decades, but always for projects that were supposed to last long. With that limitation, I've consistently avoided two things:

1) dependence on external C++ libraries, especially boost.

2) trying to always have the code using the latest changes in the standard proposals or the standards.

With these two limitations, and a lot of additional internal "practices" it's possible to maintain a long-running C++ project without doing what I consider "wrong" stuff.

Note: I grew up on assembly, so I was always biased by considering what the optimal code should be as presented to the CPU, after the compilation.

The smaller the project and the environment around it, the more these preferences of mine can be relaxed. There are surely use cases where "anything could go." But then, the question is always: why not something "more convenient" than both C and C++?

But now I almost think that if I were forced to reduce these "hard project" rules and practices of mine to a minimum number of words, I would probably state them as "only C allowed."


Oh, the irony of preferring to use C in order to avoid undefined behavior. C is every bit as much a minefield of undefined behavior as C++ is.

https://www.i-programmer.info/news/184-cc/11862-c-undefined-...



I was trying to figure out if this was satire.

C++ gives you strictly more tools, particularly for guaranteeing reliability, and they're all optional.

What's particularly laughable here is that the split ctor/init pattern is allegedly driving them towards C, whereas C doesn't even give you dtors! If you're forgetting your init calls, are you really telling me you're not going to forget your destructor call? There are plenty of approaches besides split-init anyway, but it feels like the author feels pulled towards C for other reasons and is looking for reasoning after the fact. C is not a magic bullet for simpler code!

C is useful for solving social problems with deciding what features and patterns of C++ to use, if you've given up on code review and style guidelines. If you decide that's you, good luck! You'll need it.


> C++ gives you strictly more tools, particularly for guaranteeing reliability, and they're all optional.

You are right. But it doesn't help. When I used C++, I constantly shot myself in the foot by using its features. It did not feel like overusing; I just wanted type-safety and to replace macros. It was impossible to restrict myself to useful features, because they are all useful in their way, only they come at the price of spreading everywhere until you no longer understand what is going on. Typing rules are insanely complex. Overloaded template error messages and hilarious amounts of boilerplate member functions, just to satisfy the constructor rules and to catch the best overload for every integer type, were the result.

I went back to C. It's a relief.


You wanted type-safety and to replace macros, but then you went back to C?


> You are right. But it doesn't help. When I used C++, I constantly shot myself in the foot by using its features.[...] I went back to C. It's a relief.

I'm sorry but I really can't stand this mindset.

For me it is like saying "I got a car, but driving fast was so dangerous... I got scared and stayed a pedestrian for the rest of my life. It's a relief."

No, it's not a relief, it's nonsense. It's not because my car goes 200 km/h that I have to drive it that fast. Same for C++ usage.

Just learn which features you need instead of throwing the baby out with the bathwater.


This is ridiculous. Every remotely powerful programming language gives you an opportunity to shoot yourself in the foot. C++ is so simple, and with every new standard it becomes even better and easier to use. How anybody can possibly complain is beyond me.


A language with four different pointer metatypes for memory management and (unrelatedly) four different pointer type meta-casts is not "simple".

The subdialect of features that you have settled on may be simple, but the language itself is a nightmare of random feature archaeology and creeping throw-it-against-the-wall-and-see-if-sticks extensions, which somehow still fails to provide clean simple tools for polymorphism.


There's a difference between "an opportunity" and "every opportunity". Not all languages have the same amount of footguns.


Its simplicity and ease of use pale only in comparison with its safety and beauty.


One good alternative to ctor+init() is a static factory method with a private ctor. It's just as simple as in C.


Not sure what a "factory method" is, but

    struct Foo {
      private: 
        int a = 0, b = 0;
        Foo() = default;
        Foo(int a, int b): a(a), b(b) {}
      public:
        static Foo init_zeroed() { return Foo{}; }
        static Foo init_ones() { return Foo{1, 1}; }
    }; 
    
will do. You can add as many static methods acting as "named constructors" as you want.
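If construction can fail, the same pattern extends naturally to a factory returning optional<T> (C++17 shown here; the class name and validity rule are illustrative), which gives explicit, checkable error handling without exceptions or half-initialized objects:

```cpp
#include <cassert>
#include <optional>

class Socket {
public:
    // Factory: returns an empty optional on failure instead of throwing.
    // A caller can't obtain a Socket that isn't fully valid.
    static std::optional<Socket> open(int port) {
        if (port <= 0 || port > 65535)
            return std::nullopt;   // construction refused; no zombie object
        return Socket{port};
    }
    int port() const { return port_; }

private:
    explicit Socket(int port) : port_(port) {}  // private: factory only
    int port_;
};
```

Usage reads like C error-checking: `if (auto s = Socket::open(8080)) { /* use *s */ } else { /* handle failure */ }`.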


...and now you know what a "factory method" is.


Almost. One of the biggest advantages of a class factory is that it can return a pointer to an interface rather than a concrete class.


Yes but that doesn't meaningfully change what a "factory" is: a factory returns an already-constructed object or smart-pointer-to-that-object you can work with. Whether or not that object is concrete or an interface to a hidden implementation is less important IMO.


I'm a bit confused about the current state of ZeroMQ and its relatives. There's nanomessage, which for some reason didn't pan out and is superseded by nng(?), but ZMQ still seem to be the most popular option. If I were to pick a broker-less message queue today, what should I pick?


My understanding: Pick zmq if you want something safe and battle-tested with good client library support. Pick nng if you want to be at the forefront of new tech or need some of its unique features over zmq (https://nanomsg.org/documentation-zeromq.html). Performance-wise nng still seems like it's inferior to zmq because it's not as heavily optimized. There are lots of performance-related unresolved Github issues.

nanomsg is not an option because it's essentially abandoned.

I've been using nng for a project of mine and I'm happy with it so far.


Would you say either is something you would put in a decently scaled production system?


zmq is used in many large-scale production systems. It's definitely safe to use and has been stable for a long time.

nng is relatively new. Personally I have not had any issues with it. I'd look through the open Github issues and see if you can find any dealbreakers. The deciding factor may be language/library support. I would also classify it as "safe to use in production" though.


Using ZMQ (from C) at high volume, we found it dropped data, so we abandoned it.


What did you replace it with?


Direct writes locally to ebs and then stream to S3 and async collation after.


nanomsg was abandoned? I know the previous maintainer moved on to nng, but no one is maintaining it now?


I don't know. Maybe it's not abandoned and maintained by someone. I was using the word loosely as in "focus has moved to nng" and it makes little sense to use nanomsg for a new project at this point.

In my experience, if the core contributor moves on and it's not a company, it's only a matter of time before it's fully abandoned. Better to get away from it soon ;)


But gdamore is still maintaining nanomsg https://github.com/nanomsg/nanomsg/commits/master.

gdamore took over from sustrik a long time ago.


I'm planning on using core NATS (as opposed to NATS streaming) for a lot of my message queuing. A brokered message queue is the same as a brokerless message queue where message forwarding (brokering) is not performed...


I believe the placement new operator does exactly what is needed here. It'll take a self-allocated block of memory and run the constructor on it, thereby avoiding the risk of system generated exceptions as execution stays purely in user code. Basically, it allows you to use a C++ constructor like a C init function.
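A sketch of that pattern: the caller owns the storage (here via malloc, whose alignment suffices for ordinary types), placement new runs the constructor in it, and teardown is manual and explicit, just like a C init/deinit pair. The `Widget` type is only an illustration.

```cpp
#include <cassert>
#include <cstdlib>  // malloc / free
#include <new>      // placement new

struct Widget {
    int state;
    Widget() : state(42) {}   // "real" construction logic would go here
};

// Construct a Widget in caller-provided storage, C-allocator style.
int placement_demo() {
    void* mem = std::malloc(sizeof(Widget));
    if (!mem) return -1;              // allocation failure handled like C
    Widget* w = new (mem) Widget();   // runs the ctor; allocates nothing
    int s = w->state;
    w->~Widget();                     // teardown is manual and explicit
    std::free(mem);
    return s;
}
```

Note the caveat: the constructor itself can still throw, so this only moves the allocation out of the C++ runtime; it doesn't remove exceptions from construction.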


Are there any good resources (websites/books) to learn some of the new ways of doing things in C++ 20 vs older. Eg something that shows the old patterns with pitfalls and what replaces them in C++ 20?


Bjarne Stroustrup's short book "A Tour of C++" fits the bill, and was written for the same purpose.


I second the recommendation; reading it now, it really fits the bill for someone coming back to C++ after many years.


Has it been updated for C++20?


"It covers C++17 plus a few likely features of C++20.", from http://www.stroustrup.com/tour2.html


From Java pro view:

In Java it's a sin to do anything complex in constructors. You only assign fields and maybe, maybe, call some pure static function to assign a computed field.

That mainly comes from unit-testing perspective -- so you can mock the parameters for a unit object _before_ the object does anything.

If you really need to do something complex to initialize those fields, it's better to extract it to a different service class (and call it, ahem, a Factory).

So the code looks more like his C example, but wrapped in classes.


This has become one of those “classic” articles. Worth a read if you haven’t. The bits about exceptions reminded me of an article on Nim's “goto” exceptions implementation [1]. The implementation appears to be a combo of error codes and gotos that automatically propagate the exception. It performs well and works on embedded devices, although I’ve only done a smaller project in Nim on an Arduino-based board. It’s odd having a serial port print stack traces. Careful, though: a slow serial line can lead to a slow printout!

Nim with the new ARC GC and move semantics seems to make a nice language for embedded development or for creating re-useable libraries. Rust seems promising but so much embedded work is still C/C++ only at some level, while Nim compiles to C/C++ nicely.

1: https://nim-lang.org/araq/gotobased_exceptions.html


ZeroMQ is a behemoth; for something that just handles communication, it compiles to like 660KB of i386 machine code. Is there an OS kernel with networking, virtual memory, USB and a file system hiding in there?


This is a complete straw man argument against C++ based on exceptions vs return codes. C++ can handle C return codes, and the compiler comment needs some data to back it up. I would give this more than a grain of salt if the author had an 'alternatives considered' section or some other legitimate arguments like "I'm OCD and I just can't handle the fact that if I want to alphabetize the methods in the declaration for a base class it's a breaking change in terms of binary compatibility for a shared library."


Given the article’s age, the issues brought up are reasonable.

Now that it’s 8 years later, I’d summarize it as:

- exceptions were a bad idea (in any language); don’t use them.

- implementation inheritance and constructors / destructors that contain non-trivial logic were a bad idea; don’t use them.

This basically means you shouldn’t be writing idiomatic object oriented code.

Fortunately, C++ supports other programming paradigms, and in current C++, you can avoid both these historical warts; they shouldn’t affect day-to-day systems programming or error handling.


I think there's a lot of merit to this post, but it really shows its age. Modern C++ typically suffers from only the last point raised - exceptions during destruction - but that goes against all good C++ practices.

Overall, the author's approach to exception use is flawed. It's bad form to use exceptions for code flow purposes, as per their example. If you throw an exception in the same block that catches the exception, you've wasted a whole lot of time doing something an IF-statement could solve trivially.

All projects that I have worked on that use C++ has declared that exceptions should be used in exceptional circumstances only - that is, when you cannot handle the error in the current code block and the function interface does not support returning enough information to the caller to detail what went wrong. An exception is more than just throwing your hands in the air and saying "I can't do that" - it has context, purpose, history. Getting rid of exceptions breaks some of the fundamental design concepts of the language.

2-step initialisation is the only way around errors during construction whilst forbidding exceptions. But 2-step init breaks a fundamental rule of RAII - the constructor acquires the resource and establishes all class invariants or throws an exception if that cannot be done. If the constructor is not allowed to throw exceptions, as per the language's standardised interface for constructors, then my library or application needs to be modified to support whatever process a third party library has defined as appropriate. There is a wide surface for bugs to creep in, let alone costing me time, money and effort in supporting whatever interface they've come up with.

If an object has been constructed, it should be in a valid state unless an exception has been thrown, in which case I'm told what went wrong. If I can fix it before the stack is fully unwound, I can save the day. but if there's nothing I can do, the exception has to roll to the top as that's the only other option. That then begs the question; should I catch all exceptions at the top level and prevent crashing, or should I crash and allow whatever system I'm running under to restart me? That depends on the project, but typically I'd let it go. systemd should bring me back, docker should restart me, kubernetes should restart my pod, etc etc. If I focus on what is within my control, and delegate everything else, the system will be cleaner and much more maintainable.

I've never come across a situation where exceptions during destruction is a problem, but am very interested in any examples. C++ standards define that you _can_ throw exceptions, but you shouldn't for the exact reason raised in the article - the process will be terminated as there's nothing else that can be done. If there aren't any destructors containing the throw keyword, it's not likely to throw an exception - OOM or other system exceptions are still possible, but why are you allocating memory in a destructor? destructors just need to release resources and clear down the object, it shouldn't be requesting more resources. Thinking about saving the object state before exit? wrong place to do it.


>2-step initialisation is the only way around errors during construction whilst forbidding exceptions.

Static methods returning optional<T> or some more refined either type work fine and don't suffer from the dreaded zombie-state issue.

Having spent the last week fixing bugs after a refactoring of a class hierarchy that would sometimes leave base classes half initialized, I say: please, for the sake of maintainability, don't use init methods!


A problem arises if you use RAII for resources other than memory, for example files. You want to put the close function in the destructor, but close can throw an exception. http://www.cplusplus.com/reference/fstream/ofstream/close/ (Of course the solution is easy: you wrap close in a try/catch block. But, as always, all the solutions are easy in C++; this is just one of those surprising corners that is probably done wrong more often than not.)
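A sketch of that try/catch wrapping (the class is illustrative): the destructor swallows any error so an exception never escapes during unwinding, while an explicit close() remains available for callers who want to see the failure.

```cpp
#include <cassert>
#include <fstream>

class LogFile {
public:
    explicit LogFile(const char* path) { out_.open(path); }

    // Explicit close for callers who care about the result; this can
    // throw if the stream's exception mask has badbit/failbit enabled.
    void close() { out_.close(); }

    ~LogFile() {
        try {
            if (out_.is_open()) out_.close();
        } catch (...) {
            // Best effort only: never let an exception escape a destructor.
            // Real code might log here; there's nothing else to be done.
        }
    }

private:
    std::ofstream out_;
};
```

Note that by default `std::ofstream` doesn't throw at all (errors set failbit), so the catch only matters once `exceptions()` has been used to enable throwing.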


Interesting corner case that I can't say I've ever seen before.

fstream is a bit of a mess. Even the documentation for close is a bit contradictory. Its exception safety states that an exception is caught and rethrown after closing the file if one is thrown by an internal operation. That to me sounds like the actual resource being managed will be closed (assuming the fd is valid), but any data may not be flushed out of the write buffer.

Sounds to me like iostream needs some serious attention in future specs, but that's unlikely to happen, unfortunately. Creating new toys > fixing old ones.


Fstream is hardly a paragon of good design. Then again, there is no reason to call close in the destructor, as the fstream destructor itself will call close.

If you care about reliability and want to make sure your stream is flushed, you probably want an explicit commit interface, and to reserve the destructor for rollback (which must not be able to fail).
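A sketch of that commit/rollback split (class and error strings are illustrative): all fallible work - flushing and closing - happens in commit(), where the caller can react, while the destructor only cleans up and is not allowed to fail loudly.

```cpp
#include <cassert>
#include <cstdio>
#include <stdexcept>

class CommittingWriter {
public:
    explicit CommittingWriter(const char* path)
        : file_(std::fopen(path, "w")) {
        if (!file_) throw std::runtime_error("open failed");
    }

    void write(const char* s) {
        if (std::fputs(s, file_) == EOF)
            throw std::runtime_error("write failed");
    }

    // All fallible completion work happens here, where the caller
    // can catch and handle the error.
    void commit() {
        std::FILE* f = file_;
        file_ = nullptr;  // destructor now has nothing to roll back
        if (std::fflush(f) != 0 || std::fclose(f) != 0)
            throw std::runtime_error("commit failed");
    }

    // Rollback path: best effort, errors deliberately ignored.
    ~CommittingWriter() {
        if (file_) std::fclose(file_);
    }

private:
    std::FILE* file_;
};
```

If commit() is never called, destruction silently discards the handle; a fancier version might also delete the partially written file.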


> I've never come across a situation where exceptions during destruction is a problem, but am very interested in any examples. C++ standards define that you _can_ throw exceptions, but you shouldn't for the exact reason raised in the article - the process will be terminated as there's nothing else that can be done. If there aren't any destructors containing the throw keyword, it's not likely to throw an exception - OOM or other system exceptions are still possible, but why are you allocating memory in a destructor? destructors just need to release resources and clear down the object, it shouldn't be requesting more resources. Thinking about saving the object state before exit? wrong place to do it.

std::ofstream is a good example. Flushing and closing the file happens in the destructor if not done explicitly. Not being able to throw IO exceptions from the destructor breaks object encapsulation by requiring explicit error-checking. One could argue that it's simply a bad abstraction; an open file handle is not in a fully consistent state and so a C++ object pretending to be valid while representing a file handle may have been a poor choice as opposed to e.g. record-based or transactional IO where writes could be atomically flushed or fail.

In general any objects representing IO can be victims of error conditions in destructors because external state can change unexpectedly between the last valid state of the C++ object and its destruction, especially if the object abstracts away some non-deterministic behavior.

Destructing objects may also require modification of other data structures: Updating special containers or indexes, unregistering from a queue, waiting for thread completion, etc. This can force destructors to eat exceptions that should otherwise be passed up to callers.


Another way to support RAII without throwing exceptions is to somehow make the error case an explicit configuration of the class with well-defined behavior. For example, in a "File" class with no file descriptor, you could return a defined "error" from any function call that would require a file descriptor. Receiving such a return value or return structure would then signal that the File object is not in a working state.


yes.

> when you cannot handle the error in the current code block and the function interface does not support returning enough information to the caller to detail what went wrong

Exceptions are, fundamentally, an alternate way to return from a function. That's why the general practice is to avoid using them for control flow - there's enough in the language already to support what you want to do, we don't need to instrument the entire stack when I could return a negative value to indicate the error.

The engineering problem to solve is deciding when to turn a void return into a boolean or enum. Just because there is space in the interface for an error code, doesn't mean it's the right choice. But likewise, just because exceptions are available, doesn't mean every error should be thrown.


It should ultimately be a matter of readability. If it's more readable for you to use Exceptions, by all means use them. But if you're working in a domain where you have control over the type of the return, then choose a range of return values which contains a representation of all possible results of your function and not just the "everything is working" result to be explicit. You don't have to return only primitive types. You can return data structures too, so you can encode the error as part of a return type and treat it and the rest of the data on the same level.

In short, it's a little simplistic to go void->integer or void->boolean. You should really be going void->struct or void->class. That gives you wide latitude in defining the range of your output types.
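A minimal sketch of that void→struct idea (hand-rolled here; C++23's std::expected standardizes the shape): the return type itself carries both the outcome and the payload, so the caller must confront the error case explicitly.

```cpp
#include <cassert>
#include <string>

enum class ParseError { Empty, NotANumber };

// Result type: either a value or an error, flagged explicitly.
struct ParseResult {
    bool ok;
    int value;         // meaningful only when ok
    ParseError error;  // meaningful only when !ok
};

ParseResult parse_int(const std::string& s) {
    if (s.empty())
        return {false, 0, ParseError::Empty};
    for (char c : s)
        if (c < '0' || c > '9')
            return {false, 0, ParseError::NotANumber};
    int v = 0;
    for (char c : s)
        v = v * 10 + (c - '0');
    return {true, v, ParseError::Empty};
}
```

The caller writes `if (auto r = parse_int(input); r.ok) { use(r.value); } else { report(r.error); }` - no exceptions, no out-parameters, and the full range of outcomes is visible in the signature.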


I think that's something called 'the Maybe pattern (a monad)'.


Writing in C++ means that the C++ runtime library you are using becomes a dependency. Probably not an issue if you have the source code to all the libraries you use, but maybe an issue if you are a vendor and try to support not only all possible modern compilers, but also everything they target (platforms, runtimes, models, etc.).

With just "C", not "C" only at the interface part, you can avoid that. Even to the point that your binary artifact (library) can be reused safely between DEBUG and RELEASE. Alternatively you need to ship this as shared statically linked to CRT library (which lots of vendors do), but it's general pain in the ass... for all platforms.

So that is the hidden cost of C++, even if your interface is still "C".


Not sure why this got downvoted? Anyone care to comment? I use C++ daily in my job, and I'm very happy with its developments - especially C++11, C++14, C++17 and what's coming in C++20 - but static-linking problems are real, and a pain to solve long term. We're mostly on MSVC, where some teams are still on VS2017, others VS2019 - and while they are compatible, the compatibility works only if your final app is compiled with the latest compiler of each version. That means our team (doing mostly libraries, and providing precompiled ones) should use the lowest version, but then our CI infrastructure might use an even earlier one.

With "C" there is less to care there, as most is set in stone (unless you use some really compiler vendor specific extensions, that appear during link time).

Just an example - https://devblogs.microsoft.com/cppblog/making-cpp-exception-... - this is a great improvement, but it also means that if you've compiled your lib with VS2019 and it throws an exception inside an application that uses VS2017, it won't work.


That exact same problem exists in C in the exact same way. It's somewhat less of an issue just because C is essentially frozen in time, but it has the same inherent problems & design issues.

But if you statically link your C++ runtime & only expose C interfaces then you're fine. You can be "just like C" only at the ABI boundary, that's well supported and works great. It's a quite common setup even.

Like for your particular example I don't think the solution to "in some situations I can't throw exceptions across an ABI boundary" is "switch to a language where exceptions don't exist at all and error handling is 'good luck'". It'd be nice if there was a way to flag an exported symbol as being ABI sensitive & letting the compiler patch up differences, that'd be a nice feature. Nuking it entirely seems like the opposite of a solution though? But you're still free to do that in C++ if you want. noexcept all your exported symbols & use error returns, just like C. Or even just compile with exceptions disabled entirely.
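The "noexcept all your exported symbols & use error returns" approach can be sketched like this. It's a hypothetical example (the `Connection`/`conn_*` names are invented); in a real library the struct would be opaque behind a C header, but the shape of the boundary is the same:

```cpp
#include <new>
#include <string>

// Internal implementation is free to use C++ types.
struct Connection {
    std::string host;
};

// Flat C-compatible interface at the ABI boundary: opaque pointers and
// integer error codes, and no exception ever crosses the boundary.
extern "C" {

// Returns 0 on success, nonzero on failure.
int conn_create(Connection** out, const char* host) noexcept {
    try {
        *out = new Connection{host};
        return 0;
    } catch (const std::bad_alloc&) {
        return 1;  // out of memory
    } catch (...) {
        return 2;  // any other internal failure
    }
}

void conn_destroy(Connection* c) noexcept {
    delete c;
}

}  // extern "C"
```

The `try { ... } catch (...)` at every exported entry point is what makes the runtime-version mismatches described above a non-issue: unwinding never leaves the translation unit that was compiled with a single, known runtime.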


"But if you statically link your C++ runtime" - you need to take careful measures so your symbols don't leak as visible. Doable, but it takes some time. It might also be easier with gcc/clang - e.g. -fvisibility=hidden - but there's nothing like that in MSVC AFAIK.


MSVC symbols are hidden by default, but why would symbols being visible be a problem?


That's not true at all. Sorry, they are completely visible unless all of them are "static". Above all, it also leaks which imports need to be done - e.g. that I require "_CxxFrameHandler4" from MSVCP140.dll or MSVCP140_1.dll.

Again I'm talking about statically linked libraries, linked to the dynamic CRT (MSVCRT - e.g. /MD, not /MT)


@mods can we change the title to "Why I should have" instead of "why should I have"?

The current title reads like a question with the question mark omitted ("Why should I have used C, not C++?") which, at least to me, implies that it's saying C++ was the right decision. The thesis of the post looks like the opposite.




The only thing I dislike about C++ is that I can't declare private fields privately. I can use private: but it's still there in the header... in C I can just declare struct Foo; in the header and then implement it without exposing its guts.


What do you mean? You can do exactly the same thing in C++.

You can't instantiate a value of that type [1], since that requires knowing its size, but via base classes and pointers you can still call functions on it.

[1] but see the letter/envelope idiom.


Is that a security concern, an obscurity concern, or a readability concern?


Usability. Say I use an external library Bar, and I declare

private: Bar bar;

The end user will have to install the Bar library as well to use my library, despite Bar being embedded in my static library. I dislike that.


Are you familiar with the PIMPL idiom?


Very nice - I do that but had never heard of PIMPL.


I wouldn’t use PIMPL unless you get something out of it (and “not letting anybody see the private stuff” counts as getting something). It’s a hassle, but it’s also the standard answer to the problem.
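For reference, the PIMPL idiom being discussed looks roughly like this. A minimal sketch with an invented `Widget` class; the two sections would normally live in separate .h and .cpp files, so the header reveals nothing about the private members:

```cpp
#include <memory>
#include <string>

// --- widget.h: only a forward declaration of the private parts ---
class Widget {
public:
    Widget();
    ~Widget();
    void set_name(const std::string& name);
    std::string name() const;
private:
    struct Impl;               // defined only in the .cpp file
    std::unique_ptr<Impl> impl_;
};

// --- widget.cpp: private members (and their dependencies) live here ---
struct Widget::Impl {
    std::string name;  // a third-party type like Bar could hide here too
};

Widget::Widget() : impl_(std::make_unique<Impl>()) {}
Widget::~Widget() = default;  // out-of-line, where Impl is a complete type

void Widget::set_name(const std::string& name) { impl_->name = name; }
std::string Widget::name() const { return impl_->name; }
```

This addresses the grandparent's complaint directly: a header-only dependency on Bar disappears, at the cost of one allocation and an extra indirection per object.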


Somehow it reads like a love letter to Go, but I bet he wouldn't like the Garbage Collector ;-)


When I did C++ I always ended up using a very small subset of it. No exceptions, no inheritance besides a few interfaces. But in general I agree with his point. There are a few things C could improve though.

One would be a way to automatically clean up resources. Maybe something like “defer” in Go.

I wonder if templates would fit into C. STL is super useful.

A real string type would be good too.
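For comparison, the "defer"-style cleanup wished for above is what C++ destructors (RAII) already give you; a minimal hand-rolled scope guard, with invented names, looks like this:

```cpp
#include <functional>
#include <utility>

// Minimal scope guard: runs a callback when it goes out of scope,
// approximating Go's `defer` via a C++ destructor.
class ScopeGuard {
public:
    explicit ScopeGuard(std::function<void()> fn) : fn_(std::move(fn)) {}
    ~ScopeGuard() { if (fn_) fn_(); }
    ScopeGuard(const ScopeGuard&) = delete;
    ScopeGuard& operator=(const ScopeGuard&) = delete;
private:
    std::function<void()> fn_;
};

// Usage: the cleanup runs on every exit path, including early returns.
int count_with_cleanup(int* cleanup_counter) {
    ScopeGuard guard([&] { ++*cleanup_counter; });  // "defer"
    return 42;  // guard fires after the return value is computed
}
```

In C the closest equivalents are goto-cleanup chains or the non-standard `__attribute__((cleanup))` extension in GCC/Clang, which is part of why this keeps coming up as a wished-for language feature.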


Yeah, I know what you mean by using a small subset of C++.

To be perfectly honest (and I'm just a hobbyist who writes simple tools for myself... so I can get away with this a little easier than a professional could) but I got really frustrated when I was learning C++ and almost decided it wasn't worth it. Then I decided that I'd basically just treat C++ as C with strings, classes, and a few nifty, ready-made containers and useful algorithms. And doing this allowed me to enjoy the language quite a bit and do some very fun and useful things in it, all without becoming overwhelmed by its size/complexity. But again... this is as a hobbyist, nothing more.


All the things you mentioned can be found in Zig! [1]

Its error-handling semantics are very similar to Go's, but unlike Go, you have to handle the error. It has constructs like `defer` and `errdefer` which let you clean up at the end of a scope or on error respectively. It has support for generics and metaprogramming without having to resort to preprocessor macros.

It just released v0.6.0 and is actively in development with an engaging and helpful community.[2]

[1]: https://ziglang.org/ [2]: https://github.com/ziglang/zig/wiki/Community


Zig definitely looks interesting. I may use it in a smaller project to try out.


There was a language called Clay that was basically C with templates, operators, destructors and move semantics (with better syntax). It produced C-ABI-compatible compilation units and could pretty much slide into a C project. It used LLVM but has been defunct for a long time. That's a shame, because I think a lot could have been learned from at least looking at it before people came up with their new LLVM-based native languages.


That new dialect of C exists, and it's called C++!


By now he probably realizes that the Java version is what he was looking for all along: https://github.com/zeromq/jeromq

Why C/C++ coders don't even write "Hello World" in Java SE before drifting away into rewriting the ball bearing (1907) part of the wheel is beyond me.

The Throwable class is exceptional in every way, it wastes almost nothing and gives the programmer the ability to catch all problems that occur in the VM.

  public static void main(String[] args) throws InterruptedException {
    while(true) {
      try {
        // do everything, and always throw unchecked
        // Exceptions unless you need the programmer to
        // handle the problem
      }
      catch(Custom c) {
        // here we can react to the exact problem the
        // parent programmer devised
      }
      catch(Exception e1) {
        // something "normal" happened that would have
        // generated completely random behavior with
        // machine code
      }
      catch(Error e2) {
        // something "critical" happened that would have
        // generated a segfault with machine code!!!
      }
      finally {
        Thread.sleep(10);
      }
    }
  }
The above code will never cause you to lose sleep. You can pretend there is a way to do this in any other programming language, but only C# would be able to, and it is lackluster in so many other domains.

C will not help you with exceptions or concurrency, it will punish you into believing Rust is the solution.

C++ exceptions are complete garbage; the only good parts of C++ are the std APIs (not the implementations, apparently) and objects for structure, not for data (cache misses).

Java does not crash and has the same performance as machine code; that is why it is the leading server language in the world.

Goodbye karma! Xo

Edit: that was fast!

Edit2: Please comment if you downvote.


> Why C/C++ coders don't even write "Hello World" in Java SE before drifting away into rewriting the ball bearing (1907) part of the wheel is beyond me.

I write C++ & Java on a daily basis.

I don't know why I would ever willingly choose Java for a personal project. It's not fun & it's inflexible. And the C++ side of me cringes with how inefficient & slow "simple" things are. Especially for anything that doesn't last long enough for the JIT to come along and make it not horrendous.

If I want to pay for a JVM I'd at least go with Kotlin. But I'd take C# over Java if I had a choice.

And I don't know why you think C++ means "rewriting the ball bearing"? There's a pretty decent standard library these days, and no shortage of libraries to add anything else. Depending on what you're doing there's a richer set of available libraries for C/C++ than there are for Java even (such as anything to do with graphics).

> Java [..] has the same performance as machine code

It so incredibly doesn't. Maybe when value types are added then Java can regain some ground on the performance front, but right now you're absolutely paying a price for Java. Often a price well worth paying, but still a price.


I find Java a lot more fun if you don't use getters/setters for everything.


> Java does not crash

    #
    # A fatal error has been detected by the Java Runtime Environment:
    #
    #  SIGSEGV (0xb) at pc=0x00007fae709d4a10, pid=18032, tid=140386298201856
    #
    # JRE version: Java(TM) SE Runtime Environment (8.0_60-b27) (build 1.8.0_60-b27)
    # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.60-b23 mixed mode linux-amd64 compressed oops)
    # Problematic frame:
    # V  [libjvm.so+0xa8aa10]  Unsafe_SetNativeLong+0xf0
    #
    # Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
    #
    # An error report file with more information is saved as:
    #
    # /opt/production/hs_err_pid18032.log
    #
    # If you would like to submit a bug report, please visit:
    #   http://bugreport.java.com/bugreport/crash.jsp
    #


Main reason: C/C++ is compiled and can be faster than Java, basically because you control memory and object lifetimes, and there's no garbage collection.

If C/C++ speed and low-level management are not required, I'd rather go with Python or, if I want static typing, C# than Java. Python gives easy development and broad library support, and C# has more features than Java (one being way better metaprogramming support).


As a Python developer, I would only use a Java library as an absolute last resort, possibly even below implementing it myself or abandoning the software.



