Prohibiting exceptions is a toxic antipattern. Once you have more than one thread you want to propagate fatal errors in a sane way. ("Just crash the whole program, #yolo" is not a sane way.)
Exceptions are not a sane way to handle many error conditions either. Predictably and efficiently handling fatal error conditions will require custom handling code regardless. For many types of software, the downsides of exceptions are not offset by a corresponding benefit in real, reliable systems. I've worked on systems built both ways, and I have a hard time recommending exception use.
On the other hand, once your embedded system is sufficiently large, people will want to use (and inevitably will use at some point) standard containers such as std::string or std::vector. And without exceptions, all of those might invoke UB at any time. (I have yet to see a standard library that understands and honors -fno-exceptions; usually they just drop all try-catch blocks and all throws.)
Could you elaborate on that? I'd love to see an example of this. Are you saying that even relatively simple code (using e.g. std::vector) could easily cause UB if -fno-exceptions is enabled?
Your compiler vendor has to pick a (reasonable) behaviour though and apply it consistently, and while they are not required to document it (IIRC - I think that's just for implementation-defined?) you can probably get them to tell you if you have a good support relationship with them. Or you can just figure out what the compiler does, and hope they don't change the behaviour too much with the next release :-)
Most standard containers have no way to communicate allocation failure to the caller in the absence of exceptions (think of constructors that take a size). Worse, the implementations I’ve seen would eventually call operator new, assuming it would throw if it fails. That is, subsequent code would happily start copying data to the newly created buffer, without any further tests if that buffer is valid. In the absence of exceptions, that won’t work.
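Roughly, a size-taking constructor conventionally looks something like this inside the library (simple_vector and its members are made up for illustration, but the shape is typical): everything after the allocation has no failure branch, because the throw *was* the failure branch.

    #include <cstddef>
    #include <new>

    template <typename T>
    struct simple_vector {
        T*          data_;
        std::size_t size_;

        explicit simple_vector(std::size_t n)
            // operator new is assumed to throw std::bad_alloc on failure
            : data_(static_cast<T*>(::operator new(n * sizeof(T)))),
              size_(n)
        {
            // With exceptions, control only reaches this point if the
            // allocation succeeded. With -fno-exceptions there is no error
            // path at all, and this loop happily constructs objects through
            // an invalid pointer.
            for (std::size_t i = 0; i < size_; ++i)
                new (data_ + i) T();
        }

        ~simple_vector() {
            for (std::size_t i = 0; i < size_; ++i)
                data_[i].~T();
            ::operator delete(data_);
        }
    };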
I guess my hope was that the program would just terminate immediately the instant it tries to throw an exception while -fno-exceptions is set, thus ideally preventing any further action from the program.
Well, what do you expect std::vector<T>::at() to do if the index is out of bounds and it can't throw exceptions? Or std::vector<T>::push_back() if it can't reallocate to a larger size?
These are just some obvious cases. Not to mention that any use of operator new is UB if memory allocation fails and the system can't throw exceptions.
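For concreteness, a minimal sketch of those calls (the function and its arguments are made up for the example); the comments describe the behavior the standard defines via exceptions, which is exactly what -fno-exceptions takes away:

    #include <cstddef>
    #include <new>
    #include <vector>

    // i and n come from the outside world, so neither bound is known up front.
    void demo(std::vector<int>& v, std::size_t i, std::size_t n) {
        int x = v.at(i);      // if i is out of bounds, at() is defined to throw
                              // std::out_of_range; without exceptions the
                              // library has no conforming way to report that
        v.push_back(x);       // may reallocate; allocation failure is defined
                              // to be reported via std::bad_alloc
        int* p = new int[n];  // plain new reports failure only by throwing
        delete[] p;
    }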
In principle, I would agree with you, but the biggest problem is that the whole C++ ecosystem works the opposite way.
The main reason people use C++ over safer languages like Java is performance (memory, CPU speed, real-time constraints, etc.). And C++ the language is designed for performance, but only with the expectation of a very powerful optimizing compiler. Most C++ std classes are extraordinarily slow and inefficient if compiled without optimizations - certainly much slower than Java, for example.
So, C++ is not really C++ without aggressive optimizing compilers. And one of the biggest tools that compiler writers have found to squeeze performance out of C++ code is relying on UB not to happen. That essentially gives the optimizer some ability to reason locally about global behavior: "if that value were nullptr, this would be UB, so that value can't be nullptr so this check is not necessary". And this often extends to the well defined semantics of standard library classes outside their actual implementation - which rely on exceptions.
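A hedged sketch of that kind of reasoning (the function is made up for the example):

    int read_length(const int* p) {
        int len = *p;      // if p were nullptr this dereference would be UB,
                           // so the optimizer may assume p != nullptr here...
        if (p == nullptr)  // ...and is then entitled to delete this "dead" check
            return 0;
        return len;
    }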
So, to get defined behavior out of the std classes in the absence of exceptions, either you disable many optimizations entirely, or you carefully write the optimizer to have different logic based on the no-exceptions flag. But all C++ committee members and C++ compiler writers believe exceptions are The Right Way, for every situation. So getting them to do quite a lot of work to support someone doing the wrong thing would be very hard.
In safety critical embedded systems there is no such thing as "program just terminating". The program is the only software that is running on your device, and you need to degrade execution to some safe state no matter what. Every error should be processed, ideally right where it occurred (so I am not a great fan of exceptions either).
at() with exception support is pretty much equivalent to a method returning an Option<T>. More precisely, it gives a superset of the functionality of returning Option<T>. If you declare the call site noexcept(), you should even get some compiler checking to make sure you handle the exception.
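Roughly what I mean by "superset": with exceptions available you can always wrap the throwing at() back into an Option-style interface (get_at here is just an illustrative helper, not a standard facility):

    #include <cstddef>
    #include <optional>
    #include <stdexcept>
    #include <vector>

    template <typename T>
    std::optional<T> get_at(const std::vector<T>& v, std::size_t i) {
        try {
            return v.at(i);       // throws std::out_of_range when i is invalid
        } catch (const std::out_of_range&) {
            return std::nullopt;  // map the exception back to "no value"
        }
    }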
> If you declare the call site noexcept(), you should even get some compiler checking to make sure you handle the exception.
Which compiler does that? At least g++ does not, and it's not what the specification dictates either.
I can't see how it is a superset either. If the library returns an Option, the calling code can process it as it pleases, including throwing an exception. On the other hand, if the library only indicates errors by throwing an exception, it cannot work with a caller that is built with exceptions disabled.
Oops, you're right, the whole point of noexcept is to promise to the compiler that you know in practice exceptions can't happen, I got confused...
Otherwise, I should point out I explicitly said "at() with exception support enabled". It's also important to note that the ability to disable exceptions is not a feature of C++; the C++ spec assumes exceptions work (just like the Java or C# or Go specs). It is a feature of certain C++ implementations that they support this mode, just like they support other non-standard features (compiler intrinsics, various #pragmas, etc).
Still, even with exception support enabled, I can't see what you can do with a function that throws that you couldn't do, in fewer lines of code, with a function that returns not Option<T> but Result<T, E>.
Disabling exceptions is indeed not in the standard, probably because of Stroustrup's position (I respect many of his opinions, but cannot agree with this one) - but it's what every sane compiler, especially one targeted at embedded systems, will support. Exceptions are designed for a controlled environment where a terminating program returns to somewhere that will maybe add a line to a logging system and restart it automatically. They only complicate things when terminating is an unacceptable scenario.
Yes, Result<T, E> should be equivalent in power to exceptions (the missing E part is why I was saying it's a superset of Option<T> functionality).
Regarding exceptions being more code, I very much don't agree. Even for embedded apps, the pattern of "if this fails, wind back up to some top level event loop" is quite common, and exceptions give it to you for free if you're also using RAII. In contrast, with Result<T, E> you have to write code at every level of the stack to handle it. Code which gets particularly ugly when you combine it with things like map() or filter().
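A rough sketch of that pattern, with made-up functions: the intermediate layers contain no error-forwarding code at all, and the top-level loop is the only place that mentions the failure.

    #include <cstdio>
    #include <stdexcept>
    #include <string>

    // Hypothetical work function several layers down; any layer may throw.
    void handle_message(const std::string& msg) {
        if (msg.empty())
            throw std::runtime_error("empty message");
        std::printf("handled: %s\n", msg.c_str());
    }

    void process_event(int i) {
        // No error-forwarding code here; RAII-managed resources in this
        // frame are released during unwinding.
        handle_message(i % 3 == 0 ? std::string{} : "msg " + std::to_string(i));
    }

    int main() {
        for (int i = 0; i < 6; ++i) {   // stands in for the top-level event loop
            try {
                process_event(i);
            } catch (const std::exception& e) {
                std::fprintf(stderr, "event %d failed: %s\n", i, e.what());
                // swallow the failure and keep servicing events
            }
        }
    }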
Prohibiting exceptions is necessary in the embedded space for, err, space and allocation reasons.
Just crash the system and reboot the MCU can make sense depending on the application. And where it can't, you need to take the same kind of care for handling every single problem at the call site, or correctly propagating it to a layer that can handle it.
Exceptions aren't special here, they are simply a way to do error handling and recovery.
It's the kind of rule that doesn't make sense for applications, but when you've got tightly constrained memory limits, it makes sense.
I have personally removed exceptions (to be fair, there were only a few) from an embedded application and introduced the -fno-exceptions flag. The binary size was reduced by ~20%, which can be important if you are doing SW updates to space and have a link budget... Also, the reduced code size is more cache friendly on a system with rather limited cache memories.
Well, the problem was most likely with the code then, not the language feature -- because the changes are quite localized to throw sites and catch sites, not spread throughout the code.
There are cases in MISRA's problem domain where the software watchdog is part of the same program, and fully crashing that program is a different, more severe error than alternatives.
It depends on the problem domain. For automotive embedded software? Definitely not. But Google, for instance, bans them in much of its server code under the principle that exceptions should denote circumstances where the entire program cannot recover and when a server node cannot recover, logging noisily and failing fast so that it can be restarted into a working state is preferable to trying to unwind a bad state.
Given that constraint, they conclude that the overhead of maintaining exception unwinding and the non-local control flow aren't worth it.
The classic GSG prohibition on exceptions has more to do with a lack of exception safety in their legacy code base than anything else. Promptly-crash-on-failure can be achieved by adopting a "don't catch exceptions" style, with significant advantages of not throwing away much of the strengths of RAII or needing the evil hack that is two-phase initialization.
You can, for all intents and purposes, avoid two-phase init with factory functions (which is how Go does it) and private constructors.
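A hedged sketch of that pattern; Connection, Create, and open_socket are made up for illustration. The constructor cannot fail, so there is no second init() step to forget, and failure is reported without exceptions:

    #include <optional>
    #include <string>

    class Connection {
    public:
        // All fallible work happens here, before an object ever exists.
        static std::optional<Connection> Create(const std::string& address) {
            int fd = open_socket(address);  // hypothetical fallible step
            if (fd < 0)
                return std::nullopt;        // report failure without throwing
            return Connection(fd);          // constructor only stores state
        }

    private:
        explicit Connection(int fd) : fd_(fd) {}  // private: callers must go through Create()

        static int open_socket(const std::string& address) {
            return address.empty() ? -1 : 3;      // stub for the example
        }

        int fd_;
    };

    // Usage: auto c = Connection::Create(addr); if (!c) { /* handle failure */ }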
(To my memory, though, Google didn't throw away the benefits of RAII by disallowing exceptions... They discouraged complicated behavior in constructors, so the only thing a constructor could fail on was OOM, which was supposed to crash anyway.)