If you are interested in this, take a look at NaiveSystems Analyze [0] which is a free and open source static analyzer for checking MISRA compliance etc.
Disclaimer: I'm the founder.
It has been battle-tested with real customers in automotive, medical devices, and semiconductors. AFAIK this is the first FOSS tool that achieves commercial-grade standards: extensive ruleset coverage, and low false positives thanks to symbolic execution (which Coverity relies on heavily) and an SMT solver, among other things.
2023 is supported in the enterprise edition but not in the community edition yet. We gradually move features from EE to CE as new features are added to EE, so you can expect 2023 support in CE in the future :-)
I'm not sure about the timeline because that depends on a lot of things. In the meantime I guess you could start with AUTOSAR C++14. MISRA C++:2023 is essentially built on top of that.
For the enterprise edition, simply email hello[AT]naivesystems.com as noted in the README on GitHub.
That’s not really a big deal in this context. Getting over the 2011 change was a big jump, and it basically made modern C++ usable for many more projects.
Lots of toolchains are pretty conservative also, 2017 is pretty new there. And much like the ISO standard, updates should be a lot easier now.
You will hardly find a C++ compiler that is 100% C++17 compliant, even less so with the later standards.
Currently all C++ implementations are lagging behind ISO, as all major commercial contributors focus on other stacks and take C++14 / C++17 as good enough for their in-house use of C++.
The big three have lost steam, while everyone else is even further behind.
Is there anything that GCC is missing? Looks complete to me [0]. Everything besides modules is also there for C++20 [1]. I'm at an automotive startup and we've been using a few C++26 features since summer already.
Are you ISO26262-compliant and your software is ASIL-A or higher? If no, why would you care about MISRA? If yes, how did you qualify a recent GCC?
Safety-critical automotive software development depends on tool qualification, which is often a lot of tedious work. There must be a spec, there must be tests, there must be proof that the tests cover the full spec, you must have a process to inform users about bugs. There is no free compiler which provides this.
Not yet compliant, but working on it. There's quite a few ASIL-D components too. We don't use GCC but rather tip-of-the-tree LLVM. Embedded is a completely separate and different mess using whatever the vendor ships for a particular microcontroller.
So far our safety guys haven't seen any issues with tool qualification besides requiring everything to be documented and the whole system to be re-tested in case of any tool changes.
This metric isn't really useful. It disregards any value of, say, a compiler implementing 99% of all features ever announced, including draft C++26 features, just because a single missing C++17 feature prevents it from claiming full compliance. By that logic we shouldn't bother reading about anything but old standards.
The big three already support features from C++23 worth learning about: https://en.cppreference.com/w/cpp/compiler_support Obviously you can't always use them, but most places with a C++ compiler today support at least some useful features past C++17.
Your comment seems to have split in two, so I'll respond to this one as it's the longer of the two.
I agree it is annoying when compilers don't support the same features, but my point is that the question isn't whether there is some unimplemented feature from that revision in many compilers; it's whether the feature you want to use is commonly supported. As an example, if you want widely implemented features like <=> from C++20, then it doesn't really matter that most compiler stdlibs don't support riemann_zeta from C++17. Waiting for them to do so only sets you back years or decades, because you're waiting for arbitrary features you'll likely never use to be universally supported too.
For those unfamiliar, these are standards published by the Motor Industry Software Reliability Association (MISRA) that provide guidelines for C++ and aim to promote safety, security, and reliability in embedded system software.
One thing that surprised me a bit was that exceptions are more or less recommended in these guidelines. In the section "4.18 Exception handling", it is mentioned that "most compilers provide options (such as -fno-exceptions) that can be used to disable exceptions in order to eliminate the code and size overheads". Then it goes on to describe how this makes it impossible to comply with a bunch of other rules.
Since these are "guidelines for the use of C++17 in critical systems", I would have expected them to prohibit exceptions due to their non-deterministic nature. On a side note, dynamic memory is prohibited (rule 21.6.1).
I haven't read the earlier MISRA C++ guidelines, so I don't know if this has changed.
Exceptions are not non-deterministic, although the control flow can of course be a bit non-obvious.
I think if you disallow exceptions you run into other problems. How can a constructor fail now? You need some kind of flag showing whether an object is fully constructed. But then that goes against the idea of making illegal states unrepresentable.
I do wish C++ had more tools to rein in exceptions. Maybe an "onlythrows X" annotation that says only these very specific exceptions may escape from a block, and the checker will complain if it cannot prove that only X can be thrown. The opposite of checked exceptions, basically.
The constructor error problem is easily solved by using factory functions and two-phase construction. The problem is that the standard library relies on exceptions quite a bit, and major parts of it become unusable.
> If the copy constructor can fail and you don't want that, then delete it?
You're trivialising just how deeply embedded exceptions are into the design of the language.
I gave just one example and it was not meant to be exhaustive, just one 'gotcha' that you won't find out till runtime and your program starts (worst case) giving you slightly incorrect results without you knowing about it...
So, yeah, if you want to do without exceptions (without having your program execute random code) you need to know in advance which special cases to handle: unintended copy construction, failures in overloaded operators, which parts of the standard library can be used and which cannot, or which C++ libraries can be linked and which cannot.
All of which is perfectly possible, but taken together is hardly "easy". It's tedious, error-prone, bloated ... but hardly what someone would call "easy".
C++ is complicated, I get it. Things that are "easy" in other languages are "hard" in C++. That doesn't mean that writing C++ code that can't throw isn't something that tens of thousands of engineers are doing every day. One could argue that all of C++ is tedious, bloated and error-prone.
I don't think this is true, but I don't know this for sure.
From what I have read, it seems that 'only' Bloomberg, Meta, and Microsoft use C++ with exceptions.
And since both Microsoft and Meta are adopting Rust in their services, it seems to me that they are looking for another language than C++. (Why else adopt a new language?)
LLVM has been for quite some time driven by Apple and Google.
WebKit, another Apple child.
Qt has to support environments where exceptions are not allowed, otherwise it would lose customers, especially since Qt is older than C++98.
gcc was initially written in C, and for quite a long time had a mixed code base with minimal C++.
The companies adopting Rust aren't doing so because of a lack of exceptions; they would still adopt Rust if the language had exception support (which panic and std::ops::Try kind of are). It's rather the type safety that C and C++ aren't able to provide.
You would be surprised how many games actually do support exceptions.
Handling constructor failure is one of the least valuable use cases for exceptions. Idioms for exception-free construction are straightforward and some cases will require these idioms even with exceptions.
Resource exhaustion or hardware failures are the more straightforward use cases for exceptions, but doing anything clever in those cases requires writing similar handling code as you would without exceptions.
Maintaining state invariants is trivial without exceptions thrown from constructors. Just make constructors private and write a public static factory method. Allowing exceptions in constructors on the other hand creates the problem of: what should the destructor of an only partially constructed object do? It’s just unnecessary.
I think there is a misconception on your part. Formally, there is no "half-constructed" object, at least not in any way that's missing an automatic mechanism to unwind mid-way (i.e. half-deconstruct).
Each object construction is a list of N + 1 construction stages -- constructing the N subobjects (implicitly or as per the initializer list), followed by the constructor body.
The destructor has N + 1 stages too, those match exactly the constructor's stages. If there is an exception happening in any stage of the constructor, say stage E, naturally only stages 0 to E-1 get un-done (in reverse), but not E nor any later stage.
So what you have to do is imagine that all sub-objects are already constructed, and imagine there is a function that runs the constructor body and destructor body in a sequence. The constructor part could throw an exception, causing the destructor part to never run. Like any other function, it should be possible to run it without leaking anything if an exception happens. Make it "exception safe" using RAII or by being extra careful.
If you've written the constructor and destructor such that they match in this way (which is, again, like you would write any other function), then it will work correctly in all cases. This is a powerful concept and pretty much fool-proof. I say that as someone who has lots of concerns about the language's complexity -- including exceptions.
If you call mmap/VirtualAlloc/open/fopen in your constructor and later it throws, you will have a resource leak, because the destructor won’t clean it up.
Again, you need to make the constructor exception safe, that's just like any other function. Just imagine the constructor and destructor bodies as one combined function. (Of course don't forget the method calls in between but those should preserve the class invariant).
In the domain where MISRA is applied, resource exhaustion and hardware failures are totally valid scenarios that need to be processed in the same way as any other error.
Prohibiting exceptions is a toxic antipattern. Once you have more than one thread you want to propagate fatal errors in a sane way. ("Just crash the whole program, #yolo" is not a sane way.)
Exceptions are not a sane way to handle many error conditions either. Predictably and efficiently handling fatal error conditions will require custom handling code regardless. For many types of software, the downsides of exceptions are not offset by a corresponding benefit in real reliable systems. I’ve worked on systems that work both ways and I have a hard time recommending exception use.
On the other hand, once your embedded system is sufficiently large, people will want to use (and inevitably will use at some point) standard containers such as std::string or std::vector. And without exceptions, all of those might invoke UB at any time (I have yet to see a standard library that understands and honors -fno-exceptions; usually they just drop all try-catch blocks and all throws)
Could you elaborate on that? I'd love to see an example of this. Are you saying that even relatively simple code (using eg. std::vector) could easily cause UB if -fno-exceptions is enabled?
Your compiler vendor has to pick a (reasonable) behaviour though and apply it consistently, and while they are not required to document it (IIRC - I think that's just for implementation-defined?) you can probably get them to tell you if you have a good support relationship with them. Or you can just figure out what the compiler does, and hope they don't change the behaviour too much with the next release :-)
Most standard containers have no way to communicate allocation failure to the caller in the absence of exceptions (think of constructors that take a size). Worse, the implementations I’ve seen would eventually call operator new, assuming it would throw if it fails. That is, subsequent code would happily start copying data to the newly created buffer, without any further tests if that buffer is valid. In the absence of exceptions, that won’t work.
I guess my hope was that the program would just terminate immediately the instant it tries to throw an exception while -fno-exceptions is set, thus ideally preventing any further action from the program.
Well, what do you expect std::vector<T>::at() to do if the index is out of bounds and it can't throw exceptions? Or std::vector<T>::push_back() if it can't reallocate to a larger size?
These are just some obvious cases. Not to mention that any use of operator new is UB if memory allocation fails and the system can't throw exceptions.
In principle, I would agree with you, but the biggest problem is that the whole C++ ecosystem works the opposite way.
The main reason people use C++ over safer languages like Java is performance (memory, CPU speed, real-time constraints, etc.). And C++ the language is designed for performance, but only with the expectation of a very powerful optimizing compiler. Most C++ std classes are extraordinarily slow and inefficient if compiled without optimizations - certainly much slower than Java, for example.
So, C++ is not really C++ without aggressive optimizing compilers. And one of the biggest tools that compiler writers have found to squeeze performance out of C++ code is relying on UB not to happen. That essentially gives the optimizer some ability to reason locally about global behavior: "if that value were nullptr, this would be UB, so that value can't be nullptr so this check is not necessary". And this often extends to the well defined semantics of standard library classes outside their actual implementation - which rely on exceptions.
So, to get defined behavior out of the std classes in the absence of exceptions, either you disable many optimizations entirely, or you carefully write the optimizer to have different logic based on the no-exceptions flag. But all C++ committee members and C++ compiler writers believe exceptions are The Right Way, for every situation. So getting them to do quite a lot of work to support someone doing the wrong thing would be very hard.
In safety critical embedded systems there is no such thing as "program just terminating". The program is the only software that is running on your device, and you need to degrade execution to some safe state no matter what. Every error should be processed, ideally right where it occurred (so I am not a great fan of exceptions either).
at() with exception support is pretty much equivalent to a method returning an Option<T>. More precisely, it gives a superset of the functionality of returning Option<T>. If you declare the call site noexcept(), you should even get some compiler checking to make sure you handle the exception.
> If you declare the call site noexcept(), you should even get some compiler checking to make sure you handle the exception.
What compiler does that? At least g++ does not. It is not what the specification dictates either.
I can't see how it is a superset either. If the library returns an Option, the calling code can process it as it pleases, including throwing an exception. On the other hand, if the library only indicates error by throwing an exception, it cannot work with a caller that is built with exceptions disabled.
Oops, you're right, the whole point of noexcept is to promise to the compiler that you know in practice exceptions can't happen, I got confused...
Otherwise, I should point out I explicitly said "at() with exception support enabled". It's also important to note that the ability to disable exceptions is not a feature of C++; the C++ specs assume exceptions work (just like the Java or C# or Go specs). It is a feature of certain C++ implementations that they support this mode, just like they support other non-standard features (compiler intrinsics, various #pragmas, etc).
Still, even with exception support enabled, I can't see what you can do with a function that throws that you cannot do, in fewer lines of code, with a function that returns not just Option<T> but Result<T, E>.
Disabling exceptions is indeed not in the standard, probably because of Stroustrup's position (I respect many of his opinions, but cannot agree with this one) - but it's what every sane compiler, especially one targeted at embedded systems, will support. Exceptions are designed for a controlled environment where a terminating program will return to somewhere that will maybe add a line to a logging system and restart it automatically. They only complicate things when terminating is an unacceptable scenario.
Yes, Result<T, E> should be equivalent in power to exceptions (the missing E part is why I was saying it's a superset of Option<T> functionality).
Regarding exceptions being more code, I very much don't agree. Even for embedded apps, the pattern of "if this fails, wind back up to some top level event loop" is quite common, and exceptions give it to you for free if you're also using RAII. In contrast, with Result<T, E> you have to write code at every level of the stack to handle it. Code which gets particularly ugly when you combine it with things like map() or filter().
Prohibiting exceptions is necessary in the embedded space for, err, space and allocation reasons.
Just crash the system and reboot the MCU can make sense depending on the application. And where it can't, you need to take the same kind of care for handling every single problem at the call site, or correctly propagating it to a layer that can handle it.
Exceptions aren't special here, they are simply a way to do error handling and recovery.
It's the kind of rule that doesn't make sense for applications, but when you've got tightly constrained memory limits, it makes sense.
I have personally removed exceptions (to be fair, it was only a few) from an embedded application and introduced the -fno-exceptions flag. The binary size was reduced by ~20%, which can be important if you are doing SW updates to space and have a link budget... Also, the reduced code size is more cache friendly on a system with rather limited cache memories.
Well, the problem was most likely with the code then, not the language feature -- because the changes are quite localized to throw sites and catch sites, not spread throughout the code.
There are cases in MISRA's problem domain where the software watchdog is part of the same program, and fully crashing that program is a different, more severe error than alternatives.
It depends on the problem domain. For automotive embedded software? Definitely not. But Google, for instance, bans them in much of its server code under the principle that exceptions should denote circumstances where the entire program cannot recover and when a server node cannot recover, logging noisily and failing fast so that it can be restarted into a working state is preferable to trying to unwind a bad state.
Given that constraint, they conclude that the overhead of maintaining exception unwinding and the non-local control flow aren't worth it.
The classic GSG prohibition on exceptions has more to do with a lack of exception safety in their legacy code base than anything else. Promptly-crash-on-failure can be achieved by adopting a "don't catch exceptions" style, with significant advantages of not throwing away much of the strengths of RAII or needing the evil hack that is two-phase initialization.
You can for all intents and purposes avoid two phase init with factory functions (which is how Go does it) and private constructors.
(To my memory though, Google didn't throw away the benefits of RAII by disallowing exceptions... They discouraged complicated behavior in constructors, so the only thing a constructor could fail on was OOM, which was supposed to crash anyway.)
Each C/C++ guideline defines a language subset, de facto a new language, but with weak enforcement; in practice most organisations following one create some specific exceptions to the guidelines anyway. The C/C++ world (C less so) is a mess, it can't be fixed, and this agony will be very long-lasting.
Indeed. I don't want to be a shill, but "safe C++" has a name: it's called Rust. C++ is such an incredibly complex language filled with such easy-to-set-off footguns that there is no way out of this mess without a non-backwards-compatible solution, like Rust.
I conformed to MISRA C:2012 for an open source project and what's unfortunate is the lack of open source static analyzers for verifying MISRA compliance. Aside from Cppcheck [1] all other analyzers I found are commercial/proprietary. It's surprising because Clang should make writing an analyzer easier than ever.
Clang has its own limitations. And it takes more effort than just writing the checkers. We open sourced our previously proprietary static analyzer (mostly based on Clang but also integrated other useful tools) but the commercial/enterprise edition still has its own value in stability, quality assurance, and technical support. It's more like building a Linux distro (e.g. RHEL) from various FOSS components.
I am a C (not C++) hobbyist, and I try to read up on standards and style guides to understand what I should be trying to do. Given that MISRA has a C:2012, does anyone know how it compares to the SEI CERT C Coding Standard (which I'm reading right now along with Secure Coding in C and C++)?
Definitely a generalisation, but I would see them as two sides of the same coin.
If Cert has something about an API, then it will have an example of misuse of it and then an example showing its correct use.
MISRA might either prohibit its use completely (if there's a safe alternative), or require some boilerplate step to be taken every time the API is used.
MISRA is, as I understand it, a specification for language subsets that avoid undefined or under-defined behavior, thus avoiding bugs. What if a language has no undefined behavior and dynamic allocation is easily disabled? What would MISRA rules say?
In addition to avoiding undefined behavior, MISRA discourages language features that are "dangerous" like I/O and dynamic memory routines. It also defines some (controversial) styling rules, such as requiring functions to have exactly one exit point.
SE/SE is called out explicitly in the ISO26262 documents (functional safety in road vehicles). This makes it easy to defend the inclusion of such a guideline, even if you may not agree with it.
But as it happens, MISRA C++:2023 doesn't have this guideline.
Code styling rules, such as requiring functions to have exactly one exit point, could be carried over to Rust. Many existing rules are still relevant for unsafe blocks and there would likely be new rules for panic.
At least the wiki article on MISRA C cites multiple studies that consider the rules mostly pointless, in some cases outright counterproductive, and it basically accuses the standard of being beneficial only to companies selling compliance tools. If MISRA C++ is anywhere near as bad, you might be better off avoiding it.
With strict rules, anyone can write software that does not instantly fail horribly (e.g., if pointers are not allowed, there can't be a bad dereference, though that rule isn't even in MISRA). That helps with outsourcing to the lowest bidder.
I recently watched this webinar[0] about it, presented by the company ParaSoft, which develops a static analysis tool and compliance reporting solution for businesses required to use MISRA C++[1].
It’s a standard for the use of C++ in critical systems. Basically a guide to techniques and language features you can and can’t use (or how to use them) and still have very predictable/controllable system behavior.
Why not? Each project has a unique blend of acceptable risks and non-functional requirements. I promise you, you'd hate C++ if you had to write each project, safety critical or not, according to the least common denominator feature set for safety critical domains.
Ah yes, the motor industry, which with the exception of Tesla delivers some of the world's most miserable, buggy, unpleasant-to-use infotainment software, crippled by bureaucracy, introduces more of the latter in order to micromanage software developers even more.
Thanks, but no, thanks. I'd rather you completely stop developing software and just let me plug in my phone. The less software my car runs, the safer I feel, and I need nothing more but navigation and music, which my phone handles just fine.
In general you would apply MISRA only on some critical parts of the software, and the infotainment wouldn't be part of this. At least it was like this when I applied MISRA on high speed trains, and when I worked briefly at the company that invented the airbag, their infotainment box was a windows embedded with 0 SIL2+ software on it. (But I agree with your point tho).
You would wonder how much of the software in your Tesla's parts is written by those big bad German car suppliers that all work under the MISRA rules.
If the critical software in a Tesla were written by Tesla itself, with their understanding of quality, those cars would be death traps.
Hmm, what do you mean? I think this is mostly public knowledge. I don't know how much Tesla tries to inhouse, but I would guess it's roughly similar to other OEMs (kinda a wild guess tho)
Most of the components are designed by suppliers they partner with. I don't know that you're going to find a good hyperlink for this stuff.
Your car is running a hell of a lot more software than what you've just described. The ECU manages the engine; the BCM manages lights, wipers, airbags, and seatbelt pre-tensioners; you'll also have ABS (anti-lock brakes, so you can steer while braking instead of plowing straight ahead regardless of the orientation of your wheels) and either Traction Control or Electronic Stability Control. Every single one of these things is safety-critical, whose failure could cause a crash before you can react to it. It's this software that's written to the MISRA standards.
None of the fancy infotainment systems I observed the last time I went car shopping were likely to use it. They were based on third party frameworks and tools -- e.g. Android -- that weren't or couldn't possibly be built following this standard.
In general, systems that are safety critical or adjacent to such systems should use formal methods. On many newer vehicles, like the Tesla Cybertruck, the infotainment system also doubles as the instrument display, which makes it safety adjacent. As such, it should also use formal methods. Model checking, based on satisfiability, is a relatively low bar to achieve.
MISRA is a decent idiomatic framework that brings one closer to safe coding practices. However, it's not foolproof. Formal methods in theory _is_, but in practice it is not. Defense in depth is useful for designing and writing software. A good idiomatic style, unit testing, and formal methods each provide complementary checks.
So, my recommendation would be to choose a good idiomatic style -- and despite its relative complexity, MISRA is a good idiomatic style from a safety perspective. Then, build a good automated testing culture. Incorporate model checking, especially through any execution path that is safety critical. Finally, architect the system so that failures in things that don't matter (video, audio, "games", or navigation) don't impact things that do matter (instrument display or communication with critical systems).
I mostly agree, but don't be so sure that there's no C code, because if there is, you'd think they'd at least run it through the analyzer and say they looked at it and addressed the scary ones.
I think infotainment actually does sit within the safe software process, likely at the lowest level. I think static analysis recommendations could be relaxed for ASIL-A... I believe it is strongly recommended otherwise, which basically means "you most certainly must do it".
Worth pointing out that MISRA is NOT required, even for higher levels; compliance to a standard is. I'm just guessing, though: the chance of there being C code is not so small, and if so they're likely claiming some amount of MISRA-ness. Though you don't have to do MISRA, it's the de facto choice.
They might say "we're ASIL-A, it doesn't matter." Actually, I think development under those frameworks has ASPICE implications, and many of the same considerations.
- Most people have no idea how many modules there are in their vehicles. It’s a LOT more code than you realize.
- Companies like ERAS, Vector, Kvaser, Mentor, Bosch, and others have a great lock on software for systems they steer via SAE and ISO.
- AutoSar and other shitty systems. It doesn’t matter how good your programmers are if they are kneecapped out of the gate.
- Same as any programming lately at big companies… it’s 10 managers to every actual programmer.
- Outsourced. When I was at Chrysler, code was worked on, released, and tested locally for some modules. Now… oftentimes no one at Chrysler may even be allowed to see the code. A LOT of work is done in India.
MISRA isn’t even close to a problem in automotive. And I disagree it’s a wage problem. You could pay the engineers in India more, and it still wouldn’t give them any idea how the consumer will use the end product. It doesn’t help them understand repair/replace diagnostics. No one is monitoring the ever increasing complexity of the systems as a whole.
Overall, cars do work well. But damn if they aren’t trying to ruin that.
I'm very disturbed by some things, specific to the most recent vehicles especially. Lots of design done by competing teams that have tons of incentive to say things like "not my problem!", and point fingers.
Somewhat related to the proliferation of AutoSAR, but AutoSAR is just the solution, whether you like it or not. It could be replaced with some hypothetical "PerfectSAR", and this remains true.
None of this is exactly brand new. That said, the extent of the situation is worrying. Combine with this that we're trying to deliver so much more on tighter schedules... seems like a recipe for disaster.
Yeah, maybe. Agile is a nightmare in automotive since the process is so strict and the ensuing micromanagement makes doing things a gigantic burden.
Companies like Bosch have a strange dev process where they don't want to change code but instead make almost everything tunable by parameters, so that you can essentially program via config, to get around some internal process.
Their code is a nightmare to interface with.
AutoSar is a joke. I would rather each device have its own API than deal with that dynamic complex mess.
Programming simple ECUs without bloat for a car is done quickly, except for the engine and ABS system. You've got sensitive control loops in those. Those two systems are probably almost all of the programming work.
I had imagined that the main cost would be certification. But maybe if the hardware is similar enough to an already certified veteran car, you could piggyback off of that. OTOH I've heard you can't even swap out the engine of an old car without it needing re-certification.
[0]: https://github.com/naivesystems/analyze