Last weekend, I took an old cross-platform app, written by somebody else in C++ between 1994 and 2006, and faffed around with it until it compiled and ran on my modern Mac running macOS 14.x. I upped the CMAKE_CXX_STANDARD to 20, used Clang, and all was good. Actually, the biggest challenge was the shoddy code in the first place, which had nothing to do with its age. After I had it running, Sonar gave me 7,763 issues to fix.
The moral of the story? Backwards compatibility means never leaving your baggage behind.
> [M]any developers use C++ as if it was still the previous millennium. [...] C++ now offers modules that deliver proper modularity.
C++ may offer modules (in fact, it has offered them since 2020), but when it comes to their implementation in mainstream C++ compilers, things are only now becoming sort of usable, and modules remain a challenge in more complex projects due to compiler bugs in the corner cases.
I think we need to be honest and upfront about this. I've talked to quite a few people who have tried to use modules but were unpleasantly surprised by how rough the experience was.
The C++ Core Guidelines have existed for nearly 10 years now. Despite this, not a single implementation in any of the three major compilers exists that can enforce them. Profiles, which Bjarne et al have had years to work on, will not provide memory safety[0].
The C++ committee, including Bjarne Stroustrup, needs to accept that the language cannot be improved without breaking changes. However, it's already too late. Even if somehow they manage to make changes to the language that enforce memory safety, it will take a decade before the efforts propagate at the compiler level (a case in point is modules being standardised in 2020 but still not ready for use in production in any of the three major compilers).
> The C++ committee, including Bjarne Stroustrup, needs to accept that the language cannot be improved without breaking changes.
The example in the article starts with "Wow, we have unordered maps now!"
Just adding things modern languages have is nice, but doesn't fix the big problems.
The basic problem is that you can't throw anything out. The mix of old and new stuff leads to obscure bugs. The new abstractions tend to leak raw pointers, so that old stuff can be called.
C++ is almost unique in having hiding ("abstraction") without safety. That's the big problem.
I find the unordered_map example rather amusing. C++’s unordered_map is, somewhat infamously, specified in an unwise way. One basically cannot implement it with a modern, high performance hash table for at least two reasons:
1. unordered_map requires some bizarre and not widely useful abilities that mostly preclude hash tables with probing, notably the bucket API (bucket_count(), bucket(key), and local iterators), which all but mandates separate chaining.
2. unordered_map has fairly strict iteration and pointer invalidation rules that are largely incompatible with the implementations that turn out to be the fastest. See:
> References and pointers to either key or data stored in the container are only invalidated by erasing that element, even when the corresponding iterator is invalidated.
And, of course, this is C++, where (despite the best efforts of the “profiles” people), the only way to deal with lifetimes of things in containers is to write the rules in the standards and hope people notice. Rust, in contrast, encodes the rules in the type signatures of the methods, and misuse is deterministically caught by the compiler.
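To make the quoted rule concrete, here's a minimal sketch (my example, not from the thread): pointers to elements survive a rehash, iterators don't, and nothing catches you if you get it wrong.

    #include <cstdio>
    #include <string>
    #include <unordered_map>

    int main() {
        std::unordered_map<std::string, int> m{{"a", 1}};

        int* p = &m["a"];                 // pointer to the mapped value
        auto it = m.find("a");            // iterator to the same element
        std::printf("%d\n", it->second);  // fine: iterator still valid here

        m.rehash(1024);                   // force a rehash

        std::printf("%d\n", *p);          // still fine: the quoted rule keeps p valid
        // it->second                     // UB: the rehash invalidated the iterator
    }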
Like std::vector, std::unordered_map also doesn't do a good job on reservation, I've never been entirely sure what to make of that - did they not care? Or is there some subtle reason why what they're doing made sense on the 1980s computers where this was conceived?
For std::vector it apparently just didn't occur to C++ people to provide the correct API; Bjarne Stroustrup claims the only reason to use a reservation API is to prevent reference and iterator invalidation. -shrug-
[std::unordered_map was standardised this century, but, the thing standardised isn't something you'd design this century, it's the data structure you'd have been shown in an undergraduate Data Structures class 40 years ago.]
> For std::vector it apparently just didn't occur to C++ people to provide the correct API; Bjarne Stroustrup claims the only reason to use a reservation API is to prevent reference and iterator invalidation. -shrug-
Do you mean something like vector::reserve_at_least()? I suppose that, if you don’t care about performance, you might not need it.
FWIW, I find myself mostly using reserve in cases where I know what I intend to append and when I will be done appending to that vector forever afterwards.
I'm not familiar with vector::reserve_at_least but assuming that's an API which reserves capacity without destroying the amortized constant time of the exponential growth built in to the type, yes, that.
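For what it's worth, such a helper is easy to sketch on top of the existing API; the name and growth factor here are made up, but it shows the idea of reserving without sacrificing amortized O(1) push_back:

    #include <cstddef>
    #include <vector>

    // Hypothetical helper: make room for 'extra' more elements while keeping
    // the geometric growth that makes repeated push_back amortized O(1).
    template <typename T>
    void reserve_at_least(std::vector<T>& v, std::size_t extra) {
        const std::size_t needed = v.size() + extra;
        if (needed <= v.capacity()) return;
        const std::size_t grown = v.capacity() * 2;  // growth factor is illustrative
        v.reserve(needed > grown ? needed : grown);
    }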
You absolutely can throw things out, and they have! Checked exceptions (dynamic exception specifications), the old meaning of `auto`, and breaking changes to operator== are three that I know of. There were also some minor breaking changes to comparison operators in C++20.
They absolutely could say "in C++26 vector::operator[] will be checked" and add an `.at_unsafe()` method.
They won't though because the whole standards committee still thinks that This Is Fine. In fact the number of "just get good" people in the committee has probably increased - everyone with any brains has run away to Rust (and maybe Zig).
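For reference, the checked access already exists today as the opt-in spelling; the hypothetical change above would just flip the default:

    #include <iostream>
    #include <stdexcept>
    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};
        // v[10] compiles and is undefined behaviour: no diagnostic required.
        try {
            std::cout << v.at(10) << '\n';  // at() is the bounds-checked access
        } catch (const std::out_of_range& e) {
            std::cout << "caught: " << e.what() << '\n';
        }
    }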
Every major project that cares about perf and binary size would disable that via the option compiler vendors would obviously provide, like -fno-exceptions.
Rust's memory model and type system offer stronger guarantees, leading to better optimization of bounds checks, AFAIK.
There are more glaring issues to fix, like std::regex performance and so on.
It took me several reads to figure out that you probably meant ‘auto’ the storage class specifier. And now I’m wondering whether this was ever anything but a no-op in C++.
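As far as I know it was indeed a no-op for locals, since automatic storage duration was already the default. A minimal illustration:

    void demo() {
        // C++03 and earlier: 'auto' was a storage-class specifier, redundant here:
        //     auto int x = 5;   // valid C++03, ill-formed since C++11
        // C++11 repurposed the keyword for type deduction:
        auto y = 5;              // y is an int
        (void)y;
    }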
They were happy with C++ and it was the best thing since sliced bread.
They are now happy with rust and it is the best thing since sliced bread.
To me, languages have a, let's call it 'taste', for lack of a better word off the top of my head. It's that defining quality behind what pg called 'hacker's languages', such as C and Lisp, for example.
C++ feels like a bureaucratic monster with manual double bookkeeping: byzantine, baroque, up to outright weird and contradictory in places. Ever since Rust was conceived, I have given it multiple shots to learn. When I was not thrown off by what I perceive as Java-style annotations (i.e., something orthogonal to the language itself, where no one seems to have bothered to reach a consensus on expressing it from within the language), its general feel reminds me of something a C++ embracer will feel comfortable in. I.e., in pg's words, not a hacker's language, paired with a crusade of personal enlightenment. What used to be OO and GoF now is memory safety as-implemented-by-Rust (note: not by the borrow checker; we could've had this with Cyclone, for example, more than two decades ago).
I have, in my original comment, marked this as my personal opinion and feeling, as is the above. I'm not arguing. I love FP, and the idea of a systems language with FP concepts working out to memory safety and higher-level expression sounds like the holy grail of yester-me. I'm disappointed I couldn't find my professional salvation in Rust, given how uneasy I feel within the language. It's as if a suit and tie were forced on me, or a Hawaiian shirt and shorts (depending on your preference; imagine it's the thing you wouldn't voluntarily wear).
Now, if other folks also mirror my observation of how the folks flock from C++ to rust, you bet they take their mindset and pedestal with them to stand on and preach off of. At least those I know do, only their sermon changed from C++ to rust, the quality of their dogma remained constant.
Gotcha! I just didn't make the connection, when I read your comment I thought "what does a list of C++ features + the idea that people left it because they didn't like where it's going mean that the two languages are the same?"
I wasn't interested in arguing either, I was just trying to understand what you meant, and now I do. Thank you for sharing.
Rust was definitely created as an alternative to C++, but I don't really get your criticism. Unless you're just saying you don't like robust languages with very strong type systems or something?
To me, Rust feels as if it had sprung from the same mind. Or, in the case of C++, set of minds with a common mindset. I sadly can't criticize Rust's general design choices constructively. It's more of a public realization: "C++ mindset compatible" might just be the quality that describes the specific aroma I dislike in this melange.
I'm fine with robust languages with very strong type systems, I think. Are Haskell, ML, F#, Scala in this set? Robust and very strongly typed enough? I don't dislike their taste, even though I think I've had enough scala, specifically, for this life time. If these aren't in the set you're thinking of, I'd like to know what makes up that set for you.
"just get good" implies development processes that catch memory and safety bugs. Meaning what they are really saying between the lines is that the minimum cost of C++ development is really high.
Any C++ code without at least unit tests with 100% coverage run under the UB sanitizer etc. must be considered inherently defective, and the developer should be flogged for his absurd levels of incompetence.
Then there is also the need for UB aware formal verification. You must define predicates/conditions under which your code is safe and all code paths that call this code must verifiably satisfy the predicates for all calls.
This means you're down to the statically verifiable subset of C++, which includes C++ that performs asserts at runtime, in case the condition cannot be verified at compile time.
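A minimal sketch of that fallback (the function and its predicate are made up for illustration): state the condition under which the code is safe, and assert it at runtime where it can't be proven statically.

    #include <cassert>
    #include <cstddef>

    // Safety predicate: data is non-null and n <= len. Callers must satisfy
    // this for every call; the assert is the runtime fallback check.
    int sum_first(const int* data, std::size_t len, std::size_t n) {
        assert(data != nullptr && n <= len);
        int total = 0;
        for (std::size_t i = 0; i < n; ++i) total += data[i];
        return total;
    }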
How many C++ developers are trained in formal verification? As far as I am aware, they don't exist.
Any C++ developers reading this who haven't at least written unit tests with UB sanitizer for all of their production code should be ashamed of themselves. If this sounds harsh, remember that this is merely the logical conclusion of "just get good".
While I sort of agree with the complaint, personally I think the best spot for C++ in this ecosystem is still great backward compatibility plus marginal safety improvements.
I would never expect our 10M+ LOC performance-sensitive C++ code base to be formally memory safe, but so far only C++ has allowed us to maintain it for 15 years with partial refactors and minimal upgrade pain.
I think at least Go and Java have as good backwards compatibility as C++.
Most languages take backwards compatibility very seriously. It was quite a surprise to me when Python broke so much code with the 3.12 release. I think it's the exception.
I don't know about Go, but Java is pathetic. I have 30-year-old C++ programs that work just fine.
However, an application that I had written to be backward compatible with Java 1.4, 15 years ago, cannot be compiled today. And I had to make major changes to have it run on anything past Java 8, ~10 years ago, I believe.
Compared to C++ (or even Erlang), Go is pretty bad.
$DAYJOB got burned badly twice on breaking Go behavioral changes delivered in non-major versions, so management created a group to carefully review Go releases and approve them for use.
All too often, Google's justification for breaking things is "Well, we checked the code in Google, and publicly available on Github, and this change wouldn't affect TOO many people, so we're doing it because it's convenient for us.".
Java has had shit backwards compatibility for as long as I have had to deal with it. Maybe it's better now, but I have not forgotten the days of "you have to use exactly Java 1.4.15 or this app won't work"... with four different apps that each need their own different version of the JRE or they break. The only thing that finally made Java apps tolerable to support was the rise of app virtualization solutions. Before that, it was a nightmare and Java was justly known as "the devil's software" to everyone who had to support it.
That was probably 1.4.2_15, because 1.4.15 did not exist. What you describe wasn’t a Java source or binary compatibility problem, it was a shipping problem and it did exist in C++ world too (and still exists - sharing runtime dependencies is hard). I remember those days too. Java 5 was released 20 years ago, so you describe some really ancient stuff.
Today we don’t have those limits on HDD space and can simply ship an embedded copy of JRE with the desktop app. In server environments I doubt anyone is reusing JRE between apps at all.
While "Well, just bundle in a copy of the whole-ass JRE" makes packaging Java software easier, it's still true that Java's backwards-compatibility is often really bad.
> ...sharing runtime dependencies [in C or C++] is hard...
Is it? The "foo.so foo.1.so foo.1.2.3.so" mechanism works really well, for libraries whose devs manage not to ship backwards-incompatible changes in patch versions, or ABI-breaking changes in minor versions.
> Java's backwards-compatibility is often really bad.
“Often” is a huge exaggeration. I always hear about it, but never encountered it myself in 25 years of commercial Java development. It almost feels like some people are doing weird stuff and then blame the technology.
> Is it? The "foo.so foo.1.so foo.1.2.3.so"
Is it “sharing” or having every version of runtime used by at least one app?
> I always hear about it, but never encountered it myself in 25 years of commercial Java development.
Lucky you, I guess?
> Is it “sharing” or having every version of runtime used by at least one app?
I'm not sure what you're asking here? As I'm sure you're aware, software that links against dependent libraries can choose to not care which version it links against, or link against a major, minor, or patch version, depending on how much it does care, and how careful the maintainers of the dependent software are.
So, the number of SOs you end up with depends on how picky your installed software is, and how reasonable the maintainers of the libraries they use are.
> So, the number of SOs you end up with depends on how picky your installed software is, and how reasonable the maintainers of the libraries they use are.
And that is the hard problem, because it's a people problem, not a technical one, and it's platform-independent. When some Java app required a specific build of the JRE, it wasn't a limitation or requirement of the platform, but rather the choice of developers based on their expectations and level of trust. Windows still dominates the desktop space, and it's not uncommon for C++ programs to install or require a specific version of the runtime, so you eventually have lots of them installed.
I feel like a few decades ago, standards intended to standardize best practices and popular features from compilers in the field. Dreaming up standards that nobody has implemented, like what seems to happen these days, just seems crazy to me.
The language is improving (?), although IME it's beside the point: I'm finding the new features less useful for everyday code. I'm perfectly happy with C++17/20 for 99% of the code I write. And keeping backwards compatibility for most real-world software is a feature, not a bug, ok? Breaking it would actually make me move away from the language.
I hoped Sean would open-source Circle. It seemed promising, but it's been years and I don't see any tangible progress. Maybe I'm not looking hard enough?
Profiles will not provide perfect memory safety, but they go a long way to making things better. I have 10 million lines of C++. A breaking change (doesn't matter if you call it new C++ or Rust) would cost over a billion dollars - that is not happening. Which is to say I cannot use your perfect solution, I have to deal with what I have today and if profiles can make my code better without costing a full rewrite then I want them.
Changes which re-define the language to have less UB will help you if you want safety/correctness and are willing to do some work to bring that code to the newer language. An example would be the initialization rules in (draft) C++26. Historically, C++ was OK with you just forgetting to initialize a primitive before using it; that's Undefined Behaviour in the language, so if it happens, too bad, all bets are off. In C++26 that becomes Erroneous Behaviour: there's some value in the variable, not always guaranteed to be valid (which can be a problem for, say, booleans or pointers), but merely looking at the value is no longer UB. And if you forgot to initialize, say, an int or a char, that's fine, since any possible bit sequence is valid: what you did was an error, but it's not necessarily fatal.
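A minimal example of the case being described, as I understand the draft rules:

    #include <cstdio>

    int main() {
        int n;                  // forgot to initialize
        std::printf("%d\n", n);
        // Before C++26: reading n is undefined behaviour; all bets are off.
        // C++26: erroneous behaviour -- n holds some value, the read is an
        // error, but it no longer voids the meaning of the whole program.
    }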
If you're not willing to do any work then you're just stuck, nobody can help you, magic "profiles" don't help either.
But, if you're willing to do work, why stop at profiles? Now we're talking about a price and I don't believe that somehow the minimum assignable budget is > $1Bn
The first part is why I'm excited for future C++ - they are making things better.
The reason I like profiles is that they are not all-or-nothing. I can put them in new code only, or maybe in a single file that I'm willing to take the time to refactor. Or at least so I hope; it remains to be seen if that is how they work out. I've been trying to figure out how to make Rust fit in, but std::vector<SomeVirtualInterface> is a real pain to wrap into Rust, and so far I haven't managed to get anything done there.
The $1 billion is realistic: this project was a rewrite of a previous product that became unmaintainable, and inflation-adjusted the cost was $1 billion. You can maybe adjust that down a little if we are more productive, but not much. You can adjust it down a lot if you can come up with a way to keep our existing C++, extend it with new features, and fix the old code only where it really is a problem. The code we wrote in C++98 (because that was all we had in 2010) still compiles with the latest C++23 compiler, and since there are no known bugs it isn't worth updating that code to the latest standards, even though it would be a lot easier to maintain (which we never do) if we did.
> I can put them in new code only, or maybe a single file that I'm willing to take the time to refactor.
It's also expected that you'll be able to do this with Safe C++. Of course the interop with older C++ code will then still involve unsafety. But incremental improvement should be possible.
Enforcing style guidelines seems like an issue that should be tackled by non-compiler tools. It is hard enough to make a compiler without rolling in a ton of subjective standards (yes, the core guidelines are subjective!). There are lots of other tools that have partial support for detecting and even fixing code according to various guidelines.
It's part of a compiler ecosystem, i.e. the front end is shared.
See clang-tidy and clang analyzer for example.
ps: That's what I like most about the core guidelines: they are trying very hard to stick to guidelines (not rules) that pretty much uncontroversially make things safer _and_ can be checked automatically.
They're explicitly walking away from bikeshed painting like naming conventions and formatting.
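Running only those checks looks something like this (the file name is a placeholder; check names are globs):

    clang-tidy -checks='-*,cppcoreguidelines-*' some_file.cpp -- -std=c++20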
What are you talking about, the language gets better with each release. Using C++ today is a hell of a lot better than even 10 years ago. It seems like people hold "memory safety" as the most important thing a language can have. I completely disagree. It turns out you can build awesome and useful software without memory safety. And it's not clear if memory safety is the largest source of problems building software today.
In my opinion, having good design and architecture are much higher on my list than memory safety. Being able to express my mental model as directly as possible is more important to me.
> And it's not clear if memory safety is the largest source of problems building software today.
The Chromium team found that
> Around 70% of our high severity security bugs are memory unsafety problems (that is, mistakes with C/C++ pointers). Half of those are use-after-free bugs.
It’s possible you hadn’t come across these studies before. But if you have, and you didn’t find them convincing, what did they lack?
- Were the codebases not old enough? They’re anywhere between 15 and 30 years old, so probably not.
- Did the codebases not have enough users? I think both have billions of active users, so I don’t think so.
- Was it a “skill issue”? Are the developers at Google and Microsoft just not that good? Maybe they didn’t consider good design and architecture at any point while writing software over the last couple of decades. Possible!
There’s just one problem with the “skill issue” theory though. Android, presumably staffed with the same calibre of engineers as Chrome, also written in C++ also found that 76% of vulnerabilities were related to memory safety. We’ve got consistency, if nothing else. And then, in recent years, something remarkable happened.
> the percentage of memory safety vulnerabilities in Android dropped from 76% to 24% over 6 years as development shifted to memory safe languages.
They stopped writing new C++ code and the memory safety vulnerabilities dropped dramatically. Billions of Android users are already benefiting from much more secure devices, today!
You originally said
> And it's not clear if memory safety is the largest source of problems building software today.
It is possible to defend this by saying “what matters in software is product market fit” or something similar. That would be technically correct, while side stepping the issue.
Instead I’ll ask you: do you still think it is possible to write secure software in C++ by just trying a little harder, through “good design and architecture”, as your previous comment implied?
Two of the biggest use cases for modern C++ are video games and HFT, where memory safety is of absolutely minimal importance (unless you're writing some shitty DRM/anticheat). I work in HFT using modern C++ and bugs related to memory safety are vanishingly rare compared to logic and performance bugs.
Very much this. For some reason people assume that security/exploits are what the quote below is referring to, as if that's the end goal that software is trying to solve.
> it's not clear if memory safety is the largest source of problems building software today
The importance of memory safety depends on whether your code must accept untrusted inputs or not.
Basically 99% of networked applications that don't talk to a trusted server and all OS level libraries fall under that category.
Your HFT code is most likely not connecting to an exchange that is interested in exploiting your trading code so the exploit surface is quite small. The only potential exploit involves other HFT algorithms trying to craft the order books into a malicious untrusted input to exploit your software.
Meanwhile if you are Google and write an android library, essentially all apps from the play store are out to get you.
Basically C++ code is like an infant that needs to be protected from strangers.
A million times more systems were infiltrated due to PHP SQL injection bugs than were infiltrated via Chromium use-after-free bugs.
Let's keep some sanity and perspective here, please. C++ has many long-standing problems, but banging on the "security" drum will only drive people away from alternative languages. (Everyone knows that "security" is just a fig leaf they use to strong-arm you into doing stuff you hate.)
> Around 70% of our high severity security bugs are memory unsafety problems
> ~70% of the vulnerabilities Microsoft assigns a CVE
> 76% of vulnerabilities
What is the difference between the first two (emphasis added) and what you said? Just as a thought experiment...
If I measure a single factor to the exclusion of all others, I can find whatever I want in any set of data. Now, your point may be valid, but it is not what they published, and without the full dataset we cannot validate your claim. I can, however, validate that what you claim is not what they claim.
To answer the question in your final paragraph: yes, it is, but it requires the same cultural shift as it would take to write the same code in Rust or Swift or Go or whatever other memory-safe language you want to pick.
If Rust was in fact viable for such a large project, how's the Servo project going? Is that still the resounding success it was expected to be? Rust in the kernel? Is that going well?
The jury is still out on whether Rust will be mass-adopted and able to usurp C/C++ in the domains where C/C++ dominate. It may get there, but I would much, much rather start a new project in C++20 than in Rust, and I would still be able to make it memory safe. And yes, it is a "skill issue", but purely because legacy C++ is taught and accepted in new code in a codebase.
Rules for writing memory-safe C++ have not just been around for decades, they have been statically checkable for over a decade. But for a large project there are too many errors to universally apply them to existing code without years of work. However, if you submit new code using old practices, you should be held financially and legally responsible, just like an actual engineer in another field would be.
It's because we are lax about standards that it's even an issue.
As a note: if you see an Arc<Mutex<>> in Rust outside of some very specific library code, whoever wrote that code probably wouldn't be able to write the same code in a memory- and thread-safe manner without it; also, that is an architectural issue.
Arc and Mutex are synchronisation primitives that are meant to be used to build data structures, not to appear in "userspace" code. It's a strong code smell that is generally accepted in Rust. Arc probably shouldn't even need to exist at all, because reaching for it is a clear indication nobody thought about the ownership semantics of the data in question; maybe for some data structures it is required, but you should very likely not be typing it into general code.
If Arc<Mutex<>> is littered throughout your rust codebase you probably should have written that code in C#/Java/Go/pick your poison...
This whole concept that code should be architected as "libraries" and "userspace" is such a C++ism.
It's a really weird concept that probably comes only from having this extremely complex language where even the designers expect some parts of it are too weird for "normal programmers". But then they imagine some advanced class of programmer, the "library programmers", who can deal with such complexity.
The more modern way of designing software is to stick to the YAGNI principle: design your code to be simple and straightforward, and only extract out datastructures into separate libraries if and when they prove to be needed.
Not to mention, the position that shared ownership should just not exist at all is self-evidently absurd. The lifetime of an object can very well be a dynamic property of your program, and a concurrent one. A language that lacks std::shared_ptr / Arc is simply not a complete language, there will be algorithms that you just can't express.
So you strongly believe that the programmer should implement .map on arrays and hashmaps etc themselves? Well you will love C code then.
The point of library code is to implement these things once in a safe and efficient manner and reuse the implementation.
Sometimes there are more domain or even company specific things that should be implemented exactly once and reused.
Nobody said there are different tiers of developers like "library developers" and "normal developers". Those are different types of programming that a single developer can do but fundamentally require a different thought pattern. Designing datastructures and algorithms are a lot more CS whereas general programming is much more akin to plumbing. If you think library code isn't needed it's because you overlook the library code you already use.
There are some things that are not YAGNI; if you have those in place, then the rest of your code really can be implemented that way, because you literally won't need it.
It's not that shared_ptr isn't needed; it's that people don't use it where necessary. They use it because it's convenient not to think, and because the necessary library code isn't there. I stand strong that seeing std::shared_ptr/Arc (or even std::unique_ptr/Box) in general code is a code smell. The fact that you even said there are certain algorithms that cannot be expressed without it means you agree: the algorithm should be implemented exactly once and reused. If it's only used once then sure, it can be abstracted when needed, but that doesn't mean you shouldn't have to justify why it's there.
> Profiles, which Bjarne et al have had years to work on, will not provide memory safety
While I agree with this in a general sense, I think it ought to be quite possible to come up with a "profile" spec that's simply meant to enforce the language restriction/subsetting part of Safe C++ - meaning only the essentials of the safety checking mechanism, including the use of the borrow checker. Of course, this would not be very useful on its own without the language and library extensions that the broader Safe C++ proposal is also concerned with. It's not clear as of yet if these can be listed as part of the same "profile" specifications or would require separate proposals of their own. But this may well be a viable approach.
I have seen 3 different Safe C++ proposals (most are not papers yet, but they are serious efforts to show what safe C++ could look like). However, there is a tradeoff here: the full borrow-checker-in-C++ approach is incompatible with all current C++, and so adopting it is about as difficult as rewriting all your code in some other language. The other proposals are not as safe, but offer different levels of "you can use this with your existing code". None are ready to be added to C++, but they all provide something better, and I'm hopeful that something gets into C++ (though probably not before C++32).
I've seen maybe twice that many. Did one myself once. It's possible to make forward progress, but to get any real safety you have to prohibit some things.
Circle is 100% backward compatible with C++. That is a technical property of the language.
You are welcome to take your millions of lines of C++ code and it will compile without change using Circle as any valid C++ code is valid Circle code, which is the technical definition of being backward compatible.
You don't need to change existing code to use Circle or the new features Circle introduces, you can just write new classes and functions with those features and your existing code will continue to compile as-is.
You don't get the advantages of Circle if you are constantly dealing with code that returns raw pointers you have to deallocate, or APIs where you need to pass in an index which the called function then feeds to vector's operator[]. Safe C++ (from the same guy, from what I can tell) is only safe if you use the std2 containers and otherwise rewrite your C++ entirely. Sure, the world would be better if we did, but that would cost billions of dollars, so it isn't happening. What we need is a way to introduce some safety into code that already exists without spending billions and a lot of time rewriting it.
That is not backward compatibility. In the real world people mix C and C++ all the time without a lot of complex rewriting. Most of the time they don't even write a wrapper around the C, or if they do it is an easy/thin wrapper (generally you take a function returning a pointer you have to delete and wrap it in a smart pointer), not a deep rewrite of the C code.
All my efforts to do the above so I can mix C++ and Rust have quickly failed when I realized that my wrappers would not be thin, and thus they would carry large performance penalties.
The cxx crate offers partial interop between C++ and Rust - for example, it wraps the C++ unique_ptr (the "take a pointer you have to delete and make it a smart pointer" abstraction) so Rust can make use of it appropriately. It's nowhere near complete, but they do welcome patches and issue reports. Anyway, this isn't even all that relevant to Circle and Safe C++, that can potentially share more with C++ than Rust does, such as avoiding a separate heap abstraction so that Safe C++ might be able to free objects that were allocated in legacy C++ code, etc.
I was an extreme C++ bigot back in the late 90's, early 2000's. My license plate back then was CPPHACKR[1]. But industry trends and other things took my career in the direction of favoring Java, and I've spent most of the last 20+ years thinking of myself as mainly a "Java guy". But I keep buying new C++ books and I always install the C++ tooling on any new box I build. I tell myself that "one day" I'm going to invest the time to bone up on all the new goodies in C++ since I last touched it, and have another go.
When the heck that day will actually arrive, FSM only knows. The will is sort-of there, but there are just SO many other things competing for my time and attention. :-(
[1]: funny side story about that. For anybody too young to remember just how hot the job market was back then... one day I was sitting stopped at a traffic light in Durham (NC). I'm just minding my own business, waiting for the light to change, when I catch a glimpse out of my side mirror, of somebody on foot, running towards my car. The guy gets right up to my car, and I think I had my window down already anyway. Anyway, the guy gets up to me, panting and out of breath from the run and he's like "Hey, I noticed your license plate and was wondering if you were looking for a new job." About then the light turned green in my direction, and I'm sitting there for a second in just stunned disbelief. This guy got out of his car, ran a few car lengths, to approach a stranger in traffic, to try to recruit him. I wasn't going to sit there and have a conversation with horns honking all around me, so I just yelled "sorry man" and drove off. One of the weirder experiences of my life.
Funny, sounds like the Simpsons gag from the same time period: “what’s wrong with this country? Can’t a man walk down the street without being offered a job?”
Interesting. I was SO into the Simpsons at one time, but somehow I'd never seen that episode (as best as I can remember anyway). Now I feel the urge to go back and rewatch every episode of the Simpsons from the beginning. It would be fun, but man, what a time sink. I started the same thing with South Park a while back and stalled out somewhere around Season 5. I'd like to get back to it, but time... time is always against us.
That episode is by far my #1 favorite. Season 8 Episode 2, “You Only Move Twice”, during the period considered by most to be the peak of the Simpsons show quality, and IMO the best episode of the season.
Cypress Creek was intended to be a reference to Silicon Valley and the tech companies there of the time, and it’s got some of the best comedy in the season (Hank Scorpio is the best one-off character ever in the show IMO.)
Note to the above: I am wrong. My license plate back then was C++HACKR, with the actual "+" signs. NC license plates do allow that, although while the +'s are on the tag, they don't show up on your registration card or in the DMV computer system.
I mixed up the tag and my old domain name, which was "cpphacker.co.uk" (and later, just cpphacker.com/org).
The programmers on the sound team at the video game company I worked for as an intern in 1998 would always stash a couple of extra void pointers in their classes just in case they needed to add something in later. Programmers should never lose sight of pragmatism. Seeking perfection doesn’t help you ship on time. And often, time to completion matters far more than robustness.
How does enforcing profiles per-translation unit make any sense? Some of these guarantees can only be enforced if assumptions are made about data/references coming from other translation units.
This is the one major stumbling block for profiles right now that people are trying to fix.
C++ code involves numerous templates, and the definition of those templates is almost always in a header file that gets included into a translation unit. If a safety profile is enabled in one translation unit that includes a template, but is omitted from another translation unit that includes that same template... well what exactly gets compiled?
The rule in C++ is that it's okay to have multiple definitions of a declaration if each definition is identical. But if safety profiles exist, this can result in two identical definitions having different semantics.
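A sketch of the trap, with a hypothetical profile setting:

    // widget.h -- included from two translation units
    #include <vector>

    template <typename T>
    T* find_elem(std::vector<T>& v, const T& x) {
        for (auto& e : v)
            if (e == x) return &e;
        return nullptr;
    }

    // tu1.cpp: includes widget.h, built with a (hypothetical) safety profile on
    // tu2.cpp: includes widget.h, built with the profile off
    //
    // Both TUs instantiate find_elem<int>, and the linker keeps exactly one
    // copy. The tokens are identical but the intended semantics differ, so
    // which behaviour you get is effectively arbitrary.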
I guess modules are supposed to be the magic solution for that; Bjarne shows them in this article, even using import std.
It's a bit optimistic, because modules are still not really a viable option in my eyes: you need proper support from the build systems, and notably CMake only has limited support for them right now.
Modules alone do not guarantee one definition per entity per linked program. On the contrary, build systems are needing to add design complexity to support, for instance, multiple built module interfaces for the std module because different translation units are consuming the std module with different settings -- different standards versions for instance.
I've been playing with building out an OpenGL app using C++23 on bleeding-edge CMake and Clang, and it really is a breath of fresh air... I do run into bugs in both, but it is really nice. Most of the bugs are related to import std, though, which is expected... Oh, and clangd (LSP) still has very spotty support for modules.
The tooling is way better than it was 6 months ago though, as in I can actually compile code using import std outside a Visual Studio project.
I will be extremely happy the day I no longer need to see a preprocessor directive outside of library code.
Seeing badly formatted code snippets without colour highlighting in an article called "21st Century C++" somehow resonates with my opinion of how hard C++ still is to write and to read after working with other languages.
This honestly looks like C++ being jury-rigged with features to the point that it doesn't even look like what C++ is: a C-derived low-level language.
Everything is unobvious magic. Sure, you stick to a very restricted set of API usages and patterns, and all the magic allocation/deallocation happens out of sight.
But does that make it easier to debug? Easier to code?
This simply looks like C++ trying not to look like C++: like a completely different language, but one that was not built from the ground up to be that language, rather a bunch of shell games to make it look like another language as an illusion.
Yeah, I didn't have a problem keeping my shit straight in C++ in the '90s. The kitchen-sink approach since then hasn't been worth keeping up with. The fact that we're still dealing with header files means that the language stewards' priorities are not in line with practical concerns.
Here's how Bjarne describes that first C++ program:
"a simple program that writes every unique line from input to output"
Bjarne does thank more than half a dozen people, including other WG21 members, for reviewing this paper, maybe none of them read this program?
More likely, like Bjarne they didn't notice that this program has Undefined Behaviour for some inputs and that in the real world it doesn't quite do what's advertised.
The collect_lines example won't even compile, it's not valid C++, but there's undefined behavior in one of the examples? I'm very surprised and would like to know what it is, that would be truly shocking.
Really? If you've worked with C++ it shouldn't be shocking.
The first example uses the int type. This is a signed integer type and in practice today it will usually be the 32-bit signed integer Rust calls i32 because that's cheap on almost any hardware you'd actually use for general purpose software.
In C++ this type has Undefined Behaviour if allowed to overflow. For the 32-bit signed integer that will happen once we see 2^31 identical lines.
In practice the observed behaviour will probably be that it treats 2^32 identical lines as equivalent to zero prior occurrences and I've verified that behaviour in a toy system.
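For the curious, a reconstruction of the failure mode (not Bjarne's exact code), assuming a 32-bit int:

    #include <iostream>
    #include <map>
    #include <string>

    // Print each line the first time it is seen, counting occurrences.
    int main() {
        std::map<std::string, int> seen;
        for (std::string line; std::getline(std::cin, line); )
            if (++seen[line] == 1)          // signed overflow -- hence UB --
                std::cout << line << '\n';  // once one line repeats 2^31 times
    }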
"Undefined behavior" is not a bug. It's something that isn't specified by an ISO standard.
Rust code is 100 percent undefined behavior because Rust doesn't have an ISO standard. So, theoretically some alternative Rust compiler implementation could blow up your computer or steal your bitcoins. There's no ISO standard to forbid them from doing so.
(You see where I'm going with this? Standards are good, but they're a legal construct, not an algorithm.)
Yeah, legal constructs are not actually real and are based on circular logic. (And not just in software, that's a property of legal constructs in general.)
I haven't read much from Bjarne but this is refreshingly self-aware and paints a hopeful path to standardize around "the good parts" of C++.
As a C++ newbie I just don't understand the recommended path I'm supposed to follow, though. It seems to be a mix of "a book of guidelines" and "a package that shows you how you should be using those guidelines via implementation of their principles".
After some digging it looks like the guidebook is the "C++ Core Guidelines":
> use parts of the standard library and add a tiny library to make use of the guidelines convenient and efficient (the Guidelines Support Library, GSL).
Which seems to be this (at least Microsoft's implementation):
And I'm left wondering, is this just how C++ is? Can't the language provide tooling for me to better adhere to its guidelines, bake in "blessed" features and deprecate what Bjarne calls, "the use of low-level, inefficient, and error-prone features"? I feel like these are tooling-level issues that compilers and linters and updated language versions could do more to solve.
The problem with 45 years of C++ is that different eras used different features. If you have 3 million lines of C++ code written in the 1990's that still compiles and works today, should you use new 202x C++ features?
I still feel the sting of being bit by C++ features from the 1990s that turned out to be footguns.
Honestly, I kinda like the idea of "wrapper" languages. Typescript/Kotlin/Carbon.
I'm curious about that now, too. Is there the equivalent of Python's ruff or Rust's cargo clippy that can call out code that is legal and well-formed but could be better expressed another way?
Clang-tidy can rewrite some old code into better style. However, there is a lot of working code from the 1990s that cannot be automatically rewritten into a new style. Which is what makes adding tooling hard: somehow you need to figure out which code should follow the new style, and which is old style where updating to modern would be too expensive.
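For code that can be migrated mechanically, the invocation is along these lines (the two check names are real, the file name is a placeholder):

    clang-tidy -checks='-*,modernize-use-auto,modernize-loop-convert' -fix legacy.cpp -- -std=c++17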
> As a C++ newbie I just don't understand the recommended path I'm supposed to follow, though
Did you even read the article? He gives the recommended path in the article itself.
Two books describe C++ following these guidelines except when illustrating errors: "A Tour of C++" for experienced programmers and "Programming: Principles and Practice Using C++" for novices. Two more books explore aspects of the C++ Core Guidelines:
J. Davidson and K. Gregory: Beautiful C++: 30 Core Guidelines for Writing Clean, Safe, and Fast Code. Addison-Wesley. 2021. ISBN 978-0137647842.
R. Grimm: C++ Core Guidelines Explained. Addison-Wesley. 2022. ISBN 978-0136875673.
> And I'm left wondering, is this just how C++ is? Can't the language provide tooling for me to better adhere to its guidelines
Well, first, the language can't provide tooling: C++ is defined formally, not through tools; and tools are not part of the standard. This is unlike, say, Rust, where IIANM - so far, Rust has been what the Rust compiler accepts.
But it's not just that. C++ design principles/goals include:
* multi-paradigmatism;
* good backwards compatibility;
* "don't pay for what you don't use"
and all of these in combination prevent baking in almost anything: It will either break existing code; or force you to program a certain way, while legitimate alternatives exist; or have some overhead, which you may not want to pay necessarily.
And yet - there are attempts to "square the circle". An example is Herb Sutter's initiative, cppfront, whose approach is to take in an arguably nicer/better/easier/safer syntax and transpile it into C++.
Same. Luckily my team switched to Rust almost 100%, so I don't need to learn the godforsaken coroutine syntax, the pitfalls they laid for when you use char wrong with it, or the subset of calls where std::ranges does something stupid and causes a horrible performance regression.
Bjarne has been criticized for accepting too many (questionable) things into the language even at the dawn of C++, and the committee kept up that behavior. Moreover, they have this pattern of, given the options, always choosing the easiest-to-misuse and most unsafe implementation of anything that goes into the standard. std::optional is a mess, so is curly-bracket initialization, and auto is like choosing between stepping on Legos or putting your arm into a bag full of spiders.
The committee is the worst combination of "move fast and break things" and "not on my watch". C++98 was an okay language, C++11 was alright. Anything after C++14 is a minesweeper game with increasing difficulty.
> Bjarne has been criticized for accepting too many (questionable) things
He even writes that way in his own article... The quote from the last section of the introduction was hilarious, and actually made me laugh a little bit for almost those exact reasons.
Bjarne Stroustrup, Comm. ACM: "I would have preferred to use the logically minimal vector{m} but the standards committee decided that requiring from_range would be a help to many."
I went from being curious about C++, to hating C++, to wanting to love it, to being fine with it, to using it for work for 5+ years, to abandoning it and finally to want to use it for game development, maybe. It's the circle of life.
The masochist in me keeps coming back to c++. My analogy of it to other languages is that it’s like painting a house with a fine brush versus painting the Mona Lisa with a roller. Right tool for the job I suppose.
It's my job and career (well, C and C++), but I often try to avoid C++. Whenever I use it (usually writing tests) I go through this cycle of re-learning some cool tricks, trying to apply them, realizing they won't do what I want or that the syntax is awkward and more work than the dumb way, and I end up hating C++ and feeling burned yet again.
> contemporary C++30 can express the ideas embodied in such old-style code far simpler
IMO, newer C++ versions are becoming more complex (too many ways to do the same thing), less readable (prefer explicit types over 'auto' unless unavoidable), and harder to analyse for performance and memory implications (it's hard to even track down what is happening under the hood).
I wish the C++ language and standard library would have been left alone, and efforts went into another language, say improving Rust instead.
I have used auto liberally for 8+ years; maybe I'm just accustomed to reading code containing it, but I really can't think of it being a problem. I feel like auto increases readability; the only thing I dislike is that they didn't make it a reference by default.
Where do you see difficult to track down performance/memory implications? Lambda comes to mind and maybe coroutines (yet to use them but guessing there may be some memory allocations under the hood). I like that I can breakpoint my C++ code and look at the disassembly if I am concerned that the compiler did something other than expected.
Given how important backwards compatibility is for C++, it's either take over a basically unused keyword or come up with something so weird that would never appear in existing code.
Java solved this by making var a reserved type, not a keyword, but I don't know if that's feasible for C++.
E.g. `std::ranges::for_each`, where lambda captures a bunch of variables by reference. Like I would hope the compiler optimizes this to be the same as a regular loop. But can I be certain, when compared to a good old for loop?
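Concretely, the question is whether these two forms compile to the same thing; in my experience they usually do once the lambda is inlined, but you are trusting the optimizer:

    #include <algorithm>
    #include <vector>

    int sum_squares(const std::vector<int>& v) {
        int total = 0;
        std::ranges::for_each(v, [&](int x) { total += x * x; });  // captures by reference
        return total;
        // The hand-written loop you hope it reduces to:
        //     for (int x : v) total += x * x;
    }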
To be fair std::ranges seems like the biggest mistake the committee allowed into the language recently.
Effectively, other than rewriting older iterator-based algorithms to the new ranges iterators, I just don't use std::ranges... Likely the compiler cannot optimise it as well (yet), and all the edge cases are not worked out yet. I also find it quite difficult to reason about vs. older iterator-based algorithms.
for_each takes a lambda and calls it for each element in the iterator pair's range; if the compiler can optimise it, it becomes a loop, and if it can't, it becomes a function call in a loop, which probably isn't much worse... If for some reason the lambda needs to allocate per iteration, it's going to be a performance nightmare.
Would it really be much harder to take that lambda, move it to a templated function that takes an iterator and call it the old fashioned way?
Yeah, the std::ranges implementation is a bit of a mess. The inability to start clean without regard for backward compatibility reasons limits what is possible. I think most people see how you could implement comparable functionality with nicer properties from a clean sheet of paper. It is the curse of being an old language.
Just ban ranges lib, it is hot garbage anyway. The compilers are able to optimize lambdas fairly well nowadays(when inlined), I wouldn't be that concerned.
You don't 'have' to keep up with the language, and I don't know that many people try to keep up with every single new feature. But it is worse to be one of those programmers for whom C++ stopped at C++03 and who fight any feature introduced since then (the same people generally have strong opinions about templates too).
There are certainly better tools for many jobs and it is important to have languages to reach for depending on the task at hand. I don't know that anything is better than C++ for performance sensitive code.
I’ve been using c++ since the late 90’s but am not stuck there.
I was using c++11 when it was still called c++0x (and even before that when many of the features were developing in boost).
I took a break for a few years over c++14, but caught up again for c++17 and parts of c++20...
Which puts me 5-6 years behind the current state of things and there’s even more new features (and complexity) on the horizon.
I’m supportive of efforts to improve and modernize c++, but it feels like change didn’t happen at all for far too long and now change is happening too fast.
The ‘design by committee’ with everyone wanting their pet feature plus the kitchen sink thrown in doesn’t help reduce complexity.
Neither does implementing half-baked features from other ‘currently trendy’ languages.
It’s an enormous amount of complexity - and maybe for most code there’s not that much extra actual complexity involved but it feels overwhelming.
If you already used C++20 you aren't meaningfully behind, very little of interest has been introduced since then, and much of it isn't usable yet because of implementation issues.
I’ve touched on some of c++20, but haven’t used it extensively.
Specifically here are areas I haven’t used that appear to have nontrivial amounts of complexity, footguns, syntax and other things to be aware of:
* Ranges
* Modules
* Concepts
* Coroutines
Each of these is a large enough topic that it will involve time and effort to reach an equivalent level of competence and understanding that I have with other areas of c++.
I don’t mind investing time learning new things but with commentary around the web (and even this thread) calling the implementation and syntax a hot mess, at some point it’s a better investment to put that learning in to a language without all the same baggage.
I really wish c++ had gone with breaking change epochs for c++20.
If you only read HN, you would think C++ died years ago.
As someone who worked in HFT, C++ is very much alive, and new projects continue to be created in it simply because of the sheer number of experts in it. (For better or for worse)
I have listened to a few podcasts by HFT people. It sounds like you try to maximize performance and use a lot of C++ skills. Very interesting to listen to, but I wonder: how does anyone pick up the skills?
C++ has been dead and effectively banned at amzn for years. Only very specific (robotics and ML generally) projects get exemptions. Rust is big and only getting bigger
You don't have to "keep up with it", if by this you mean what I think you mean.
You don't have to use features. Instead, when you have a (language) problem to solve or something you'd like to have, you look into the features of the language.
Knowing they exist beforehand is better but is the hard part, because "deep" C++ is so hermetic that it is difficult to understand a feature when you have no idea which problem it is trying to solve.
I think it's good enough for side projects. More powerful than C, so I don't need to hand-roll strings and some algos, but I tend to keep to a minimum number of features because I'm such an amateur.
> I used the from_range argument to tell the compiler and a human reader that a range is used, rather than other possible ways of initializing the vector. I would have preferred to use the logically minimal vector{m} but the standards committee decided that requiring from_range would be a help to many.
Oh so I have to remember from_range and can't do the obvious thing? Great. One more thing to distract me from solving the actual problem I'm working on.
What exactly is wrong with the C++ community that blinds them to this sort of thing? I should be able to write performant, low-level code leveraging batteries-included algorithms effortlessly. This is 2025 people.
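For anyone who hasn't seen it, the C++23 spelling looks roughly like this; the element type is my choice, picked so the conversion from the map's value_type works:

    #include <map>
    #include <ranges>   // std::from_range
    #include <string>
    #include <utility>
    #include <vector>

    int main() {
        std::map<std::string, int> m{{"a", 1}, {"b", 2}};
        // The blessed form: the tagged constructor spells out "from a range".
        std::vector<std::pair<std::string, int>> v(std::from_range, m);
        // What Bjarne says he'd have preferred -- vector{m} -- is not valid C++.
    }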
Since C++14 or C++17 I've felt no need to keep up with it. It's cool that they add a bunch more stuff, but what I'm using works great now. I only feel some "peer pressure" to signal to other people that I know C++20, but as of now I've put nothing into it. I think it's best to lag behind a few years (for this language, specifically).
The compilers tend to lag a few years behind the language spec too, especially if you have to support platforms where the toolchains lag latest gcc/clang (Apple / Android / game consoles).
Respectfully, you might want to add at least a few C++20 features into your daily usage?
consteval/constinit guarantee to do what you usually want constexpr to do. I've personally found consteval great for making lookup tables and reducing the number of constants in code (and C++23 expands what can be done in consteval).
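For instance, a compile-time lookup table along these lines (my own sketch, not the parent's code):

    #include <array>
    #include <cstdint>

    // consteval forces evaluation at compile time: the table is baked into
    // the binary and never computed at runtime.
    consteval std::array<std::uint8_t, 256> make_bitcount_table() {
        std::array<std::uint8_t, 256> t{};
        for (int i = 0; i < 256; ++i)
            for (int b = 0; b < 8; ++b)
                t[i] += static_cast<std::uint8_t>((i >> b) & 1);
        return t;
    }

    constexpr auto kBitCounts = make_bitcount_table();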
Designated initializers are a game-changer for filling structures. No more accidentally populating the wrong member in a structure initializer, or writing individual assignments for each value you want to initialize.
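A quick illustration (the struct is made up):

    struct WindowConfig {
        int width = 640;
        int height = 480;
        bool fullscreen = false;
    };

    // C++20 designated initializers: members are named at the call site, so
    // adding or reordering fields can't silently shift values into the wrong slot.
    WindowConfig cfg{.width = 1920, .height = 1080, .fullscreen = true};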
On the other hand, the decline of robust and high-quality software started with the introduction of very immature languages such as JavaScript and TypeScript and their ecosystems.
It's really any other language other than those two.
Loving that he writes 'int main() { ... }' and never returns an int from it. Even better: without extra error/warning flags the compiler will just eat this and generate some code from it, returning... yeah. Your guess is probably better than mine.
If the uber-bean-counter, herald of the language of bean counters, demonstrates unwillingness to count beans, maybe the beans are better counted in another way.
Well, actually... the "main" function is handled specially in the standard. It is the only function with a non-void return type that you don't need to explicitly return from: if control reaches the end of main, it is treated as if you had returned 0.
(With any other function you will most definitely get at least a compiler warning if you try this.)
You might say this is very silly, and you'd be right. But as quirks of C++ go it is one of the most benign ones. As usual it is there for backwards compatibility.
And, for what it's worth, the uber-bean counter didn't miss a bean here...
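For reference, the quirk in question:

    int main() {
        // No return statement. For main -- and only for main -- falling off
        // the end is defined to behave as if 'return 0;' were executed.
        // In any other function with a non-void return type, flowing off the
        // end is undefined behaviour.
    }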
They don't have to. The subset depends on the job! That's the beauty and power of C++. That's why we have projects written in it in all domains. From websites to spaceships and Mars rovers.
yes, and you will tell me exactly what subset and coding convention "makes sense" for this domain, and you will give your reasoning too. And I will give my arguments, and on and on it goes. teams have broken up over this.
A well-designed language is one in which there are very few different ways of doing the same thing. And C++ is definitely not that.
Another feature of a well-designed language is how well it is able to separate features for library writers vs application writers. I have seen way too many smart coders end up polluting application code with unnecessarily complex features of C++ meant for library writers.
Why would a well-designed language have only one or a few ways to do the same thing? Seems rather arbitrary. I like it when I have many ways to do the same thing.
Imagine if you told a writer or poet that English is bad because there is more than one way to say the same thing...
Programming languages are for people more than machines. Machines are happy with microcode.
Bjarne Stroustrup (the creator of C++) is the best language designer. Many language designers will create a language, work on it for a couple years, and then go and make another language. Stroustrup on the other hand has been methodically working on C++ and each year the language becomes better.
Modules sound cool for compile time, but do they prevent duplicative template instantiations? Because that's the real performance killer in my experience.
(It's a great post in general. N.B. that it's also quite old and export templates have been removed from the standard for quite some time after compiler writers refused to implement them.)
TL;DR: Declare your templates in a header, implement them in a source file, and explicitly instantiate them inside that same source file for every type that you want to be able to use them with. You lose expressiveness but gain compilation speed because the template is guaranteed to be compiled exactly once for each instantiation.
Which is to say, "extern template" is a thing that exists, that works, and can be used to do what you want to do in many cases.
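A sketch of the pattern, with illustrative names (the header/source split is shown via comments):

    // ---- big_algo.h: declaration only, no template body ----
    template <typename T>
    T big_algo(T x);

    // tell every including TU not to instantiate this itself
    extern template int big_algo<int>(int);

    // ---- big_algo.cpp: the body, plus one explicit instantiation per type ----
    template <typename T>
    T big_algo(T x) {
        // ... heavy template body ...
        return x;
    }

    template int big_algo<int>(int);  // compiled exactly once, right here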
The "export template" feature was removed from the language because only one implementer (EDG) managed to implement them, and in the process discovered that a) this one feature was responsible for all of their schedule misses, b) the feature was far too annoying to actually implement, and c) when actually implemented, it didn't actually solve any of the problems. In short, when they were asked for advice on implementing export, all the engineers unanimously replied: "don't". (See https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2003/n14... for more details).
1. You gain the ability to use the compilation unit's anonymous namespace instead of a detail namespace, so there is better encapsulation of implementation details. The post author stresses this as the actual benefit of export templates, rather than compile times.
2. You lose the ability to instantiate the template for arbitrary types, so this is probably a no-go for libraries.
3. Your template is guaranteed to be compiled exactly once for each explicit instantiation. (Which was never actually guaranteed for real export templates).
I have often thought about writing something vaguely similar. We’ll see if I ever do. It wouldn’t be the same, because I don’t hold the position Bjarne did in the early days, but I am very interested in Rust history and want to preserve it. It would be from my perspective rather than from the creator’s perspective.
I did give a talk one time on Rust’s history. It was originally at FOSDEM, but there was an issue with the recording. The ACM graciously asked me to do it again to get it down on video https://dl.acm.org/doi/10.1145/2959689.2960081
Unfortunately, Rust is significantly less expressive than C++ and therefore is unlikely to replace it for high-performance systems code. As much as I don’t like C++, it is very powerful as a tool. The ability to express difficult low-level systems constructs and optimizations concisely and safely in the language is its killer feature. Once you know how to use it, other languages feel hobbled.
C++ doesn't allow you to express low level systems constructs concisely and safely though. You usually get neither.
Look at the first example in the article, where the increment can overflow and cause UB despite that overflow having completely defined semantics at the hardware level. Fixing it requires either a custom addition function or C++26, another include, and add_sat(). I wouldn't consider either concise in a program that doesn't include all of std.
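To make the contrast concrete (a sketch; std::add_sat is C++26, the fallback is hand-rolled):

    #include <limits>
    // C++26: #include <numeric> and write std::add_sat(x, 1) instead

    int bump(int x) {
        return x + 1;  // UB when x == INT_MAX
    }

    int bump_sat(int x) {
        return x == std::numeric_limits<int>::max() ? x : x + 1;  // defined
    }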
This assumes you are writing C++ in the most naive way possible. I’m sure some people do that but nothing requires it. The capabilities of a language are not defined by its worst programmers.
Modern C++ allows you to swap out most features and behaviors of the language with your own implementations that make different guarantees. C++ is commonly used in high-assurance environments with extremely high performance requirements, and it remains the most effective language for these purposes because you can completely replace most of the language with something that makes the safety guarantees you require. This is rather important. For example, userspace DMA is idiomatic in e.g. high-performance database kernels; handling this is much safer in C++ than Rust. In C++, you can trivially write elegant primitives that completely hide the unusual safety model. In Rust, you have to write a lot of ugly unsafe code to make this work at all, because userspace DMA isn’t compatible with a borrow checker. There can always be multiple mutable references to memory, but this is not knowable at compile time; the safety of an operation can only be arbitrated at runtime.
Of course, it is still incumbent on the developer to use the language competently in all cases.
> The capabilities of a language are not defined by its worst programmers.
Is the implication here that Bjarne is a bad C++ developer? If the person in charge of the EWG fails "to use the language competently in all cases", what hope is there for the rest of us mere mortals?
For what it's worth, unsafe Rust is safer than C++. There's very little UB to explode your carefully crafted implementations. Safe Rust of course has no UB except for what you write in unsafe blocks, so it's safer still, and there's no real difference in the abstractions you can write with concepts vs traits.
I'm not actually arguing for Rust here though, because this isn't a great showing for it. Trying to write the related add_wrap(T, T) function in Rust is stupidly verbose compared to add_sat(T, T), thanks to bad decisions the num_traits authors made. What I am saying is that C++ isn't a form of high-level assembly like your original comment suggested. Understanding the relationship between the language and the hardware takes a lot of experience that most people don't use when writing code.
UB is a feature of the standard, not the implementation. Many of those behaviors can be defined. Modern C++ conveniently allows you to replace many of the bits that have UB, per standard, with your own bits with defined behavior with zero overhead. This was not always the case. You aren’t dependent on the compiler implementor. The ability to consistently do this transparently became practical around C++17 IMO. The C++ standard library is in many regards obsolete and many orgs treat it that way.
I never suggested that C++ was “a form of high level assembly”. I’ve written enough assembly and C to know better; you lose a bit of precision with C++. But now I can define (or not) the behavior I want in a way that is largely transparent. This has been a brilliant change to the language.
If you have a foundational library that makes different and/or explicit guarantees than std, it is pretty easy to police that in a code base with automation. Everyone doing high-performance and/or high-assurance systems is dragging in few if any dependencies, so this is practical. The kinds of things that C++ is really good at for new code are the kinds of things where this is what you would do regardless.
Developers don’t even have to be hardware experts, they just have to not use std for most things. That is a pretty low barrier. And std is a mess with the albatross of legacy support. Reimagined C++20 native “standard” libraries are much, much cleaner and safer (and faster).
Legacy C++ code bases aren’t going to be rewritten in a new language. New C++ code bases can take advantage of alternative foundations that ignore std and many do. Most things should not be written in C++, but for some things C++ is unmatched currently and safer in practice than is often suggested with basic hygiene.
> Modern C++ conveniently allows you to replace many of the bits that have UB, per standard, with your own bits with defined behavior with zero overhead.
Okay, let's continue the example. Please demonstrate how to replace the addition operator on a primitive type. You can't within the confines of the language and that's a good thing in most cases. What you can do is pass -fwrapv, except that MSVC doesn't officially define a comparable flag.
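For completeness, the closest in-language workaround is wrapping rather than replacing the primitive - a sketch that leans on the C++20 rule that out-of-range unsigned-to-signed conversion is modular:

    #include <cstdint>

    struct WrapInt32 {
        std::int32_t v;
        friend WrapInt32 operator+(WrapInt32 a, WrapInt32 b) {
            // unsigned addition wraps by definition; the cast back to
            // int32_t is two's-complement modular since C++20
            return { static_cast<std::int32_t>(
                static_cast<std::uint32_t>(a.v) +
                static_cast<std::uint32_t>(b.v)) };
        }
    };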
> Developers don’t even have to be hardware experts, they just have to not use std for most things.
Signed overflow isn't a problem with std, the solution to it is in std. Null pointers aren't a problem with std, but the recommended fixes are again in std. Etc.
> If you have a foundational library that makes different and/or explicit guarantees than std, it is pretty easy to police that in a code base with automation.
As far as I'm aware, neither folly, absl, nor boost define custom integral types with defined overflow behavior. Please provide examples of anyone doing that.
> UB is a feature of the standard, not the implementation.
If you're writing "high assurance code", surely you're writing to the standard and not the implementation? The implementation's guarantees change with every upgrade, every new flag, and each time you build for different targets. I certainly try to avoid compiler assumptions as someone who writes safety critical code.
DMA being unsafe appears to be mostly a problem of the data lacking identification. If the shape of the data could be verified by the language runtime, instead of being an arbitrary stream of bytes whose meaning must be known by the recipient without any negotiation, this form of unsafety would disappear, since the receiving code simply needs to assert the schema, which could be as simple as checking a 32-bit integer.
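(The simplest form of that check, as a sketch with invented names:)

    #include <cstdint>
    #include <cstring>

    // assumes the first 4 bytes of the buffer carry a schema id
    bool payload_matches(const std::uint8_t* buf, std::uint32_t expected_schema) {
        std::uint32_t tag;
        std::memcpy(&tag, buf, sizeof tag);
        return tag == expected_schema;
    }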
Then all you need to do is also verify that the sending code adheres to the schema it specified.
This has very little to do with borrow checking. From the perspective of the borrow checker, a DMA call is no different from RPC or writing to a very wide pointer.
Hm, I'd be concerned about relying on autovectorization. How much better is 'better'? Compiler friends have told me that something permute-heavy like sorting is unlikely to work any time soon, if ever.
My biased opinion, from doing this full-time in C++, is that the C++ SIMD story is much further along, especially regarding mature libraries.
The only places where C++ failed to take C's crown have been UNIX clones (naturally, due to the symbiotic relationship) and embedded, where even modern C couldn't replace C89 + compiler extensions from the chip vendor; many shops are stuck in the past, even though most toolchains are already up to C++20 and C17 nowadays.
Rust is still too new for many folks to adopt, it depends on how much you would be willing to help grow the ecosystem, versus doing the actual application.
It will eventually get there, but it will also have the same issues as C++ regarding taking over from C in UNIX/POSIX and embedded; C++ had the advantage of being a kind of TypeScript for C in terms of adoption effort, being a UNIX language from AT&T designed to fit into the C ecosystem.
Depends exactly what you want to do. C is not very popular at all in professional settings - C++ is far more popular. I would say if you know Rust then C++ isn't very hard though. You'll write better C++ code too because you'll naturally keep the good habits that the Rust compiler enforces and the C++ compiler doesn't.
Just reading the first 1/5 of this made me bored. I started my career with C++, being heavy into it for 10 years. But I've been doing Swift for the last 10 at least. I had a job interview last week for a job that was heavy C++, with major reliance on templates and post-C++11... and it didn't go well. You know what? I don't give a shit.
It's crazy that with that amount of experience you wouldn't get the job, just because you lack some modern C++ info in your brain's memory. Stuff you could search for or ask an LLM in 5 seconds (or even look up in a freaking physical book). You'd probably be fully up to date within a few weeks.
Says a lot about the people hiring imo. Good luck to them finding someone who can recite C++ spec from memory.
If you last worked on pre-templates C++ and now need to work on a template-heavy codebase, you are effectively writing in a different language. I don't think it will be a few weeks of catching up.
Depends. For certain fields the pay is great and there’s a dearth of candidates.
For other fields there is also a dearth of candidates but the pay falls short and you’ll be leaving tens of thousands of dollars on the table compared to what you could get with other languages.
> Bjarne Stroustrup, AT&T Labs, Florham Park, NJ, USA
> Abstract
> This paper outlines the proposal for generalizing the overloading rules for Standard C++ that is expected to become part of the next revision of the standard. The focus is on general ideas rather than technical details (which can be found in AT&T Labs Technical Report no. 42, April 1, 1998).
That's why C++ is still around today, it was built on some solid principles. Bjarne is such a good language designer because he never abandoned it. Lesser designers make a language and start another in 5 or 10 years. Bjarne saw the value in what he created and had a sense of responsibility to those using it to keep making it better and take their projects seriously.
Whenever I have an idea and I start a project, I start with C++ because I know if the idea works out, the project can grow and work 10 years later.
Something about the formatting of the code blocks used is all messed up for me. It seems to be independent of browser; it happens in both Firefox and Chrome.
This is a Bjarne issue. For personal reasons he uses proportional fonts in his code blocks (in his texts) instead of monospaced, and the code snippets always look bad. I guess he is stuck in his ways; you just have to work around the ugly look.
The font is selected by the HTML/CSS of the ACM site, not by Bjarne.
There may be a bug in the CSS of the ACM site, but I think that it is more likely that anyone who does not see correctly formatted code on that page has forgotten to open the settings of their browsers and select appropriate default fonts for "serif", "sans serif" and "monospace".
As installed, most browsers very seldom have appropriate default fonts, you normally must choose them yourself.
In this case, a monospace font is mandatory for rendering the code on that page, because the indentation is done with spaces, which become too narrow when rendered with a proportional font. Whoever does not see a monospace font must have a proportional font set as their browser's default monospace font, and should correct this.
Bjarne has nothing to do with the HTML/CSS pages of the ACM site, which select for displaying the code the default monospace font that is configured in the browser of the user.
If a proportional font is used for rendering, the most likely cause is that the user has not configured the default monospace font in the settings of the browser.
This must depend on some settings of the browser and perhaps also on the locally installed typefaces.
On my Firefox on Linux, this HTML page is not rendered with any custom typefaces, but it uses those specified by me as defaults for serif/sans serif/monospace.
The C++ code is rendered in my browser with my default, i.e. with JetBrains Mono and there is nothing weird.
The code quoted by you is indented as expected, not as in your posting.
On my computer, I have mostly typefaces that I have bought myself and which are seldom encountered in most computers. I do not have any of the typefaces that are typically specified in CSS rules, i.e. none of the typefaces that can be found in default installations of Windows, Linux or MacOS.
So perhaps there is a bug in their CSS at the definition of "wp-block-code", which on other computers selects a bad typeface that is proportional, so that the narrow spaces make the indentation disappear. (Their wp-block-code says "font-family:inherit" and I have not searched further to see from where the wrong font-family may be inherited.)
Here, perhaps because that bad typeface cannot be found, the browser uses my default monospace font and the code is displayed fine.
Or else, perhaps you have not set in your browser a proper default for monospace fonts and it just takes Arial or other such inappropriate system font even for monospace.
It's typical Stroustrup style to write code in a variable-width font. I'd wager their CMS didn't offer an option to use a variable-width font in its code blocks, so the code went into normal paragraphs, whose whitespace is trimmed automatically.
I didn't see the author at first. However, immediately after seeing the code I checked for the author, because I was sure it was Stroustrup.
While you are right about the books of Stroustrup, here your inference is wrong, because Stroustrup cannot have anything to do with the CSS style sheets of the ACM Web site, which, in conjunction with the browser settings, determine the font used for rendering the text.
On my browser, all the code is properly indented, most likely because my browsers are configured correctly, i.e. with a monospace font set as the default for "monospace".
Whoever does not see indentation, most likely has not set the right default font in their browser.
The code blocks aren't in a preformatted tag like <pre>, so the whitespace gets collapsed. It seems the intention was to turn spaces into non-breaking spaces (&nbsp;), but however it was done, it got messed up, because lots of spaces didn't get converted.
Have you verified that your browsers have correct settings for their default fonts, i.e. a real monospace font as the default for "monospace"?
Here the code is displayed with my default monospace font, as configured in browsers, so the formatting is fine.
There are only two possible reasons for the bad formatting: a bug in the CSS of the ACM site that selects a bad font on certain computers, or a bad configuration of your own browser, where you have not selected appropriate default fonts.
This doesn't seem to be a code blog, but a general science communication blog. The editors may not be familiar with code syntax, and may simply be using a content management system and copy-pasting from source material.
> ACM, the Association for Computing Machinery, is the world's largest educational and scientific society, uniting computing educators, researchers and professionals to inspire dialogue, share resources and address the field's challenges.
Most of programming language conferences are organized by ACM.
Communications of the ACM has had unbelievably bad typography for code samples for decades (predating the web). No idea how this is allowed to continue.
> Between Rust and Zig, the problems of C++ have been solved much more elegantly
Those languages occupy different points in the design space than C++. And thus, in the general sense, neither of them, nor their combination, is "C++ with the problems solved". I know very little Rust and even less Zig. But I do know that there are various complaints about Rust, which are different than the kinds of complaints you get about C++ - not because Rust is bad, just because it's different in significant ways.
> It is so objectively horrible in every capacity
Oh, come now. You do protest too much... yes, it has a lot of warts. And it keeps them, since almost nothing is ever removed from the language. And still, it is not difficult to write very nice, readable, efficient, and safe C++ code.
> it is not difficult to write very nice, readable, efficient, and safe C++ code
That's a fine case of Stockholm Syndrome you've got there. In reality, it is hard. The language fights you every step of the way. That's because the point in the design space C++ occupies is a uniquely stupid one. It wants to have its cake and eat it too. The pipe-dream behind C++ is that you can write code in an expressive manner and magically have it also be performant. If you want fast code, you have to be explicit about many things. C++ ties itself in knots trying to be implicitly explicit about those things, and the result is just plain harder to reason about. If you want code that's safe and fast, you go with Rust. If you want code that's easy and fast, you go with Zig. If you want code that's easy and safe, you go with some GCed lang. Then if you want code that's easy, safe, and fast, you pick C++ and get code which might be fast. You cannot have all three things. Many other languages find an appropriate balance of these three traits to be worthwhile, but C++ does not. It's been 40 years since the birth of C++ and they are only just now trying to figure out how to make it compile well.
Even Cobol code hasn't been ported in its entirety, and the whole codebase at its peak was probably orders of magnitude smaller than C++'s. It's also far easier to port Cobol - it being used mostly for data processing and business logic - than C++, which was used for all manner of strange, esoteric, and complicated pieces of software requiring thousands to millions of man-hours to port (for example, most of Gecko and Blink).
C++ will be here forever, at least in some manner.
We can all at least appreciate that COBOL is something you try to get rid of where possible. If we took the same attitude to C++ as we do COBOL, then I think the issue would be much less severe.
That in and of itself is a failure. The decision to continually bolt more stuff onto this mess instead of developing a viable alternative is honestly painful. When you look at something like Zig, it gets you much of what C++ offers and in a way that doesn't cause you pain. Is the argument that Zig simply wasn't possible 30 years ago? I doubt it. As best I can tell, Zig comes as the result of a relatively experienced C programmer making the observation that you could improve C in a lot of easy ways. Were it not for the existing mess, he might have called his language C++. Instead a Scandinavian nut-job decided to heap some mess on top of C and everyone just went along with it.
Honestly, I am a happier and more productive developer since I left C++ behind for other languages. And it's not just the language, but the lack of ecosystem too. Things like the build system, managing dependencies, etc., are all such a pain compared to modern languages with good ecosystems (Rust, Flutter, Kotlin, etc.).
Rust doesn't "depend" on LLVM in the sense you seem to imagine; you can instead lower Rust's MIR to Cranelift (which is written in Rust), for example.
LLVM's optimiser is more powerful, and it handles unwinding, so today most people want LLVM but actually I think LLVM's future might involve more Rust.