Additionally, I still cannot understand why they didn't make iterators safe from the very beginning. In the aliasing examples, some iterators must alias and some must not. With safe iterators the checks would be trivial: only the base pointers need to be compared. This could even be done at compile time, when all iterator bases are known at compile time.
Their argument then was that iterators are just simple pointers, not a struct of two values base + cur. You don't want to pass two values in two registers, or even on the stack. Ok, but then don't call them iterators, call them mere pointers.
With safe iterators, you could even add the end or size, and you wouldn't need to pass begin() and end() to a function to iterate over a container or range. Same for ranges.
An iterator should just have been a range (with a base), so all checks could be done safely, the API would look sane, and calls could be optimized when some values are known at compile time. Now we have the unsafe iterators, with the aliasing mess, plus ranges, which are still unsafe and ill-designed. Thankfully I'm not in the library working group, because I would have had heart attacks long ago over their incompetence.
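A minimal sketch of what such a "fat" iterator could look like (safe_iter and same_container are hypothetical names, not an existing STL or CTL API):

    #include <cassert>

    // Hypothetical "fat" iterator: it carries its base and end, not just a cursor.
    template <class T>
    struct safe_iter {
        T* base;  // start of the underlying storage
        T* end;   // one past the last element
        T* cur;   // current position

        T& operator*() const {
            assert(cur >= base && cur < end);  // bounds check needs no extra arguments
            return *cur;
        }
        safe_iter& operator++() { ++cur; return *this; }
    };

    // Two such iterators refer to the same container iff their bases match, so the
    // aliasing check is a single pointer comparison, constant-foldable when the
    // bases are known at compile time.
    template <class T>
    bool same_container(const safe_iter<T>& a, const safe_iter<T>& b) {
        return a.base == b.base;
    }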
My CTL (the STL in C) uses safe iterators, and is still comparable in performance and size to the C++ containers. Wrong aliasing and wrong API usage are detected, in many cases even at compile time.
We're talking about a committee that still releases "safety-improving" constructs like std::span without any bounds checking. Don't think about it too much.
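For reference, as of C++20/23 std::span's operator[] only states a precondition rather than performing a check, so going out of range is plain UB:

    #include <array>
    #include <span>

    int main() {
        std::array<int, 4> a{1, 2, 3, 4};
        std::span<int> s{a};

        int ok  = s[3];  // fine
        int bad = s[7];  // out of range: no bounds check, undefined behaviour
        return ok + bad;
    }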
The C++ committee and standard library folks are in a hard spot.
They have two goals:
1. Make primitives in the language as safe as they can.
2. Be as fast as corresponding completely unsafe C code.
These goals are obviously in opposition. Sometimes, if you're lucky, you can improve safety completely at compile time and after the safety is proven, the compiler eliminates everything with no overhead. But often you can't. And when you can't, C++ folks tend to prioritize 2 over 1.
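One example of the lucky case, where the safety is proven entirely at compile time and costs nothing at run time, is std::get on a std::array: an out-of-range index simply doesn't compile.

    #include <array>

    int main() {
        std::array<int, 3> a{10, 20, 30};
        int x = std::get<2>(a);    // index checked at compile time, zero runtime cost
        // int y = std::get<3>(a); // would fail to compile: index out of bounds
        return x;
    }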
You could definitely argue that that's the wrong choice. At the same time, that choice is arguably the soul of C++. Making a different choice there would fundamentally change the identity of the language.
But I suspect that the larger issue here is cultural. Every organization has some foundational experiences that help define the group's identity and culture. For C++, the fact that the language was able to succeed at all instead of withering away like so many other C competitors did is because it ruthlessly prioritized performance and C compatibility over all other factors.
Back in the early days of C++, C programmers wouldn't sacrifice an ounce of performance to get onto a "better" language. Their identity as close-to-the-metal programmers was based in part on being able to squeeze more out of a CPU than anyone else could. And, certainly, at the time, that really was valuable when computers were three orders of magnitude slower than they are today.
That culture still pervades C++ where everyone is afraid of a performance death of a thousand cuts.
So the language has sort of wedged itself into an untenable space where it refuses to be any slower than completely guardrail-less machine code, but where it's also trying to be safer.
I suspect that long-term, it's an evolutionary dead end. Given the state of hardware (fast) and computer security failures (catastrophically harmful), it's worth paying some amount of runtime cost for safer languages. If you need to pay an extra buck or two for a slightly faster chip, but you don't leak national security secrets and go to jail, or leak personal health information and get sued for millions... buy the damn chip.
Ironically, in the early days of C, it was only about as good as the Modula-2 and Pascal dialects at "squeezing more out of a CPU than anyone else could".
All that squeezing was made possible by tons of inline assembly extensions, which Modula-2 and Pascal dialects also had.
The squeezing only really took off when C compiler writers decided to turn the exploitation of UB in the optimizer up to 11, with the consequences we have had to suffer for the last 20 years.
The primary goal of WG21 is and always has been compatibility, and particularly compatibility with existing C++ (though compatibility with C is important too).
That's why C++11 move is not very good. The safe "destructive" move you see in Rust wasn't some novelty that had never been imagined previously; it isn't slower or more complicated, and it's exactly what programmers wanted at the time. However, C++ could not deliver it compatibly, so they got the C++11 move (which is more expensive and leaves a trail of empty-husk objects behind) instead.
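To make the "empty husk" point concrete: after a C++ move the source object still exists, must still be in a valid state, and still gets destroyed later, none of which a destructive move would require.

    #include <string>
    #include <utility>

    int main() {
        std::string a = "a fairly long string that lives on the heap";
        std::string b = std::move(a);  // a is now a valid-but-unspecified "husk"

        a.size();       // legal: the husk must still answer queries
        a = "reused";   // legal: and accept new values
        return 0;
    }   // and the husk's destructor still runs here, for every moved-from object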
You're correct that the big issue is culture. Rust's safety culture is why Rust is safe, Rust's safety technology merely† enables that culture to thrive and produce software with excellent performance. The "Safe C++" proposal would grant C++ the same technology but cannot gift it the same culture.
However, I think in many and perhaps even most cases you're wrong that C++ is preferring better performance over safety. Instead, the committee has learned to associate unsafe outcomes with performance and has falsely concluded that unsafe outcomes somehow enable or engender performance, when that's often not so. The ISO documents do not specify a faster language; they specify a less safe language and just hope that's faster.
In practice this has a perverse effect. Knowing the language is so unsafe, programmers write paranoid software in an attempt to handle the many risks haunting them. So you will find some Rust code which has six run-time checks to deliver safety - from safe library code, and then the comparable C++ code has fourteen run-time checks written by the coder, but they missed two, so it's still unsafe but it's also slower.
I read a piece of Rust documentation for an unsafe method defined on the integers the other day which stuck with me for these conversations. The documentation points out that instead of laboriously checking if you're in a case where the unsafe code would be correct but faster, and if so calling the unsafe function, you can just call the safe function - which already does that for you.
† It's very impressive technology, but I say "merely" here only to emphasise that the technology is worth nothing without the culture. The technology has no problem with me labelling unsafe things (functions, traits, attributes now) as safe, it's just a label, the choice to ensure they're labelled unsafe is cultural.
Rust is unique in being both safe and nearly as fast as idiomatic C/C++. This is a key differentiator between Rust and languages that rely on obligate GC or obligate reference counting for safety, including Go, Swift, OCaml, etc.
It's not clear to me that GC is actually slower than manual memory management (as long as you allow for immutable objects). Allocation/free is slow, so most high-performance programs don't have useless critical-path allocations anyway.
For memory there are definitely cases where the GC is faster. This is trivially true.
However, GC loses determinism, so if you have non-memory resources where determinism matters you need the same mechanism anyway, and something like a "defer" statement is a poor substitute for the deterministic destruction in languages which have that.
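A sketch of that determinism in C++ terms: the file and the lock below are released at an exact, statically known point, in a fixed order, on every path out of the function, which neither a GC finalizer nor a function-scoped defer promises as precisely.

    #include <cstdio>
    #include <memory>
    #include <mutex>

    std::mutex m;

    void append_record(const char* path) {
        std::lock_guard<std::mutex> lock(m);                // resource #1: the lock
        auto close = [](std::FILE* fp) { std::fclose(fp); };
        std::unique_ptr<std::FILE, decltype(close)>
            f(std::fopen(path, "a"), close);                // resource #2: the file
        if (!f) return;                 // early return: both still released, in order
        std::fputs("record\n", f.get());
    }   // file closed, then lock released, exactly here, every time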
Determinism can be much more important than peak performance for some problems. When you see people crowing about "lock free" or even "wait free" algorithms, the peak performance on these algorithms is often terrible, but that's not why we want them. They are deterministic, which means we can say definite things about what will happen and not just hand wave.
> My CTL (the STL in C) uses safe iterators, and is still comparable in performance and size to C++ containers
I wonder what "comparable" means there? Because, for instance, MSVC, libstdc++ and libc++ all support some kind of safe iterators, but they are definitely not usable in production due to the heavy performance cost incurred.