STL is mostly garbage, which is a big reason it's not used in places that actually need performance, both for build times and runtime.
Yes, it's embarrassing. I think it should be noted that the performance mantra is mostly for show with C++, though. The same people who spout it will also blatantly use pessimizing language features just because, when normal, procedural code would've done the job faster and ultimately simpler.
I think the performance overhead of a few things in C++ makes sense, but in general you get performance in C++ by turning things off and abstaining from most of the features. Exceptions complicate almost everything, so the best thing to do is to turn them off and not rely on anything that uses them, for example.
Modern C++ isn't fast and most of C++ wasn't even before the modern variant was invented. The C "subset" is.
> STL is mostly garbage, which is a big reason it's not used in places that actually need performance, both for build times and runtime.
It is NOT garbage. It is more than sufficient for the 99% of devs who need turnkey data structures and good-enough performance (meaning faster than 99% of other programming languages already).
If what you need is sub-microsecond perf, then yes, redefine your data structures.
BTW, you will very likely have to do that in any language anyway, because it is impossible to design a fast, forever-living data structure. They (almost) all become obsolete when architectures evolve. Red-black trees were the state-of-the-art data structure, taught at school, 10 years ago; they are useless garbage nowadays if you're seeking performance.
> It is more than sufficient for the 99% of devs that need key in hand data structure and good enough performance (Meaning faster than 99% of over programming languages already).
I really don't get this argument. If you don't need pedal-to-the-metal performance then why are you using C++ in the first place? (Unless, of course, your answer is "legacy code".)
C++ is being touted as high performance, but basically every standard data structure besides `std::vector` is garbage for high-performance, pedal-to-the-metal code. And not only data structures - `std::regex`'s performance is bad, `std::unique_ptr` doesn't optimize as well as a plain pointer, no vendor has a best-in-class `std::hash` implementation (they're neither DoS-safe nor the fastest), etc.
> BTW, you will very likely have to do that in any language anyway. Because it is impossible to design fast forever-living data structure. They (almost) all become obsolete when architectures evolve.
Do you, though? Rust already replaced their standard hash map implementation with a completely different one which was faster, so it shows that it can be done.
> I really don't get this argument. If you don't need pedal-to-the-metal performance then why are you using C++ in the first place? (Unless, of course, your answer is "legacy code".)
There are many things other than pure compute-bound CPU performance that can drive you to a GC-less language like C++. Fine-grained memory usage is one of them; latency control is another.
> `std::regex`'s performance is bad, `std::unique_ptr` doesn't optimize as well as a plain pointer
However, do you have any benchmark / source for std::unique_ptr? You are the first one I've heard criticize it.
> Do you, though? Rust already replaced their standard hash map implementation with a completely different one which was faster, so it shows that it can be done.
Rust does not have 25 years of code base behind it. We can talk about that again when it's 10 years older.
C++ cannot afford to randomly break the API of one of its core STL components just to hypothetically gain a bit of performance.
It can be done, but it should be done with a deprecation process and an alternative implementation, as was done for auto_ptr -> unique_ptr.
Rust didn't break the API of HashMap. The trick is that the STL decided that certain algorithmic requirements were a part of the interface. For std::map, the case here is that it must be sorted. Its API also means you can't be polymorphic over a hashing strategy. This constrains possible valid algorithms. Rust made the opposite choices.
And so they did exactly what you say, for that reason. std::unordered_map is a better comparison.