Hacker News

STL is mostly garbage, which is a big reason it's not used in places that actually need performance, both for build times and runtime.

Yes, it's embarrassing. I think it should be noted that the performance mantra is mostly for show with C++, though. The same people who spout it will also blatantly use pessimizing language features just because, when normal, procedural code would've done the job faster and ultimately simpler.

I think the performance overhead of a few things in C++ make sense, but in general you get performance in C++ by turning things off and abstaining from most of the features. Exceptions complicate mostly everything, so the best thing to do is to turn them off and not rely on anything that uses them, for example.

Modern C++ isn't fast and most of C++ wasn't even before the modern variant was invented. The C "subset" is.




> STL is mostly garbage, which is a big reason it's not used in places that actually need performance, both for build times and runtime.

It is NOT garbage. It is more than sufficient for the 99% of devs who need turnkey data structures and good-enough performance (meaning faster than 99% of other programming languages already).

If what you need is sub-microsecond perf, then yes, write your own data structures.

BTW, you will very likely have to do that in any language anyway, because it is impossible to design data structures that stay fast forever. They (almost) all become obsolete when architectures evolve. Red-black trees were the state-of-the-art, taught-in-school data structure 10 years ago; nowadays they are useless garbage if you seek performance, since all that pointer-chasing defeats the cache.


> It is more than sufficient for the 99% of devs who need turnkey data structures and good-enough performance (meaning faster than 99% of other programming languages already).

I really don't get this argument. If you don't need pedal-to-the-metal performance then why are you using C++ in the first place? (Unless, of course, your answer is "legacy code".)

C++ is being touted as being high performance, but basically every standard data structure besides `std::vector` is garbage for high performance, pedal-to-the-metal code. And not only data structures - `std::regex`'s performance is bad, `std::unique_ptr` doesn't optimize as well as a plain pointer, no vendor has a best-in-class `std::hash` implementation (they're neither DoS-safe nor the fastest), etc.

> BTW, you will very likely have to do that in any language anyway, because it is impossible to design data structures that stay fast forever. They (almost) all become obsolete when architectures evolve.

Do you, though? Rust already replaced their standard hash map implementation with a completely different one which was faster, so it shows that it can be done.


> I really don't get this argument. If you don't need pedal-to-the-metal performance then why are you using C++ in the first place? (Unless, of course, your answer is "legacy code".)

There are many things other than pure compute-bound CPU performance that can drive you to a GC-less language like C++. Fine-grained memory usage is one of them; latency control is another.

> `std::regex`'s performance is bad, `std::unique_ptr` doesn't optimize as well as a plain pointer

std::regex is not defensible, especially when much better implementations are already available (https://github.com/hanickadot/compile-time-regular-expressio...)

However, do you have any benchmark / source for std::unique_ptr? You are the first one I've heard criticize it.

> Do you, though? Rust already replaced their standard hash map implementation with a completely different one which was faster, so it shows that it can be done.

Rust does not have 25 years of codebase behind it. We can talk about that again when it is 10 years older.

C++ cannot afford to randomly break the API of one of its core STL components just to hypothetically gain a bit of performance.

It can be done, but it should be done with a deprecation process and an alternative implementation, as was done for auto_ptr -> unique_ptr.


Rust didn't break the API of HashMap. The trick is that the STL decided that certain algorithmic requirements were a part of the interface. For std::map, the case here is that it must be sorted. Its API also means you can't be polymorphic over a hashing strategy. This constrains possible valid algorithms. Rust made the opposite choices.

And so they did exactly what you say, for that reason. std::unordered_map is a better comparison.


> However, do you have any bench / source for std::unique_ptr ? You are the first one I hear giving critics on it.

Chandler Carruth talks about this at CppCon 2019 [0]. From a quick review he says part of the reason std::unique_ptr isn't zero-cost right now is:

- Non-trivial objects passed by value (like std::unique_ptr) are passed on the stack instead of in registers, and changing that would be an ABI break

- No destructive move means extra temporaries will be created/retained

[0]: https://youtu.be/rHIkrotSwcc?t=1046


> Exceptions complicate mostly everything, so the best thing to do is to turn them off and not rely on anything that uses them, for example.

Look forward to all your users ignoring your return codes and never calling valid() on objects that can only signal construction failure that way.

Also the impact of exceptions on performance is overblown, even in high perf situations:

https://news.ycombinator.com/item?id=20342183

Throwing is slow, but throwing is supposed to be rare. Don't use it as flow control.


About 1% of developers actually need to deliver µs fast code.

The remaining 99% are happy to be able to deliver fast enough portable code without having to reinvent data structures all the time.





