> Huge. Only checking array bounds on every access degrades performance considerably.

I have not found this, at least in application code. There is usually at most a few percent difference between v[i] and v.at(i) (the latter checks bounds) with C++ std::vector, for example. So I almost always use .at() these days, and it does catch bugs.
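
A minimal sketch of the difference, using nothing beyond the standard library:

    #include <iostream>
    #include <stdexcept>
    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};
        // v[3] is undefined behavior: no check, possibly a silent bad read.
        // v.at(3) performs the bounds check and throws std::out_of_range.
        try {
            std::cout << v.at(3) << '\n';
        } catch (const std::out_of_range& e) {
            std::cout << "caught: " << e.what() << '\n';
        }
    }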




Well, there is the rub: the safe thing to do is more verbose, while the unsafe way is in muscle memory. Rust deliberately did it the other way around: plain indexing with v[i] is bounds checked (and panics on failure), while unchecked access is the verbose opt-in behind an unsafe block. But that is easy if you do a new design and don't need to retrofit a safe solution. No snark intended.


That is why one should always enable the compiler switches that give operator[]() the same bounds checking as at(), and then only disable them on a case-by-case basis, if profiling proves it worthwhile.
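
There is no single portable switch for this; each standard library has its own hardening macro, so the following invocations are a sketch of the idea rather than an exhaustive list:

    # libstdc++: lightweight assertions, including bounds checks on operator[]
    g++ -O2 -D_GLIBCXX_ASSERTIONS main.cpp

    # libc++ (recent versions): hardening modes with checked element access
    clang++ -O2 -D_LIBCPP_HARDENING_MODE=_LIBCPP_HARDENING_MODE_FAST main.cpp

    # MSVC: checked iterators/containers outside full debug builds; note the
    # value must be consistent across everything you link together
    cl /O2 /D_ITERATOR_DEBUG_LEVEL=1 main.cpp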


Wouldn't compiling with _GLIBCXX_ASSERTIONS (or the corresponding macro for your standard library of choice) be a better solution? It also catches quite a few more issues (dereferencing null smart pointers, empty optionals, etc.) while still being relatively lightweight.
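
As a small sketch of what it catches (the exact diagnostic text varies by libstdc++ version), dereferencing an empty optional, which is otherwise silent undefined behavior, becomes an assertion failure at runtime:

    // Built with: g++ -O2 -D_GLIBCXX_ASSERTIONS empty_opt.cpp
    #include <optional>

    int main() {
        std::optional<int> o;  // disengaged
        // With assertions enabled, *o aborts with an assertion failure;
        // without them this is undefined behavior and may "work" silently.
        return *o;
    }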


Thanks, I didn't know (or didn't remember) about this, but it's not clear from the docs that it bounds-checks operator[]. I don't find it difficult to use .at(). I'll try the debug mode too, but when I bother writing something in C++ it's usually because I actually care about its speed, so I don't want the overhead to be too bad.

https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_macros...


Not the OP, but that is my approach: keep the bounds checking settings from VC++ turned on in release builds as well.

And before STL was a thing, all the custom types I had were bounds checked by default.
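
A sketch in modern C++ of that idea; this is hypothetical code, not the poster's actual type:

    #include <cassert>
    #include <cstddef>

    // Hypothetical fixed-size array that checks every access by default.
    template <typename T, std::size_t N>
    class CheckedArray {
    public:
        T& operator[](std::size_t i) {
            assert(i < N && "index out of bounds");  // compiled out only with NDEBUG
            return data_[i];
        }
        const T& operator[](std::size_t i) const {
            assert(i < N && "index out of bounds");
            return data_[i];
        }
        std::size_t size() const { return N; }

    private:
        T data_[N];
    };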



