Performance likely decreased when he removed bounds checking because asserted bounds are very useful to a compiler. Essentially, the compiler emits code like this:
1. if (x >= 0) && (x < arr_len(arr))
2. get element from array index x
3. else
4. throw exception
5. do more stuff
The compiler deduces that at line 5, 0 <= x < arr_len(arr). From that it can deduce that abs(x) is a no-op, that 2*x won't overflow (because arrays can only have 2^32 elements), etc. Without bounds checking, the compiler emits:
1. get element from array index x
2. do more stuff
So the compiler doesn't know anything about x, which is bad. The solution, which apparently is not implemented in Rust (or LLVM, idk), is to emit code like the following:
1. assert that 0 <= x < arr_len(arr)
2. get element from array index x
3. do more stuff
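For illustration, here is a minimal Rust sketch (the function and names are my own, not from the post) of the same idea: one up-front assert establishes the range fact, and the optimizer can then fold away the per-index checks on the accesses that follow.

```rust
// Minimal sketch (hypothetical example): the assert! establishes
// x + 1 < arr.len() once, so the compiler knows both indexing
// operations below are in bounds and can elide their checks.
fn sum_pair(arr: &[u32], x: usize) -> u32 {
    assert!(x + 1 < arr.len()); // range fact established here
    arr[x] + arr[x + 1]         // checks can be folded away
}

fn main() {
    let data = [1u32, 2, 3, 4];
    println!("{}", sum_pair(&data, 1)); // 2 + 3
}
```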
If what you said is true, then this is not Rust-specific, and we should observe a performance improvement in any language that introduces bounds checking during code emission. Is there a compiler flag to turn on bounds checking in GCC for C++ programs? Is there any research comparing performance before and after that flag?
> The solution, which apparently is not implemented in Rust (or LLVM, idk), is to emit code like the following:
The solution to what? You appear to be suggesting that compilers should insert bounds checks even when the programmer doesn't ask for them, which, sure, I'm down for that, but the whole point of this discussion is that the bounds checking that currently gets done has a negligible cost in practice.
Maybe I wasn't clear. The compiler should compile code without bounds checking as if the bounds checks were present: drop the checks themselves but keep the range assumptions they would have established. The likely reason for the slowdown reported in the blog post is that Rust doesn't do this, which may be a compiler bug.
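The proposal can be approximated by hand in Rust today, at least as I understand it: remove the emitted branch but hand the optimizer the range fact directly. A sketch, assuming a recent toolchain with `std::hint::assert_unchecked` (the function below is my own example):

```rust
// Sketch of the proposal (my own approximation): no check is
// emitted, but the optimizer is still told that x is in bounds.
fn scale(arr: &[u32], x: usize) -> u32 {
    // SAFETY: the caller must guarantee x < arr.len(); the hint
    // emits no branch but communicates the range fact to LLVM.
    unsafe { std::hint::assert_unchecked(x < arr.len()) };
    arr[x] * 2 // the indexing check here can now be elided
}

fn main() {
    let data = [3u32, 4, 5];
    println!("{}", scale(&data, 2)); // 5 * 2
}
```

With the hint in place, the source-level indexing check compiles away, which is roughly what "compile as if bounds checking occurred" would mean if done automatically.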