This seems like a pretty good compromise on integer overflow. I'm happy to see the Rust core team taking the problem seriously after seeing so much discussion about it.
I love the compromise on integer overflow. Default overflow checking in debug mode will keep most Rust programs free of overflow, and that should ensure that if hardware designers ever come to their senses and implement free hardware overflow traps, Rust code can enable them without issue.
Rust's position is reasonable, but it won't keep most programs free of overflow. Overflow and underflow bugs typically require malicious input to expose; casual testing usually doesn't encounter them.
Also, in release builds they promise an undefined value, not undefined behavior. By my reading, hardware-assisted overflow checks cannot log errors or do anything else useful under this policy. (Saturating arithmetic is probably the best choice for that undefined value, since a saturated result is the most likely to trigger an out-of-bounds error later.)
Their saving grace is the other safety checks (e.g. bounds checking), plus their assertion that "we reserve the right to make checking stricter in the future," which leaves room to strengthen this policy later.
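For concreteness, here's a minimal sketch of the explicit opt-in operations (assuming today's standard library, where every integer type carries checked/saturating/wrapping/overflowing variants); these behave the same regardless of build mode:

    fn main() {
        let x: u8 = 250;

        // Checked: report overflow as None instead of producing a value.
        assert_eq!(x.checked_add(10), None);
        assert_eq!(x.checked_add(5), Some(255));

        // Saturating: clamp at the numeric bounds.
        assert_eq!(x.saturating_add(10), 255);

        // Wrapping: explicitly request two's-complement wraparound.
        assert_eq!(x.wrapping_add(10), 4);

        // Overflowing: wrapped value plus a "did it overflow?" flag.
        assert_eq!(x.overflowing_add(10), (4, true));
    }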
> Since overflow cannot cause crashes or data races (outside of unsafe code), we can skip these checks without endangering Rust's core value proposition of safe systems programming. Whenever checks are disabled, overflow will yield an undefined value (this is distinct from--and much more limited than--the "undefined behavior" you get in C).
This is too dismissive of the seriousness of problems that might occur due to unchecked overflow [1].
Nobody is attempting to dismiss the idea that overflow is a problem. The fact that we have had innumerable discussions over this very topic should be an indication that this is something Rust cares about and will keep investigating as compiler support improves. (It's not stated outright, but I suspect that having overflow yield an "undefined value" is meant to leave open the possibility of making overflow saturating-by-default sometime in the future, without breaking backwards compatibility, if the performance hit turns out to be acceptable.)
1. Overflow doesn't cause memory unsafety in Rust, so trying to imply that it's not a genuinely safe language is incorrect (see the sketch after this list).
2. Even if it did, saturating on overflow is exactly as safe as trapping, and has the added benefit of not introducing an untold number of new ways that your program can panic.
3. Trapping on overflow is oversold as a solution to logical errors because you're still opening yourself up to approximately four billion (or 18 quintillion, depending on width) invalid states before your program bites it. If you want a real solution, stick numeric bounds in your type system.
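On point 1, a small sketch of that safety net in action (written with wrapping_add so it behaves identically in debug and release): even when the arithmetic goes wrong, safe Rust turns the mistake into a failed bounds check rather than an out-of-bounds access.

    fn main() {
        let data = [10u8, 20, 30, 40];

        // An index computation that wraps around.
        let idx = usize::MAX.wrapping_add(10); // wraps to 9

        // The arithmetic is wrong, but the worst outcome in safe Rust is a
        // failed bounds check (None here, or a panic with data[idx]) --
        // never a read outside the array.
        assert_eq!(data.get(idx), None);
    }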
So, to be clear, I don't really care about memory safety here--there are lots of reasons to care, mind you, but that's not my concern.
It's that, quite simply, not being able to easily catch integer arithmetic errors is a really annoying flaw in a language that needs to do systems work--especially if it's used in places where, say, you are decoding buffers or handling IO or basically getting data from an untrusted/faulty source.
It could even be done as a basic addition to the standard library...I frankly don't care how. It's just a really obvious wart to anyone who's ever had to do safe integer work in the lingua franca of languages, C.
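To make the comparison with C concrete, here's a hedged sketch of that kind of safe integer work using the standard checked methods (the function and field names are hypothetical):

    /// Total size implied by an untrusted header: count * record_size +
    /// header_len, with overflow reported instead of silently wrapped.
    fn total_size(count: u32, record_size: u32, header_len: u32) -> Option<u32> {
        count.checked_mul(record_size)?.checked_add(header_len)
    }

    fn main() {
        // A well-formed header.
        assert_eq!(total_size(10, 64, 16), Some(656));

        // A malicious header whose product overflows -- the classic C bug
        // where an undersized allocation is followed by a heap overflow.
        assert_eq!(total_size(0x2000_0000, 64, 16), None);
    }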
Just because the program doesn't crash doesn't mean it's being useful.
It sounds like there is confusion between "trapping" and "wrapping" here.
"Introducing 4 billion [...]" sounds like a description of the failure modes of wrapping. In contrast, trapping interrupts the flow of the program, so there are zero "invalid states before your program bites it".
Trapping seems like a very appealing solution from the perspective of application code. The primary downsides are that it's somewhat inefficient on today's popular hardware architectures and somewhat inconvenient for optimizers.
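A quick illustration of the trapping/wrapping difference, assuming current Rust semantics where plain arithmetic traps when overflow checks are enabled (the debug-build default, or -C overflow-checks=on):

    fn bump(x: i32) -> i32 {
        // Traps (panics with "attempt to add with overflow") when overflow
        // checks are enabled, rather than continuing in an invalid state.
        x + 1
    }

    fn main() {
        // Wrapping: the program keeps running with a nonsensical value.
        assert_eq!(i32::MAX.wrapping_add(1), i32::MIN);

        // Trapping: execution stops at the bad calculation itself, so there
        // is no invalid state to propagate.
        let _ = bump(i32::MAX);
    }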
They are invalid states because, with the exception of indexing into an array or doing something else directly tied to the memory size of your platform, INT_MAX is never the point at which the value stored in a fixed-width numeric type ceases to make sense. If I have an RPG where I want to cap a character's stats at 99 (been playing too much Dark Souls, I think...) yet fail to implement a manual sanity check, I open myself up to, at best, 128 - 99 = 29 invalid states with an 8-bit type (and in the worst case, if I used a u64, the aforementioned 18 quintillion). My program is in an invalid state the whole time, yet trapping arithmetic can't help me, because nothing ever overflowed. This is why I said that to actually solve this problem you need numeric bounds in your type system.
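For what that might look like, here's one possible sketch of such a bounded type (the names are made up); note that 80 + 30 = 110 never overflows the u8, so trapping arithmetic stays silent and only the type-level bound catches it:

    /// A stat that can only ever hold 0..=99, enforced at construction.
    #[derive(Debug, Clone, Copy, PartialEq)]
    struct Stat(u8);

    impl Stat {
        const MAX: u8 = 99;

        fn new(value: u8) -> Option<Stat> {
            if value <= Self::MAX { Some(Stat(value)) } else { None }
        }

        /// Buffs saturate at the cap instead of producing an invalid stat.
        fn buffed(self, amount: u8) -> Stat {
            Stat(self.0.saturating_add(amount).min(Self::MAX))
        }
    }

    fn main() {
        let strength = Stat::new(80).unwrap();

        // 80 + 30 = 110 doesn't overflow a u8, so no trap would fire --
        // the bound has to live in the type itself.
        assert_eq!(strength.buffed(30), Stat(99));
        assert_eq!(Stat::new(150), None);
    }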
When I was in high school, my friend and I played a robot fighting game, where you had some fixed energy and every action would consume some of it. My friend found a way to exercise a large number of expensive actions in one turn, resulting in energy underflow. With some care, he found a way to achieve the maximum possible energy. At no point was the energy out of bounds.
Sanity checks can detect invalid states, but not invalid calculations (which have intermediate values that may exceed the bounds - consider calculating the average stat in your Dark Souls example). You can get an unexpectedly valid state from a calculation that overflows, and thereby pass a manual safety check. Trapping arithmetic will catch these.
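A minimal sketch of that average-stat case (u8 stats, each individually <= 99): the running sum wraps even though every input and the true result are in range, so the bogus answer passes any later sanity check, while checked arithmetic reports it:

    fn average_wrapping(stats: &[u8]) -> u8 {
        let sum = stats.iter().fold(0u8, |acc, &s| acc.wrapping_add(s));
        sum / stats.len() as u8
    }

    fn average_checked(stats: &[u8]) -> Option<u8> {
        let sum = stats.iter().try_fold(0u8, |acc, &s| acc.checked_add(s))?;
        Some(sum / stats.len() as u8)
    }

    fn main() {
        let stats = [99u8, 99, 99, 99]; // each individually valid

        // 99 * 4 = 396 wraps to 140 in a u8, so the "average" is 35:
        // a perfectly plausible stat, and completely wrong.
        assert_eq!(average_wrapping(&stats), 35);

        // Checked arithmetic surfaces the overflow instead of hiding it.
        assert_eq!(average_checked(&stats), None);
    }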
You are correct that trapping on overflow isn't useful as a substitute for bounds checking on variables that have a well-defined maximum and/or minimum value. However, that doesn't mean overflow checking is useless. It remains extremely useful to prevent silent incorrect behavior when an otherwise unbounded value hits implementation limits. In fact, I'd call it essential to any language that aims for the level of robustness Rust aspires to.
I'm not trying to say it's useless. :) I think it's very valuable, but we need to be honest about its shortcomings rather than viewing it as a sheer win. A language like Rust is completely worthless if it's not fast; we've got a dozen languages that are already memory-safe and within 2x the speed of C. Given that Rust's plethora of safety mechanisms mean that numeric overflow can't cause memory unsafety, I completely understand why they would take the practical route rather than the theoretically-perfect route (even if, as predicted, they get crucified for it by armchair language designers).
3. What numeric bound should YouTube have placed on their view counter (in the case I linked earlier)? 2 billion? Why? If the answer is "Because that's all that fits in an i32" then we're right back where we started: overflow takes many programs directly from a valid state to an invalid one.
Some values that you wish to model don't have any logical upper bound. In this case the correct solution is to forgo fixed-size integers entirely and use an arbitrary-precision integer. If that is impractical for your given domain, then you need to select the largest fixed-size numeric type that is practical and resign yourself to some degree of potential incorrectness.
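As a sketch of that second option, here's a hypothetical view counter on the largest practical fixed-size type, with saturating increments so hitting the implementation limit pins the count rather than wrapping back past zero (arbitrary precision would instead mean reaching for a big-integer type):

    struct ViewCounter(u64);

    impl ViewCounter {
        fn record_view(&mut self) {
            // Saturate at u64::MAX instead of wrapping around to 0.
            self.0 = self.0.saturating_add(1);
        }
    }

    fn main() {
        let mut views = ViewCounter(u64::MAX - 1);
        views.record_view(); // now u64::MAX
        views.record_view(); // stays pinned at u64::MAX
        assert_eq!(views.0, u64::MAX);
    }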