Depending on the code, I've seen performance increases above 100x in some cases. While that's not exactly the norm, benchmarking Rust in debug mode is absolutely pointless even as a rough estimate.
This can happen in languages that use dynamic constructs that can't be optimized out. For example, there was a PHP-to-native compiler (HipHop/HPHPc) that lost out to faster interpreters and JITs.
Apple's Rosetta 2 translates x86-64 to aarch64 code that runs surprisingly fast, despite being mostly a straightforward translation of instructions rather than something clever like a recompiling, optimizing JIT.
And plain old C is relatively fast without optimizations, because it doesn't rely on abstraction layers being optimized away.
The optimisation passes are expensive (though not the largest contributor to overall compile time).
Debug mode is designed to build as-fast-as-possible while still being correct, so that you can run your binary (with debug symbols) ASAP.
Overflow checks are present even in release mode, and some write-ups seem to indicate they have less overhead than you’d think.
Rust lets you configure your Cargo profiles to apply some optimisation passes even in debug builds, if you wish. There's also a setting to have your dependencies optimised (even in debug) if you want. The Bevy tutorial walks through doing this as a concrete example.
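For reference, the relevant Cargo.toml settings look roughly like this; the exact opt-levels are just illustrative, along the lines of what the Bevy setup guide suggests:

```toml
# Lightly optimise your own code in debug builds so it stays debuggable
# but isn't painfully slow.
[profile.dev]
opt-level = 1

# Fully optimise all dependencies, even in debug builds; this section
# doesn't affect your own crate.
[profile.dev.package."*"]
opt-level = 3
```

With that in place, a plain `cargo build` still uses the dev profile (debug assertions and debug info on), while `cargo build --release` remains fully optimised as before.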
That's not right: in release mode Rust only catches overflow for values known at compile time. In debug mode, all arithmetic operations are checked for overflow at runtime.
Yes, optimization is disabled by default in debug mode, which makes your code more debuggable. Overflow checks are also enabled in debug mode but disabled by default in release mode. Bounds checking is present in release mode as well as debug mode, though it can sometimes be optimized away.
Debug builds also embed debug information in the binary, which leads to a larger file size, but that shouldn't meaningfully affect performance except in very simple/short programs.
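A minimal sketch of the overflow-check difference described above (`std::hint::black_box` is only there to keep the value out of compile-time constant evaluation, so the behaviour depends purely on the runtime checks):

```rust
use std::hint::black_box;

fn main() {
    // Keep the value opaque to the compiler so it can't be folded away
    // at compile time.
    let x: u8 = black_box(u8::MAX);

    // With overflow checks on (the debug default) this panics with
    // "attempt to add with overflow"; with them off (the release
    // default) the addition wraps around to 0.
    let y = x + 1;
    println!("{y}");
}
```

If you do want the checks in release builds too, Cargo accepts `overflow-checks = true` under `[profile.release]`.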