> First up - bounds checks are simply not expensive, that was true a decade or two ago, it is not now. Similarly null checks are not expensive either
They're not expensive because they are optimized away. Branches are still relatively expensive in general. Prediction only goes so far.
> Take integer overflow: there is no reason that that should be UB
Yes there is: it allows for size expansion of the type to match the current architecture. Like promoting a 32 bit index into a 64 bit index for optimal array accesses on a 64 bit CPU. Which you can't do if you need to guarantee it wraps at 32 bits.
It's not how it overflows that's the problem, it's when it overflows. Defining it means the wrap has to happen at the exact width of the type, instead of at the optimal width for the usage & registers involved.
> and the impact of pretending that this isn't the case has been numerous security exploits over the year.
And many of those exploits would still exist if overflow were defined, because the fact that the overflow happened at all was the security issue, not the resulting UB failure mode. UB actually helps here: the overflow can categorically be called a bug, and you can have things like -ftrapv catch it. The fact that unsigned wrap is defined, not UB, is itself a source of security bugs - things like the good ol' `T* data = malloc(number * sizeof(T));` overflow bug. There's no UB there, so you won't get any warnings or UBSan traps. Nope, that's a well-defined security bug. If only unsigned wrapping had been UB, that could have been caught...
But if you want code that is hyper-consistent and behaves absolutely identically everywhere regardless of the hardware it's running on (which things like crypto code actually do want), then C/C++ is just the wrong language for you. By a massive, massive amount. Tightening up the UB and pushing some of it to implementation-defined behavior doesn't suddenly make it a secure virtual machine, after all. You're still better off with a different language for that - one where consistency of execution is the priority, abstracting away any and all hardware differences. But that's never been C or C++'s goal.
C/C++ do have problems with UB. But int overflow isn't actually an example of them. And unsigned overflow being defined really is a mistake; it should have been UB.