Assuming you're working in a language that defines signed integer overflow. Depending on the language, you can end up with undefined behavior instead. For that reason, I'd go with an unsigned counter, with the first million IDs being invalid or reserved for future use. That way, you get well-defined overflow into an invalid region.
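Roughly what I have in mind (untested sketch; the names and the exact size of the reserved range are arbitrary):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* IDs below RESERVED_IDS are never handed out normally, so when the unsigned
 * counter eventually wraps (well-defined for unsigned types), freshly issued
 * IDs land in the reserved range and fail validation instead of silently
 * colliding with old ones. */
#define RESERVED_IDS 1000000u

static uint32_t next_id = RESERVED_IDS;    /* first ID ever issued */

static bool id_is_valid(uint32_t id) {
    return id >= RESERVED_IDS;
}

static uint32_t allocate_id(void) {
    return next_id++;   /* wraps to 0 after UINT32_MAX -- into the invalid range */
}

int main(void) {
    next_id = UINT32_MAX;                       /* pretend we're about to wrap */
    uint32_t a = allocate_id();                 /* UINT32_MAX: still valid     */
    uint32_t b = allocate_id();                 /* wraps to 0: invalid         */
    printf("%u valid=%d\n", (unsigned)a, (int)id_is_valid(a));
    printf("%u valid=%d\n", (unsigned)b, (int)id_is_valid(b));
    return 0;
}
```

Callers that check id_is_valid() then see wraparound as a detectable error rather than a silent collision with ancient IDs.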
C/C++ is notoriously headache-inducing on this point. Yes, all the CPU archs you'd reasonably expect to encounter today behave this way. However, because the language standard says signed overflow is undefined, compilers are free to assume that it will never happen, and to make optimizations that you'd think would be unsafe but are technically permissible. [1]
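A concrete flavor of the kind of thing that gets optimized away (rough sketch; file name made up, and the exact behavior depends on compiler and version):

```c
/* check.c -- hypothetical file name.
 *   cc -O0 check.c && ./a.out   -> on typical two's-complement hardware prints 1
 *                                  (the wrap "works", even though it's UB)
 *   cc -O2 check.c && ./a.out   -> commonly prints 0: the comparison is folded
 *                                  to "always false" and the check disappears
 */
#include <limits.h>
#include <stdio.h>

/* A "did this just wrap?" check written after the arithmetic. Since signed
 * overflow is undefined, the optimizer may assume i + 100 cannot wrap and
 * delete the test entirely. */
static int near_int_max(int i) {
    return i + 100 < i;
}

int main(int argc, char **argv) {
    (void)argv;
    /* INT_MAX - 4 in a normal run; using argc keeps the value from being a
     * compile-time constant. */
    printf("%d\n", near_int_max(INT_MAX - 4 * argc));
    return 0;
}
```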
Well that's interesting. I was not aware of any compilers doing this. I wonder if there's a switch in gcc/llvm/msvc/etc to turn this specific behavior off.
The -fstrict-overflow option is enabled at levels -O2, -O3, -Os.
In other words, basically every program that you're using is compiled with that option enabled. (Release builds typically use -O2, sometimes even -O3.)
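And for the "switch to turn it off" part: GCC and Clang also accept -fwrapv, which defines signed overflow as two's-complement wrapping (GCC's -fno-strict-overflow is closely related), so you can compare the two behaviors directly. Something like this (file name made up):

```c
/* wrapv_demo.c -- hypothetical file name.
 *   cc -O2 wrapv_demo.c && ./a.out           -> commonly prints 1 (overflow assumed impossible)
 *   cc -O2 -fwrapv wrapv_demo.c && ./a.out   -> prints 0 (INT_MAX + 1 wraps to INT_MIN)
 */
#include <limits.h>
#include <stdio.h>

int main(int argc, char **argv) {
    (void)argv;
    int x = INT_MAX - 1 + argc;   /* INT_MAX in a normal run; argc keeps it non-constant */
    printf("%d\n", x + 1 > x);
    return 0;
}
```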
I was answering the "not aware that any compilers were doing this" part, hoping that they would be able to answer their second question using the source I linked.
Which is a good thing, assuming you did reasonable testing with UBSan etc. Having to account for things that never actually happen is a big problem for optimization!
That's a good point. Seems like we could do a bit better than the current state of the art, though. If non-optimized builds trapped on overflow, that would at least give you a better chance of detecting these problems before the optimizer starts throwing out code you meant to keep.
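Existing flags get you at least part of the way there; a rough sketch of what I mean (file name made up, flags are GCC/Clang's):

```c
/* debug_overflow.c -- hypothetical file name.
 *   cc -O0 -fsanitize=signed-integer-overflow debug_overflow.c && ./a.out
 *       -> UBSan reports "signed integer overflow" at runtime
 *   cc -O0 -ftrapv debug_overflow.c && ./a.out
 *       -> in principle aborts on the overflowing add (support varies by compiler)
 */
#include <limits.h>
#include <stdio.h>

int main(int argc, char **argv) {
    (void)argv;
    int id = INT_MAX - 1 + argc;   /* INT_MAX in a normal run; argc keeps it non-constant */
    id = id + 1;                   /* the overflow we'd like a debug build to catch */
    printf("%d\n", id);
    return 0;
}
```

Release builds keep the strict-overflow optimizations; the debug/test build just makes the bug loud before the optimizer gets a chance to "help".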