
It's very reasonable. This way they overflow into invalid values instead of zero.



Assuming you're working in a language that defines signed integer overflow. Depending on the language, it can be undefined behavior instead. For that reason, I'd go with an unsigned counter, with the first million IDs invalid or reserved for future use. That way, you get well-defined wraparound into an invalid region.
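
A minimal sketch of what I mean, assuming a 32-bit counter (names made up):

    #include <stdbool.h>
    #include <stdint.h>

    #define RESERVED_IDS 1000000u   /* first million IDs reserved/invalid */

    static uint32_t next_id = RESERVED_IDS;

    /* Unsigned overflow is well-defined: the counter wraps to 0,
       which lands in the reserved region and fails the validity check. */
    uint32_t allocate_id(void) { return next_id++; }

    bool id_is_valid(uint32_t id) { return id >= RESERVED_IDS; }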


When's the last time you worked with an architecture that didn't use two's complement and roll into negatives on overflow?

Your reserved bottom range is a perfectly good solution. But rolling into negatives seems fine, too.


C/C++ is notoriously headache-inducing on this point. Yes, all the CPU architectures you'd reasonably expect to encounter today behave this way. However, because the language standard says signed overflow is undefined, compilers are free to assume it never happens, and to make optimizations that you'd think would be unsafe but are technically permissible. [1]

[1] https://stackoverflow.com/questions/18195715/why-is-unsigned...
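
To make the danger concrete (my own sketch, not taken from the linked answer): a hand-rolled overflow guard like the one below can be silently deleted at -O2, because the compiler is allowed to assume the addition never wraps.

    /* Intended as an overflow guard, but since signed overflow is UB,
       gcc/clang at -O2 may assume `x + 100 >= x` always holds and
       remove the branch entirely. */
    int add_100_checked(int x) {
        if (x + 100 < x)   /* "did it wrap?" -- may be optimized away */
            return -1;
        return x + 100;
    }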


Well, that's interesting. I wasn't aware of any compilers doing this. I wonder if there's a switch in gcc/llvm/msvc/etc. to turn this specific behavior off.


https://linux.die.net/man/1/gcc -- search for `-fstrict-overflow`, and note that it says:

  The -fstrict-overflow option is enabled at levels -O2, -O3, -Os. 
In other words, basically every program that you're using is compiled with that option enabled. (Release builds typically use -O2, sometimes even -O3.)


-fstrict-overflow is the opposite of what the parent comment was asking about. You want -fwrapv or -fno-strict-overflow.
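
Roughly the difference, as I understand it: -fwrapv makes signed overflow well-defined (two's-complement wrap), while -fno-strict-overflow just stops the optimizer from exploiting the UB. A quick way to see it (sketch; exact output depends on compiler and version):

    /* overflow.c
         cc -O2 overflow.c           -> UB; result is whatever the optimizer did
         cc -O2 -fwrapv overflow.c   -> defined: prints INT_MIN (-2147483648) */
    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        int x = INT_MAX;
        printf("%d\n", x + 1);
        return 0;
    }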


I was answering the "not aware of any compilers doing this" part, hoping they'd be able to answer their second question using the source I linked.


Basically all C and C++ compilers do this.

They do it so they can simplify things like "x < x+1" to "true".
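
In compiler terms, something like this (sketch):

    int f(int x) {
        /* Because x + 1 "cannot" overflow, gcc/clang at -O2 may fold
           this whole function to `return 1;`. With -fwrapv,
           f(INT_MAX) would instead return 0. */
        return x < x + 1;
    }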


Signed overflow is UB in C.


I'm aware of that. The question was whether any practical architectures will do the "wrong" thing here.


Compilers will do the "wrong" thing. For example, gcc optimizes out the "x >= 0" check in

    for (int x = 0; x >= 0; x++)
making it an infinite loop, because it assumes "x++" can't overflow.
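
A complete version you can try (sketch; behavior depends on compiler version and flags):

    #include <stdio.h>

    int main(void) {
        /* At -O2, gcc may treat `x >= 0` as always true, since `x++`
           "cannot" overflow -- an infinite loop. At -O0 or with
           -fwrapv, x eventually wraps to INT_MIN and the loop exits. */
        for (int x = 0; x >= 0; x++)
            ;
        puts("done");
        return 0;
    }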


Signed overflow is IB (implementation-defined), not UB.


Alas, no. Signed overflow is UB and compilers can and will assume that it cannot happen when performing optimizations.


Which is a good thing, assuming you did reasonable testing with UBSan etc. Having to allow for things that never happen is a big obstacle to optimization!


That's a good point. It seems like we could do better than the current state of the art, though. If non-optimized builds trapped on overflow, that would at least give you a better chance of catching these problems before the optimizer starts throwing out code you meant to keep.
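
For what it's worth, gcc and clang can already do something like this, if I'm not mistaken: -ftrapv aborts on signed overflow, and UBSan (-fsanitize=signed-integer-overflow) reports it at runtime. E.g.:

    /* trap.c
         cc -O0 -ftrapv trap.c                              -> aborts on overflow
         cc -O0 -fsanitize=signed-integer-overflow trap.c   -> UBSan runtime report */
    #include <limits.h>

    int main(void) {
        volatile int x = INT_MAX;   /* volatile so it isn't constant-folded */
        return x + 1;               /* overflow caught at runtime */
    }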


That's not going to work, because you're expecting every client to reject IDs that are too small.



