
This is common knowledge for C programmers. In the embedded/firmware space, I've seen #define UINT_MAX (unsigned int)(-1) very often. It's convenient because it always yields the maximum unsigned integer value regardless of whether int is 16/32/64 bits.
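
A minimal sketch of that idiom (MY_UINT_MAX is an illustrative name, not from any particular vendor header):

    #include <stdio.h>

    /* illustrative: converting -1 to unsigned always yields the
       all-ones bit pattern, i.e. the maximum value of the type */
    #define MY_UINT_MAX ((unsigned int)(-1))

    int main(void) {
        printf("%u\n", MY_UINT_MAX);  /* 4294967295 where unsigned int is 32 bits */
        return 0;
    }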



According to the standard, (unsigned int)(-1) is undefined behavior (as is signed overflow), because the machine can use some representation of signed integers other than two's complement. On the other hand, you will probably never find a non-two's-complement architecture in any vaguely production use today.


Nope, casting signed to unsigned is well-defined. The C standard requires it to act like two’s complement regardless of what the machine actually uses:

> Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.

https://stackoverflow.com/questions/50605/signed-to-unsigned...
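
A small sketch of that conversion rule in action (the printed value assumes a 32-bit unsigned int):

    #include <stdio.h>

    int main(void) {
        int neg = -1;
        unsigned int u = (unsigned int)neg;  /* well-defined: -1 + (UINT_MAX + 1) == UINT_MAX */
        printf("%u\n", u);                   /* prints 4294967295 when unsigned int is 32 bits */
        return 0;
    }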


~0u is also a safe equivalent if there is unwarranted fear around wrapping.
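
Something like this, as a sketch; since ~0u starts from an unsigned literal, no signed value is ever converted:

    unsigned int all_ones = ~0u;  /* bitwise NOT of 0u: every bit set, equal to UINT_MAX */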


Apart from the fact that it is not an integer overflow, what you can see in system headers is not what you can do in a regular program's headers. System headers are part of the toolchain and can rely on specifics of the platform, actively using them to provide compliant values for you. When you're on a different platform, a different set of definitions is provided (though most cross-platform standard library implementations try to abstract the specifics deep into their internals and builtins, since that is useful for stdlib writers too).
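
As an illustration only (a hypothetical fragment, not taken from any real libc), a system <limits.h> can simply spell out the values the toolchain knows are correct for the target:

    /* hypothetical <limits.h> excerpt shipped with a 32-bit toolchain */
    #define INT_MAX   2147483647
    #define UINT_MAX  4294967295U  /* the toolchain knows int is 32 bits on this target */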


Unsigned underflow/overflow is very clearly defined: the result wraps modulo one more than the type's maximum value.

Signed integer underflow/overflow is UB, on the other hand.
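
A short sketch of the difference (the signed case is left commented out because it is undefined behavior):

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        unsigned int u = UINT_MAX;
        u = u + 1u;               /* well-defined: wraps to 0, i.e. reduced modulo UINT_MAX + 1 */
        printf("%u\n", u);        /* prints 0 */

        int i = INT_MAX;
        /* i = i + 1; */          /* undefined behavior: signed overflow */
        (void)i;
        return 0;
    }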



