You first need to establish that the type was chosen intentionally before asking why it was chosen. Otherwise the question is ill-formed.
It looks like they are using PHP/MySQL/JavaScript/Flash, with only MySQL having any explicit types.
Even so, an error is often preferable to overflow, which is usually undefined behavior and could produce a duplicate primary key anyway if it wraps around to the first game.
A better question is "why 32-bit over 64-bit", but the site dates back to 2005, when that was the norm, and the question has the same issues.
That assumes you're working in a language that defines signed integer overflow; depending on the language, you can get undefined behavior instead. For that reason, I'd go with an unsigned counter, with the first million IDs being invalid or reserved for future use. That way, you get well-defined overflow into an invalid region.
C/C++ is notoriously headache-inducing on this point. Yes, all the CPU architectures you'd reasonably expect to encounter today behave this way. However, because the language standard says signed overflow is undefined, compilers are free to assume it never happens and to make optimizations that look unsafe but are technically permissible. [1]
Well that's interesting. I was not aware of any compilers doing this. I wonder if there's a switch in gcc/llvm/msvc/etc to turn this specific behavior off.
The -fstrict-overflow option is enabled at levels -O2, -O3, -Os.
In other words, basically every program that you're using is compiled with that option enabled. (Release builds typically use -O2, sometimes even -O3.)
I was answering the "not aware that any compilers were doing this" part, hoping that they would be able to answer their second question using the source I linked.
Which is a good thing, assuming you did reasonable testing with ubsan etc. Having to account for cases that never happen is a big problem for optimization!
That's a good point. Seems like we could do a bit better than the current state of the art, though. If non-optimized builds trapped on overflow, that would at least give you a better chance of detecting these problems before the optimizer starts throwing out code you meant to keep.
Compilers can optimize signed integers better. Overflow/underflow on signed integers is undefined behavior, which gives compilers room to optimize. But unsigned ints are defined in all cases, so you can get less optimal code.
Also, you run into problems whenever you compare them against signed ints.
Because signed is the default in most languages, for some reason, and most developers aren't taught to think critically about how decisions as simple as datatype choice might affect scalability.
The problem is momentum. I could use unsigned int everywhere, but then I'd have to constantly typecast to int and back anywhere I use a library expecting signed ints. If we all switched to unsigned int by default, everything would make more sense, but we'd all live in typecasting hell during the migration.
Unsigned by default doesn't make more sense than signed by default. The behavior near 0 is surprising: if you underflow, you either get a huge value (in anything that isn't Swift) or you crash (in Swift).
It was a mistake to use them for sizes in C++. The Google C++ style guide requires using int64 to count sizes instead of uint32, for good reasons.
I read somewhere in the Swift documentation that unless you have a specific need for a UInt, Int is preferred even when you know the value will always be nonnegative. I think compatibility is one reason they give.