
Were they ever expecting a negative number of games? Why a signed integer?



You need to first establish that the type was chosen intentionally before asking why it was intentionally chosen. Otherwise the question is ill-formed.

It looks like they are using PHP/MySQL/JavaScript/Flash, with only MySQL having any explicit types.

Even so, an error is often preferable to overflow, which is usually undefined behavior and could lead to a duplicate primary key anyway if it wraps around to the first game.

A better question is "why 32-bit over 64-bit", but the site dates back to 2005, when that was the norm, and the question has the same issues.


It's very reasonable. This way they overflow into invalid values instead of zero.


Assuming you're working in a language that defines signed integer overflow. Depending on the language, you can end up with undefined behavior instead. For that reason, I'd go with an unsigned counter, with the first million IDs being invalid or reserved for future use. That way, you get well-defined overflow into an invalid region.
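
A minimal sketch of that scheme, assuming C and a made-up RESERVED_IDS cutoff:

    #include <stdbool.h>
    #include <stdint.h>

    #define RESERVED_IDS 1000000u          /* hypothetical reserved range */

    static uint32_t next_game_id = RESERVED_IDS;

    /* Unsigned wraparound is well-defined: UINT32_MAX + 1 == 0, which */
    /* lands inside the reserved range and is easy to detect.          */
    bool id_is_valid(uint32_t id) { return id >= RESERVED_IDS; }

    uint32_t new_game_id(void)    { return next_game_id++; }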


When's the last time you worked with an architecture that didn't use two's complement and roll into negatives on overflow?

Your reserved bottom range is a perfectly good solution. But rolling into negatives seems fine, too.


C/C++ is notoriously headache-inducing on this point. Yes, all the CPU archs you'd reasonably expect to encounter today behave this way. However, because the language standard says signed overflow is undefined, compilers are free to assume that it never happens and to make optimizations that you'd think would be unsafe but are technically permissible. [1]

[1] https://stackoverflow.com/questions/18195715/why-is-unsigned...


Well that's interesting. I was not aware of any compilers doing this. I wonder if there's a switch in gcc/llvm/msvc/etc to turn this specific behavior off.


https://linux.die.net/man/1/gcc -- Search for `-fstrict-overflow`. And note how it says

  The -fstrict-overflow option is enabled at levels -O2, -O3, -Os. 
In other words, basically every program that you're using is compiled with that option enabled. (Release builds typically use -O2, sometimes even -O3.)


-fstrict-overflow is the opposite of what the parent comment was asking about. You want -fwrapv or -fno-strict-overflow.
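
A minimal test file to see the flag's effect, assuming gcc (the file name is arbitrary):

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        int x = INT_MAX;
        /* Plain `gcc -O2 wrap.c`: this overflow is UB.                */
        /* `gcc -O2 -fwrapv wrap.c` (or -fno-strict-overflow): defined */
        /* two's-complement wrapping, guaranteed to print INT_MIN.     */
        printf("%d\n", x + 1);
        return 0;
    }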


I was answering the "not aware that any compilers were doing this" part, hoping that they would be able to answer their second question using the source I linked.


Basically all C and C++ compilers do this.

They do it so they can simplify things like "x < x+1" to "true".
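
Concretely (function name made up; check the assembly at -O2 to see it):

    /* With the no-overflow assumption, x + 1 > x holds for every x, */
    /* so at -O2 this typically compiles down to "return 1".         */
    int always_bigger(int x) {
        return x + 1 > x;
    }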


Signed overflow is UB in C.


I'm aware of that. The question was whether any practical architectures will do the "wrong" thing here.


Compilers will do the "wrong" thing. For example, gcc optimizes out the "x >= 0" check in

    for (int x = 0; x >= 0; x++)
making it an infinite loop, because it assumes "x++" can't overflow.
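
A self-contained version you can try (behavior depends on compiler version, but gcc -O2 vs. -O0 or -fwrapv typically shows the difference):

    #include <stdio.h>

    int main(void) {
        for (int x = 0; x >= 0; x++) {
            /* Side effect so the loop isn't dropped outright. */
            if (x % 100000000 == 0)
                printf("x = %d\n", x);
        }
        /* At -O2 this line is typically never reached: the x >= 0    */
        /* check is folded to true. At -O0 or with -fwrapv, x wraps   */
        /* negative after INT_MAX and the loop exits.                 */
        puts("loop ended");
        return 0;
    }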


Signed overflow is IB, not UB.


Alas, no. Signed overflow is UB and compilers can and will assume that it cannot happen when performing optimizations.


Which is a good thing, assuming you did reasonable testing with UBSan etc. Having to account for things that never happen is a big problem for optimization!


That's a good point. Seems like we could do a bit better than the current state of the art, though. If non-optimized builds trapped on overflow, that would at least give you a better chance of detecting these problems before the optimizer starts throwing out code you meant to keep.
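
gcc and clang can already get close to that if you ask; a sketch (bump is just an illustrative name):

    int bump(int counter) {
        /* Overflows (UB) when counter == INT_MAX. Building with         */
        /*   -fsanitize=signed-integer-overflow -> UBSan runtime report  */
        /*   -ftrapv                            -> abort on overflow     */
        /* turns the silent UB into something you can catch in testing.  */
        return counter + 1;
    }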


That's not going to work, because you're expecting every client to ban IDs that are too small.


Also note that some languages like Java do not even support unsigned integers.


Compilers can optimize signed integers better. Overflow/underflow on signed integers is undefined behavior, which gives compilers room to optimize; unsigned ints are defined for all cases, so you get less optimal code.

Also, you run into problems whenever you compare them against signed ints.
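
For example:

    #include <stdio.h>

    int main(void) {
        int n = -1;
        unsigned int u = 1;
        /* Usual arithmetic conversions: n is converted to unsigned    */
        /* (UINT_MAX), so the comparison goes the "wrong" way and this */
        /* prints. Compilers warn with -Wsign-compare.                 */
        if (n > u)
            puts("-1 > 1, apparently");
        return 0;
    }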


Because signed is the default in most languages for some reason, and most developers aren't taught to think critically about how decisions like simple datatypes might affect scalability.


The problem is momentum. I could use unsigned int everywhere, but then I have to constantly cast to int and back anywhere I use a library expecting signed ints. If we all switched to unsigned int by default, everything would make more sense, but we'd all live in typecasting hell during the migration.
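
The friction looks something like this (lib_count is a hypothetical stand-in for any third-party API that takes int):

    /* Hypothetical library API that takes signed ints. */
    int lib_count(const int *items, int n);

    unsigned int count_items(const int *items, unsigned int n) {
        /* With -Wsign-conversion enabled, both conversions warn unless */
        /* you spell out the casts, so unsigned-by-default code ends up */
        /* littered with them.                                          */
        return (unsigned int)lib_count(items, (int)n);
    }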


Unsigned by default doesn't make more sense than signed by default. The behavior near 0 is surprising; if you underflow you either get a huge value (anything not Swift) or you crash (Swift).

It was a mistake to use them for sizes in C++. Google's code style requires using int64 to count sizes instead of uint32, for good reasons.
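
The near-zero surprise in C/C++ terms (a sketch with size_t):

    #include <stddef.h>

    void zero_backwards(int *a, size_t n) {
        /* Bug: i is unsigned, so i >= 0 is always true; when i hits 0, */
        /* --i wraps to SIZE_MAX and a[i] goes out of bounds.           */
        /* The usual fix is: for (size_t i = n; i-- > 0; ) ...          */
        for (size_t i = n - 1; i >= 0; --i)
            a[i] = 0;
    }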


Not just the default: unsigned doesn't even exist in some languages.


I read somewhere in the Swift documentation that unless you have a specific need for a UInt, Int is preferred even if you know the value will always be nonnegative. I think compatibility is one reason they give.



