
>> Firstly, no, good old C doesn't.

Yes, it does. The C99 standard has been around for 20 years. I started giving up on compilers that don't support it 10 years ago. I consider it a given for C.

>> It is implementation-defined whether there is an int16_t

No, it's not. That type is part of the C99 standard.

>> Is that so? This code will work nicely on an ancient Unix box with 16 bit int, or on a machine with 32 or even 64 bit int:

That's cool, but your example only needs to count to 3, so any size integer will do. The problems arise when you count to 100,000 and that old 16-bit machine rolls over at 65,536 while the newer ones don't. Other times someone (me) may want things to roll over at the 16-bit boundary, and then we need to specify the size as int16_t rather than figure out whether int or short or char happens to be that size on each architecture. (And yes, I know signed rollover is undefined behavior.)

>> Write a convincing argument that we should change both ints here to int32_t or whatever for improved portability.

In your example it doesn't matter. I'd argue that at least giving the size of your integers some thought every time is a good habit to get into so you don't write non-portable code in the cases where it does matter. You are free to argue that it's too much effort or something, but I'd invite you to argue that "i16" is more effort to type than "int".




> The problems arise when you go to 100,000 and that old 16bit machine rolls over at 65536 but the newer ones don't

There, we run into the question of whether, on that system, we can define that array at all if it has 100,000 characters.

> That type is part of the C99 standard.

Unfortunately, the standard isn't what translates and executes your code; that would be the implementation. The standard allows implementations not to provide int16_t if they have no type for which it can be a typedef name. A maximally portable program can use int16_t only if it has detected that it's present. If an int16_t typedef name is provided, then <stdint.h> will also define the macro INT16_MAX. We can thus make code conditional on the existence of int16_t via #ifdef. (Since we know INT16_MAX has to be nonzero if defined, we can use #if also.)

> Other times someone (me) may want things to roll over at the 16 bit boundary and we need to specify the size as int16_t rather than figure out if int or short or char is that size for each architecture. (and yes I know rollover is undefined behavior)

Someone who knows C knows that unsigned short is 16 bits wide or wider, as is unsigned int. A maximally portable wrap-around of a 16-bit counter, using either of those types, is achieved with: cntr = (cntr + 1) & 0xFFFF. That will work on a machine that has no 16-bit word, and whose compiler doesn't simulate the existence of one.

We can do it less portably if we rely on there being a uint16_t; then we can drop the & 0xFFFF. (It isn't undefined behavior if we use the unsigned type, since unsigned arithmetic wraps by definition.) The existence of uint16_t in the standard encourages programmers to write less portable code; they reach for it instead of the portable version with the & 0xFFFF. Code relying on uint16_t, ironically, is not portable to the PDP-7 machines that Thompson and Ritchie originally used for Unix; those machines have 18-bit words.




