
I think that makes perfect sense, but for some reason people want their max file sizes and max array sizes to be different on different CPU architectures without testing it or anything.



It goes the other way too. One of my worst memories from Java was when the OpenJDK authors decided that they wanted the maximum array size to be the same (signed int32 range) on different CPU architectures and heap sizes, without testing it or anything. It turned out that with enough small objects, that counter overflows. Of course, having 2^29 tiny objects around was a huge overhead to begin with, so I understand why it hadn't been tested and why it doesn't usually happen in real life, but it would have been pretty easy to avoid with a simple "if it's a size, use size_t" heuristic.

After you accept that size_t can be platform-specific and there's nothing scary about it, the format string issue is solved with "%zu".
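For illustration, a minimal C99 sketch of the portable specifier (z is the length modifier for size_t, combined with u because size_t is unsigned):

    #include <stdio.h>
    #include <stddef.h>

    int main(void) {
        size_t n = sizeof(double);
        /* z is the C99 length modifier for size_t, so the same format
           string works unchanged on 32-bit and 64-bit targets. */
        printf("sizeof(double) = %zu bytes\n", n);
        return 0;
    }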


I used to be a good citizen and used size_t for all my buffer sizes, data sizes, memory allocation sizes, etc., but I have abandoned that completely.

If my program needs to support >4GB sizes I always need 64 bits, so it makes no sense to pretend "size_t data_size" could be anything else; and if it doesn't need to support that, using an 8-byte size_t on 64-bit machines makes no sense either, it's just wasting memory/cache-line space for no reason.
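A minimal sketch of that convention (the struct and field names are only illustrative):

    #include <stdint.h>

    /* Sizes that must be able to exceed 4GB: always 64-bit,
       regardless of what width size_t happens to have. */
    typedef struct {
        uint64_t data_size;
        /* ... */
    } blob_header;

    /* Small, bounded sizes: 32 bits is plenty, so no cache-line
       space is spent on an 8-byte size_t on 64-bit machines. */
    typedef struct {
        uint32_t name_len;
        char     name[64];
    } label;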


They do have some use as a “currency type” - it’s fine if all your code picks i32, but then you might want to use a library that uses i64 and there could be bugs around the conversions.
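A rough C sketch of the kind of conversion bug meant here; lib_buffer_bytes() is hypothetical, standing in for a library that uses 64-bit sizes:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical library call that reports sizes as 64-bit values. */
    static uint64_t lib_buffer_bytes(void) {
        return 5ULL * 1024 * 1024 * 1024;   /* 5 GiB */
    }

    int main(void) {
        /* Application code standardized on 32-bit sizes: the implicit
           narrowing turns 5 GiB into 1 GiB, and only warnings such as
           -Wconversion will point it out. */
        uint32_t n = lib_buffer_bytes();
        printf("allocating %u bytes\n", n);
        return 0;
    }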

And C also gives you the choice of unsigned or not. I prefer Google’s approach here (never use unsigned unless you want wrapping overflow), but unfortunately that’s something a lot of people disagree with. And size_t itself is unsigned.
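The classic illustration of the wrapping hazard, as a small C sketch (values chosen just to show the wrap):

    #include <stdio.h>
    #include <stddef.h>

    int main(void) {
        size_t buffer_len = 16, used = 24;
        /* 16 - 24 cannot go negative in unsigned arithmetic; it wraps
           to a huge value, so the "room left" check passes and the
           caller would happily overrun the buffer.  With a signed size
           the result would be -8 and the bug would be obvious. */
        size_t room_left = buffer_len - used;
        if (room_left >= 8)
            printf("ok to append 8 bytes (room_left = %zu)\n", room_left);
        return 0;
    }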



