Hacker News

> C unsigned integers are completely well behaved: they do arithmetic modulo 2^n

Sadly, one rarely finds an excuse to work in the ring Z_(2^32) or Z_(2^64), so while that behavior is well-defined, it's rarely correct for whatever your purpose is.
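To make the "modulo 2^n" claim concrete, here is a small Rust sketch using wrapping_add, which mirrors C's unsigned semantics exactly:

```rust
fn main() {
    // u32 arithmetic modulo 2^32, i.e. what C unsigned int does on overflow.
    // wrapping_add makes the wraparound explicit.
    let x: u32 = u32::MAX; // 2^32 - 1
    assert_eq!(x.wrapping_add(1), 0); // (2^32 - 1) + 1 == 0 (mod 2^32)

    // 3e9 + 3e9 = 6e9, and 6e9 - 2^32 = 1_705_032_704
    assert_eq!(3_000_000_000u32.wrapping_add(3_000_000_000), 1_705_032_704);
}
```
Well-defined, yes; but if you were computing a sum of byte counts, neither result above is the number you wanted.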




It is usually correct for my purposes (electronic design automation). When it isn't I need to figure out how to handle overflow. There is no automatic solution that is right in all cases, and a trap certainly isn't.


Array indices should arguably be unsigned (and struct/type sizes), so I'd say it's a lot more common than you imply.


I used to argue this too, until I learned that Ada not only allows enum-indexing into arrays (compiler handled), but it also allows non-zero-based indexing.

Example: #1

    -- You just index into this using 100 .. 200 and let the compiler handle it.
    type Offsetted_Array is array (Positive range 100 .. 200) of Integer;
Example: #2

    -- Indexing using an enumeration (it's really just a statically sized map)

    -- An enumeration (abbreviated).
    type c_lflag_t is (ISIG, ICANON, XCase, ...);

    -- Create an array which maps into a single 32-bit integer.
    type Local_Flags is array (c_lflag_t) of Boolean
        with Pack, Size => 32;


Yes, Ada is pretty flexible in this regard, but I'm not sure how useful this actually is.


It's actually super useful, especially since you effectively get a statically sized map. Also, you can iterate over enums, and move forward ('Succ) or backwards ('Pred) or to 'First or 'Last. You can also return VLA arrays, which means fewer "allocate just to return" problems (GNAT uses a second stack per thread allocated ahead of time).
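The "statically sized map" idea translates to other languages too. A rough hand-rolled Rust analogue (all names here are invented for illustration; plain Rust arrays only accept usize indices, so the cast Ada does for you is explicit):

```rust
// One array slot per enum value: a statically sized map from flag to bool.
#[derive(Clone, Copy)]
enum LFlag { Isig, Icanon, XCase } // loosely echoes the c_lflag_t example

struct Flags([bool; 3]);

impl Flags {
    fn get(&self, f: LFlag) -> bool { self.0[f as usize] }
    fn set(&mut self, f: LFlag, v: bool) { self.0[f as usize] = v; }
}

fn main() {
    let mut flags = Flags([false; 3]);
    flags.set(LFlag::Icanon, true);
    assert!(flags.get(LFlag::Icanon));
    assert!(!flags.get(LFlag::Isig));
}
```
The difference is that Ada checks the array length against the enum's range and packs the representation for you, where here the `[bool; 3]` size is maintained by hand.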


What I meant was, how useful non-zero indexing is in general. The utility of indexing by enum is clear, as you say.


I've only used it a few times, but IIRC it was for contiguous ranges of grouped values (I think it was error codes coming from C code) anchored to the middle of a range, e.g. an enum which goes from 0 .. N, but where values 10-30 were some specific set of logical values and I didn't care about the rest. It was nice that Ada automatically did all the range checks for me and I didn't have to remember to subtract to get the correct array index.

The most common thing I've seen it for is that most arrays (and containers) in Ada are written as 1 .. N, but if you're sharing index information with C code, you want 0 .. N-1 indexing.
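A minimal sketch of what Ada's offsetted indexing is doing under the hood, written in Rust (the wrapper type and its methods are invented for illustration, not a real library):

```rust
// Ada-style `array (100 .. 200) of Integer`, by hand.
struct OffsettedArray {
    first: usize,     // lowest valid logical index (here 100)
    data: [i32; 101], // 100 .. 200 inclusive is 101 slots
}

impl OffsettedArray {
    fn get(&self, index: usize) -> i32 {
        // The range check Ada inserts automatically (Constraint_Error there).
        assert!(
            index >= self.first && index < self.first + self.data.len(),
            "index {index} out of range"
        );
        // ...and the subtraction you'd otherwise have to remember yourself.
        self.data[index - self.first]
    }
}

fn main() {
    let mut a = OffsettedArray { first: 100, data: [0; 101] };
    a.data[0] = 7; // storage slot for logical index 100
    assert_eq!(a.get(100), 7);
    assert_eq!(a.get(200), 0);
}
```
The point of the Ada feature is that both the check and the subtraction are compiler-generated, so they can't be forgotten or done inconsistently.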


And exactly how is silent wraparound useful or even sane for that use case? You just proved the point of the one you responded to.


Wrapping is more sensible than negative indices.


It is still dogshit though. The reasonable behaviour would be an error.


And you can raise the error if that index is actually out of bounds. I don't see why the wrapping specifically is the problem here, the only unsafety is indexing operation itself.


Sure, Rust for example will let you do that (although in debug builds it will panic unless you explicitly said this is what you intended). However from a correctness point of view, it is extremely unlikely that things[n] is doing what you intended if n wrapped.

Most likely you thought you were harmlessly increasing n, after all it's an unsigned integer, and you added something to it. But when it wrapped, adding something to it made it decrease dramatically and you probably didn't consider that.

This can be punished by bad guys: where you expected a value like 10 or maybe 420, the bad guys instead provide a huge number. You do some arithmetic with their huge number and the offset wraps around to the very start of your data structure. Now it's inside the bounds, but not where you expected at all.
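The attack shape sketched above, in Rust, with wrapping_add standing in for C's silent unsigned overflow (the specific numbers are made up for illustration):

```rust
fn main() {
    let data = [0u8; 1024];
    let base: u32 = 1000;                    // offset the program computed
    let attacker_len: u32 = u32::MAX - 990;  // "huge number" from input

    // Simulating C's silent wraparound on unsigned addition:
    let index = base.wrapping_add(attacker_len);
    assert_eq!(index, 9); // 1000 + (2^32 - 991) mod 2^32 = 9

    // A plain bounds check now passes, so no error is raised --
    // but index 9 is nowhere near where the code meant to land.
    assert!((index as usize) < data.len());
}
```
This is exactly why "it wrapped, but the subsequent index was in bounds" can be worse than an immediate crash.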

This is why people talk about "hoping you get a segfault" in languages like C++ because the alternative is much worse.

If you need to care about this (fiddling with files somebody provided e.g. by uploading or emailing them to you is an obvious place this comes up in web services) you should use WUFFS to do that. You can't make this mistake in WUFFS.


I agree that domain-specific ranged types as found in Ada are close to ideal. Unbounded integers or naturals are second best. Wrapping and checked arithmetic are distant thirds, but I don't think either is intrinsically superior to the other in terms of safety. It depends on the program's specific design IMO, but if we're talking about a C-like language where checked arithmetic is not common, I still think it's clear that indexing should be unsigned. Not the approach I'd take in a new language of course.

The pointer arithmetic you describe is the real source of most unsafety. The reason most C/C++ programmers prefer segfaults is because such arithmetic lacks bounds checking.

Thanks for the reference to WUFFS though, looks cool.


It’s useful when working with bits and bytes and stuff. Aside from that, I fully agree.


I think the programmer should be able to specify what happens on overflow.

Maybe they're bit twiddling and silent wrapping is expected. Maybe they want the program to hard fault. Both are valid.


Perhaps you'd like Rust, where all the choices are offered, as functions on integers such as:

carrying_add (separate carry flag on input and output)

checked_add (result is None if it would overflow)

unchecked_add (explicitly unsafe, assumes overflow will never occur)

overflowing_add (like carrying_add but does not provide carry flag input)

saturating_add (the integer "saturates" at its maximum or, in the opposite direction, minimum - useful for low-level audio code)

wrapping_add (what C does for unsigned integers)

Rust also has variants that handle potentially confusing interactions e.g. "I have a signed integer, and I want to add this unsigned integer to it". With 8-bit integers, adding 200 to -100 should be 100, and Rust's provided function does exactly what you expected, whereas in C you might end up casting the unsigned integer to signed and maybe it works or maybe it doesn't. Likewise for "What's the magnitude of the difference between these two unsigned integers?" Rust provides a function that gets this right, without needing to consult a textbook for the correct way to tell the compiler what you want.
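The menu of behaviors above, as actual calls (checked_add_unsigned requires Rust 1.66+, abs_diff 1.60+; carrying_add is still nightly-only, so it's omitted here):

```rust
fn main() {
    assert_eq!(250u8.checked_add(10), None);          // overflow detected
    assert_eq!(250u8.saturating_add(10), 255);        // clamps at the maximum
    assert_eq!(250u8.wrapping_add(10), 4);            // what C unsigned does
    assert_eq!(250u8.overflowing_add(10), (4, true)); // wrapped result + flag

    // "Add this unsigned integer to a signed one", done right:
    assert_eq!((-100i8).checked_add_unsigned(200), Some(100));

    // "Magnitude of the difference between two unsigned integers":
    assert_eq!(3u8.abs_diff(250), 247);
}
```
Each name states its overflow policy at the call site, which is the whole point: the reader never has to guess which behavior "+" picked.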

If you can't afford to ever get it wrong, WUFFS simply forbids overflow (and underflow) entirely, WUFFS programs that could overflow aren't valid WUFFS programs and won't compile.


Right, but in almost all languages one of the possible options is chosen by default, because people want "+" to do something instead of having to specify each time. My personal opinion is that "+" should trap by default, with the various other behaviors (such as those 'tialaramex lists for Rust) available via some other mechanism. Some languages (C, C++) get it wrong in yet another way: "+" does one thing and there is no other way to do addition, and it's even worse because they picked one of the bad behaviors as the default.



