
If SQLite wants exactly C89, it can just require -std=c89, and then people compiling it with a different standard target are to blame. This is just ordinary backwards incompatibility, nothing specific to UB (requiring a particular compiler or language version is routine in other languages). The same problem would arise even if the behavior had been changed from a defined 'free(x)' to a defined 'printf("here's the thing you realloc(x,0)'d: %p", x)'. (Whether the C standard should always be backwards compatible is a more interesting question, but it's orthogonal to UB.)

I do remember reading somewhere that a real platform in fact not handling size 0 properly (or having explicitly-defined behavior going against what the standard allowed?) was an argument for changing the standard requirement. It's certainly not because compiler developers had big plans for optimizing around it, given that neither gcc nor clang does: https://godbolt.org/z/jjcGYsE7W. And I'm pretty sure this couldn't amount to any optimization in non-extremely-contrived examples anyway.
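
For illustration, here's roughly how a codebase can sidestep the whole question; 'xrealloc' is just a placeholder name:

    #include <stdlib.h>

    /* Never pass size 0 to realloc, so behavior is identical under
       C89 (free + NULL), C99 (implementation-defined), and C23
       (undefined) semantics. */
    void *xrealloc(void *ptr, size_t size)
    {
        if (size == 0) {
            free(ptr);   /* treat size 0 as an explicit free */
            return NULL;
        }
        return realloc(ptr, size);
    }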

I had edited one of my parent comments to mention realloc, so if we both landed on the same example, there probably aren't that many other significant cases.




> If SQLite wants exactly C89, it can just require -std=c89, and then people compiling it with a different standard target are to blame.

Backwards compatibility? I thought that was a target for WG14.

> This is just ordinary backwards incompatibility, nothing specific to UB

But UB is insidious and can bite you through implicit compiler settings, like a compiler defaulting to C99 or C11.

> whether the C standard should always be backwards compatible is a more interesting question, but it's orthogonal to UB

If it's a target, then it should be.

And on the contrary, UB is not orthogonal to backwards compatibility.

Any UB could have been made implementation-defined and still be backwards compatible. But it's backwards-incompatible to make anything UB that wasn't UB. These count as examples of WG14 screwing over its users.

> I do remember some mention somewhere of a real platform in fact not handling size 0 properly being an argument for reducing the standard requirement.

So WG14 just decides to screw over users from other platforms? Just keep it implementation-defined! It already was! And that's still a concession from the pure defined behavior of C89!

> I had edited one of my parent comments to mention realloc, so if we both landed on the same example, there probably aren't that many other significant cases.

I beg to differ. Any case where UB was implicit just because it wasn't defined in the standard could have easily been made implementation-defined instead.

Anytime WG14 adds UB that doesn't need to be UB, it is screwing over users.


> Backwards compatibility? I thought that was a target for WG14.

C23 removed K&R function declarations. Backwards compatibility is indeed important to them, but it's not the be-all and end-all.

Having the standard state the exact possible behaviors is meaningless if in practice it isn't followed. And it wasn't just implementation-defined; it had a specific set of options for what it could do.

> Any case where UB was implicit just because it wasn't defined in the standard could have easily been made implementation-defined instead. Any UB could have been made implementation-defined and still be backwards compatible. But anything that wasn't UB and now is counts as an example of WG14 screwing over its users.

If this is such a big issue for you, you could just name another example. It'd take, like, five words to name another feature that changed unnecessarily. I'll happily do the research on how it changed over time.

It's clear that you don't like UB, but I don't think you've said anything more than that. I quite like that my compiler will optimize out dead null comparisons, or a check that collapses to 'a + C1 < a' after inlining/constant propagation. I think it's quite neat that not being able to assume signed wrapping means one can run sanitizers that warn on it, without heaps of false positives from people deliberately doing wrapping arithmetic. If anything, I'd want some unsigned types without defined wrapping too (though I'd of course still want some way to do wrapping arithmetic where needed).
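
To make that concrete, a small sketch (function names made up); with optimizations on, gcc and clang can typically fold both bodies down, precisely because null dereference and signed overflow are UB:

    /* The dereference lets the compiler assume p != NULL, so the
       later null check is dead code and can be deleted. */
    int f(int *p)
    {
        int v = *p;
        if (p == 0)
            return -1;
        return v;
    }

    /* Signed overflow is UB, so 'a + 1 < a' can never be true in a
       valid execution; this typically folds to 'return 0'. */
    int g(int a)
    {
        return a + 1 < a;
    }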


> Having the standard state the exact possible behaviors is meaningless if in practice it isn't followed.

No, it means that the bug is documented to be in the platform, not the program.

> If this is such a big issue for you, you could just name another example. It'd take, like, five words to name another feature that changed unnecessarily.

Okay, how about `signal()` being called in a multi-threaded program? Why couldn't they define it in C11 such that it could be called? Obviously, multithreading didn't really exist in C99, but it did in POSIX, and in POSIX it wasn't, and still isn't, undefined. Why couldn't WG14 have simply made it implementation-defined?

> I quite like that my compiler will optimize out dead null comparisons, or a check that collapses to 'a + C1 < a' after inlining/constant propagation.

I'd rather not be forced to be a superhuman programmer.


> No, it means that the bug is documented to be in the platform, not the program.

Yes, it means that the platform is buggy, but that doesn't help anyone wanting to write portable-in-practice code. The standard specifying exact behavior is just giving a false sense of security.

> Okay, how about `signal()` being called in a multi-threaded program? Why couldn't they define it in C11 such that it could be called?

This is even more clearly not a case of compiler-developer conflict of interest. And it's not a case of previously-defined behavior changing, so that set still stands at just realloc. (I wouldn't be surprised if there are more, but if they can't easily be listed off, I find it hard to believe they're a real, significant worry.)

But POSIX defines it anyway; and since signals are rather pointless without platform-specific assumptions, it's not like it matters for portability. Honestly, having signals as-is in the C standard feels rather useless to me in general. And 'man 2 signal' warns against using 'signal()', recommending the POSIX-only sigaction instead.
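
For completeness, a minimal sketch of what the man page steers you toward instead (POSIX, not ISO C):

    #include <signal.h>

    static volatile sig_atomic_t got_sigint = 0;

    static void on_sigint(int sig)
    {
        (void)sig;
        got_sigint = 1;   /* only async-signal-safe work in a handler */
    }

    int main(void)
    {
        struct sigaction sa = {0};
        sa.sa_handler = on_sigint;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = SA_RESTART;   /* semantics stated explicitly */
        if (sigaction(SIGINT, &sa, NULL) == -1)
            return 1;
        /* ... main loop polling got_sigint ... */
        return 0;
    }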

And, as far as I can tell, implementation-defined vs undefined barely matters, given that a platform may choose to define the implementation-defined thing as doing arbitrary things anyway, or, conversely, indeed document specific behavior for undefined things. The most significant thing I can tell from the wording is that implementation-defined requires the behavior to be documented, but I am fairly sure there are many C compilers that don't document everything implementation-defined.

> I'd rather not be forced to be a superhuman programmer.

All you have to do is not use signed integers for modular/bitwise arithmetic, just as you don't use integers for floating-point arithmetic. It's not much to ask. And the null-pointer thing isn't even an issue for userspace code (i.e. what 99.99% of programmers write).
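
E.g. when you do want wrapping on signed values, the usual idiom (a sketch; the name is made up) is to do the arithmetic in the unsigned type, where wrapping is defined, and convert back:

    #include <stdint.h>

    /* Unsigned arithmetic is defined to wrap. The conversion back to
       int32_t is implementation-defined before C23, but it's modular
       on every mainstream compiler. */
    int32_t wrapping_add(int32_t a, int32_t b)
    {
        return (int32_t)((uint32_t)a + (uint32_t)b);
    }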

I do think configuring the behavior of various things should be more prevalent and nicer to do. Even where a language/platform defines specific behavior, that behavior may still not be what you want: e.g. 'a + 1 < a' wouldn't work for overflow checking if signed addition were implementation-defined and some platform defined it as saturating, so portable projects still couldn't use it for that.
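
Which is why the portable idiom checks before the operation instead of after it; a sketch (name made up):

    #include <limits.h>

    /* True iff a + b would overflow; never performs an operation
       that could itself overflow. */
    int add_would_overflow(int a, int b)
    {
        if (b > 0)
            return a > INT_MAX - b;
        return a < INT_MIN - b;
    }

(gcc and clang also provide __builtin_add_overflow, and C23 standardized ckd_add in <stdckdint.h>.)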





