Hacker News

I haven't got the time to read the paper yet, but I believe I'd emerge with more or less the same opinion I've had before: nobody's forcing you to pass -O2 or -O3. It's stupid to ask the compiler to optimize and then whine that it optimizes. I'm usually OK with the compiler optimizing, hence I ask it to do so. I'm glad that others who disagree can selectively enable or disable only the optimizations they're concerned about. Most of the optimizations that people whine about seem quite sane to me. Of course, sometimes you find real bugs in the optimizer (yesterday someone on libera/#c posted a snippet where gcc with -O2 (-fdelete-null-pointer-checks) removes completely legitimate checks)
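For readers unfamiliar with that flag, here's a minimal sketch of the classic shape of -fdelete-null-pointer-checks (the actual snippet from #c wasn't shown, and in the reported bug the deleted check was legitimate; here the deletion is the standard-sanctioned kind):

```c
#include <stddef.h>

/* Hypothetical illustration of -fdelete-null-pointer-checks (enabled by
 * -O2 in gcc): because p is dereferenced before the check, the compiler
 * may assume p != NULL and delete the later check as dead code. */
int read_len(const int *p) {
    int len = *p;      /* this dereference implies p is non-null... */
    if (p == NULL)     /* ...so this branch can be removed entirely */
        return -1;
    return len;
}
```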



At least in compilers like gcc, optimization needs to be enabled to get sane warnings emitted by the compiler, so some people are indeed being forced to pass in -O2 to get sane build warnings for projects.

I would really like it for the C standard to clean up Undefined Behaviour. Back in the 1980s when ANSI C was first specified, a lot of the optimizations that modern compiler writers try to justify via Undefined Behaviour simply weren't part of many compilers' repertoires, so most systems developers didn't need to worry about UB, and there was no push for the standard to address it as a result.

If people really want the optimizations afforded by things like assuming an int can't overflow to a negative number in a for loop, my personal position is that the code should be annotated so that the optimization is enabled. At the very least, the compiler should warn that it is making assumptions about potential UB when applying such optimizations.
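One hedged sketch of what such an annotation could look like today, using GCC/Clang's __builtin_unreachable() (the ASSUME macro and the bound are my own illustration, not an existing standard mechanism):

```c
/* Hypothetical opt-in annotation: the programmer explicitly grants the
 * bounded-range / no-overflow assumption instead of the compiler
 * assuming it silently. ASSUME is an illustration, not a standard
 * facility; gcc and clang both provide __builtin_unreachable(). */
#define ASSUME(cond) do { if (!(cond)) __builtin_unreachable(); } while (0)

int sum_upto(int n) {
    int total = 0;
    for (int i = 0; i < n; i++) {
        ASSUME(i < (1 << 20));  /* bounded, so i + 1 provably can't wrap */
        total += i;
    }
    return total;
}
```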

There is this false belief that all legacy code should compile with a new compiler with no changes and deliver improved performance. Anyone who works on real-world large systems knows that you can't migrate to newer compilers or updated OSes with zero effort (especially if there's any C++ involved). I understand that compiler writers want to improve their performance on SPEC, but the real world suffers from the distortions caused by viewing optimizations through the narrow scope of benchmarks like SPEC.


> If people really want the optimizations afforded by things like assuming an int can't overflow to a negative number in

Perhaps people wanted int overflow to be UB because that is semantically clean. Code where possible integer overflow is intended and not a bug is IMHO a significant minority of all integer usage; in the vast majority it is a bug, similar to an invalid memory access. So it would be reasonable for some compilers/platforms to handle integer overflow with a CPU exception translated to a signal killing the process (similar to SIGSEGV for invalid memory access), and to allow overflow only for integer operations explicitly marked for it (i.e. unsigned in C).

It is just our experience that makes 'integer overflow should wrap' and 'invalid memory access should crash' the default positions, while 'integer overflow should crash' and 'invalid memory access should be ignored/return 0' are the alternatives. Conceptually, both are just stricter or looser ways to handle code errors, and it makes sense that C does not prescribe either one.
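For what it's worth, the 'integer overflow should crash' position can be expressed in portable C today; this is a sketch of roughly what gcc/clang's -ftrapv automates (the function name is my own):

```c
#include <limits.h>
#include <stdlib.h>

/* Sketch of the "overflow should kill the process" position: check
 * before adding and abort on overflow, treating it the way SIGSEGV
 * treats an invalid memory access. */
int add_or_die(int a, int b) {
    if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b))
        abort();            /* overflow: kill the process */
    return a + b;           /* no overflow possible here */
}
```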


> At least in compilers like gcc, optimization needs to be enabled to get sane warnings emitted by the compiler, so some people are indeed being forced to pass in -O2 to get sane build warnings for projects.

That's a fair point and it irritates me too. But you're not forced to run -O2 in production if you don't want to. I've had projects where we compile with multiple compilers and different settings -- including different libcs -- just to get all the warnings, while we ship non-optimized or minimally optimized debug builds to the customer.


You can't turn off -O2 in some projects. The Linux kernel has plenty of code that requires optimization to be on to build successfully -- look at things like BUILD_BUG_ON(). And it's projects like the Linux kernel that have had security issues caused by overzealous compiler optimizations. Getting blindsided silently by compilers has already proven that the current approach towards Undefined Behaviour by compiler writers is not working.
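For readers unfamiliar with it, BUILD_BUG_ON works roughly like this (a simplified sketch; the kernel's real macro has more machinery, and the function name here is made up):

```c
/* Simplified sketch of the kernel's BUILD_BUG_ON idea: if cond is a
 * compile-time-false constant, the dead branch (and the call to a
 * function that is deliberately never defined) is removed; if cond is
 * true, or the compiler can't prove it false, linking fails. In the
 * kernel, many conditions only become constant after inlining, which
 * is why building with optimization enabled can be mandatory. */
extern void __build_bug_fn(void);   /* never defined anywhere */
#define BUILD_BUG_ON(cond) do { if (cond) __build_bug_fn(); } while (0)

int check_sizes(void) {
    BUILD_BUG_ON(sizeof(int) > sizeof(long));  /* false on sane targets */
    return 1;
}
```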


> If people really want the optimizations afforded by things like assuming an int can't overflow to a negative number in a for loop, my personal position is that the code should be annotated such that the optimization is enabled.

That's not an entirely absurd idea, especially if you wanted to enable optimization piece-wise for legacy code. But in the end does it matter whether you specify the optimizations you want in the build system or in the code?

> At the very least, the compiler should warn that it is making assumptions about potentially UB when applying such optimizations.

The assumptions are usually "assume no UB." I don't know what kind of warnings you expect but this idea has been brought up before and it's not good because it would just make the compiler flood warnings on any code that does any arithmetic on variables (as opposed to compile time constants).


Again, not all optimizations that compiler writers can come up with are Good Ideas. Since the Meltdown and Spectre processor vulnerabilities came to light back in 2017, many large projects have been forced to switch to prioritizing security over performance. With that shift in the industry, I think it's fair to expect compiler writers to give greater weight to potential security issues (including issues related to UB). It really is a different world now than it was 5 years ago on this front.


Meltdown shouldn’t have required anyone to switch anything since it was a purely Intel hardware bug.

Even saying Spectre causes you to care more about memory safety sounds like marketing fluff. Fixing memory bugs doesn’t even prevent Spectre issues.


When Spectre and Meltdown hit, the Linux kernel was forced down the path of adding mitigations which pessimised the generated code. Performance is no longer the primary driver of generated code. That is a change in technical direction and is hardly "marketing fluff". And it doesn't just impact Linux, it impacts languages that are implemented using JIT engines that virtually everyone uses. Thankfully Linux was able to badger the C compiler into doing what was needed, but these sorts of needs of system level projects are way outside of what ISO C permits, and they're hard technical requirements now that can't be ignored.


Memory safety was always and is still more important than performance in code generation, and it's by far the most important thing in security. Spectre mitigations in userland are mostly about removing exact sources of timing data rather than changing anything else.

There are code changes to stop branch prediction but it's mainly limited to the kernel. Or even limited to the Linux kernel, because other OSes have different approaches to it.


IIRC a few mitigations were rolled back as the additional safety was not worth the performance cost.


I’m amused by the people who ask for optimization and then complain about it. Or the “it did what I said, not what I meant” crowd.

But, officially, undefined behavior is always undefined, not just at higher optimization levels.


Alternatively if my compiler has a "run faster" flag I would not expect it to change the "semantics" of my code in the process.

Additionally, in C, UB is often neither intentional nor trivial to detect in your codebase, since it may arise from the interaction of two pieces of code that are nowhere near each other. There comes a point where faster but broken code isn't better; it's just broken.


"Run faster" involves things like register allocation. How can you justify register allocation doesn't change semantics of code? It absolutely does! If you smash the stack, variables on stack will be smashed, and variables on register will not. This is change of semantics.

Compilers justify register allocation by assuming the stack can't be smashed, contrary to hardware reality. Because smashing the stack is UB. That's what C UB is for.


C exposes a (fairly low-level) abstract machine with its own semantics, encapsulating the operational semantics of the CPU. Optimization flags are expected to affect the uarch-operational semantics, but not the semantics of the C abstract machine.

Think of it like trading one cycle-accurate emulator for a better, more tightly-coded cycle-accurate emulator. Or a plain loop-and-switch bytecode interpreter for a threaded-code bytecode interpreter. The way your code is running on the underlying machine changes, but the semantics of your code relative to the abstract machine the code itself interacts with should not change.

> Compilers justify register allocation by assuming stack can't be smashed, contrary to hardware reality.

...which should be entirely fine, as the existence of a stack isn't part of the exposed semantics of the C abstract machine. Values can be "on the stack" — and you can get pointers to them — but nothing in the C abstract machine says that you should be able to smash the stack. There's nothing in the C standard itself describing a physically-laid-out stack in memory. (There are ABI calling conventions defined in the standard, but these are annotations for target-uarch codegen, not facts about the C abstract machine.) There could just as well be a uarch with only the ability to allocate things "on the stack" by putting them in a heap — in fact, this is basically how you'd have to do things if you wrote a C compiler to target the JVM as a uarch — and a C compiler written for such a uarch would still be perfectly compliant with the C standard.

If C were an interpreted language, the fact that the stack can be smashed would be referred to as a "flaw in the implementation of the runtime" — the runtime allowing the semantics of the underlying uarch to leak through to the abstract-machine abstraction — rather than "a flaw in the interpreter" per se; and you'd then expect a better runtime implementation to fix the problem, by e.g. making all pointers to stack values actually be hardware capabilities, or making each stack frame into its own memory segment, or something crazy like that.

As a compiled language — and especially one that has encapsulation-breaking holes in its abstract machine, like the C inline-assembly syntax — you can't exactly expect a runtime shim to fix up the underlying uarch to conform to the C abstract machine. But you could at least expect the C compiler to not let you do anything that invokes those cases where it's exposing potentially-divergent uarch semantics rather than a normalized C-abstract-machine semantics. As if, where there should be a shim "fixing up" a badly-behaved uarch, you'd instead hit a NotImplementedException in the compiler, resulting in a compilation abort.


If you wanted to prevent undefined behavior at compile time by having the compiler throw an exception, you'd actually need to prevent potentially undefined behavior, for the simple reason that compile time doesn't know what your inputs at runtime are. You could come up with a subset of C, but it would be a restrictive and probably useless subset. C strings go out the window, for instance, because you don't know at compile time if a given char * is indeed a NUL-terminated string. You also wouldn't be able to store that pointer anywhere, because it could be freed as soon as you return from the function - and that's assuming we're disallowing threads and signals in our C subset, or else you just couldn't use the pointer at all. (This, by the way, argues that it's a flaw in the language definition, not in the implementation of anything.)

There are a huge number of languages where you pass strings along with their length. There are also plenty of languages where you know (either via compile-time analysis and an insistence on writing code in an analyzable way, as in Rust, or via mild runtime overhead, as in Swift or Haskell or Java) that the memory remains still valid when you reference it. But those languages are not a subset of C. They either need more information about a program than a C program contains, or they disallow operations that the C abstract machine can perform.
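The representation those languages use instead of NUL-termination can of course be written in C as well; it just isn't what the standard library's str* functions consume (a sketch, with made-up names):

```c
#include <stddef.h>
#include <string.h>

/* Sketch of a counted string ("fat pointer"): the length travels with
 * the data, so no operation needs to hunt for a terminator, and bounds
 * are checkable without extra program-wide invariants. */
struct str {
    const char *data;
    size_t len;
};

static struct str str_from_lit(const char *lit) {
    struct str s = { lit, strlen(lit) };
    return s;
}
```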


You're right that you can't define a safe subset of C without making it impractical. MISRA C defines a C subset intended to help avoid C's footguns, but it still isn't actually a safe language. There are alternative approaches though:

1. Compile a safe language to C (whether a new language or an existing one)

2. Formal analysis of C, or of some practical subset of C, to prove the absence of undefined behaviour

Work has been done on both approaches.

ZZ compiles to C. [0] Dafny can compile to C++, but it seems that's not its primary target. [1][2]

There are several projects on formal analysis of C. [3][4][5][6]

[0] https://github.com/zetzit/zz

[1] https://github.com/dafny-lang/dafny

[2] https://dafny-lang.github.io/dafny/

[3] https://frama-c.com/

[4] https://www.microsoft.com/en-us/research/project/vcc-a-verif...

[5] https://www.eschertech.com/products/ecv.php

[6] https://trust-in-soft.com/


Is it better to write ZZ or Dafny code than to write Java, Python, Haskell, Rust, or Swift code?

(They look cool and I'm enjoying reading about them, but for the things I do for my day job where C is one of the obvious options, I'm not sure they're better options, but usually one of the languages above is a better option.)


ZZ's whole goal is to transpile to C, which (as their website mentions) has some practical advantages. Consider exotic hardware platforms where there's a vendor-provided proprietary C compiler and no other choice.

Dafny has a slightly different emphasis, as it's about formal verification, so it's more competing with Ada SPARK than with mainstream languages.

I can't comment on the general ergonomics of ZZ and Dafny as I've not tried them out, but presumably they're going to be well behind a language like Java with its huge ecosystem of IDEs etc. Neither seems particularly mature, either, so I wouldn't bet the farm on them.

> Is it better to write ZZ or Dafny code than to write Java, Python, Haskell, Rust, or Swift code?

If you've got the choice and you're just trying to do 'normal day job programming' I imagine that yes you'd be much better off just using Java. If you want a language that's quite like C or C++ but much better regarding safety, there are Rust, Zig, and the too-often-overlooked Ada language.


I think you’re conflating undefined behavior with implementation-defined behavior.

How integers act at the extremes of their values, for example, is implementation-defined — when you’re using the integer types (rather than using IEEE 754 floats for everything to get well-defined formal semantics), you’re implicitly telling the compiler that you don’t care what happens at the boundaries, and that you want whichever behavior the target uarch’s ISA entails.

Think of it like exposing an opaque native FFI type in a managed-runtime language. The semantics of that type aren’t “up to” the runtime; all the runtime does is ensure that only a safe set of interaction primitives on the type are exposed into the managed runtime. (“Safe” in the sense of not corrupting the runtime’s own abstract machine semantics; not in the sense of making any guarantees about the operations leaving the value in a well-defined state per the underlying machine.) In C, integers — and the operations on them — are exactly such “opaque foreign types.” As are pointers, for that matter. (And this is also why casting between integers and pointers is UB — since the internal representations of each aren’t part of the semantic model of the C abstract machine, it cannot make any guarantee about their interconvertibility.)

C compilers shouldn’t abort on implementation-defined behavior. Having that behavior be implementation-defined is the well-defined semantics that the C abstract machine applies to such things.

It’s only cases where there’s no clear semantics even per implementation, that the C compiler should swerve and say “I don’t know what that should mean; so I give up.”


I agree that integer overflow is implementation-defined, but I wasn't talking about integer overflow, I was talking about char *.

Reading from a pointer past the bounds of the object being pointed to is undefined, not implementation-defined. How do you propose to write a size_t strlen(const char *s) that successfully compiles with your proposed compiler?
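Concretely, the textbook strlen illustrates the problem: every iteration's read is defined only under a caller-supplied invariant the compiler can't see (a sketch, with a made-up name to avoid shadowing the libc function):

```c
#include <stddef.h>

/* A textbook strlen: reading s[n] is defined only if the caller really
 * passed a NUL-terminated string -- a property that no compile-time
 * analysis of this function alone can establish, which is the point. */
size_t my_strlen(const char *s) {
    size_t n = 0;
    while (s[n] != '\0')
        n++;
    return n;
}
```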


Exactly. If I wanted to be operating at the level where I'm worrying about registers and stack smashing and such, I would just drop down to assembly directly. You use C because it has an in-theory abstract machine that you can target, so you avoid needing to use assembly.


Even comparing two pointers can expose potentially divergent uarch semantics. Should that be disallowed at compile time?


No no no; you see, in these kinds of debates "UB" is only the things I disagree with!


Many of the people who complain about undefined behavior effectively want a #pragma dwim.


The standard committee’s position is that if undefined behavior isn’t easy for the programmer to detect, why would it be easy for the compiler to detect?

I’m a little more familiar with the C++ committee than the C committee. The C++ committee prefers to declare something an error than to declare it undefined behavior. They only declare something undefined when they believe it would be unreasonably difficult for the compiler to detect the problem (e.g., using multiple, incompatible, definitions for an inline function; which can happen if you have a macro expanding differently in different parts of the code, or you have multiple definitions for the function, but only one definition is ever visible to the compiler at any moment in time).

I’m pretty sure the “signed overflow is undefined” rule is something of a special case: it should be easy to detect when source code doesn’t have a hard-coded upper bound, but giving an error or warning in all cases would create too many false positives, and declaring that it wraps on overflow has been deemed unacceptable by the committee.


UB problems happen only when compilers detect it, understand it is an opportunity for optimization, because they are allowed to do anything, then do that optimization. When the compiler can't detect UB, it actually works as expected: when integers overflow they do exactly that and nothing else, when null pointers are dereferenced they do exactly that and nothing else, and when uninitialized memory is read it does exactly that and nothing else.


> UB problems happen only when compilers detect it, understand it is an opportunity for optimization, because they are allowed to do anything, then do that optimization.

This is a deep misconception, and not at all how most undefined behavior is related to optimization.

Trivial example: Compilers assume that your variables aren't written to randomly from other threads. Without this assumption, virtually no optimization would be possible. Therefore, data races are UB - they violate a hard assumption of the compiler. But at no point did the compiler say "oh, you wrote to this variable without synchronization! Now I'll show you, hehehe!".

This is the same for e.g. removed overflow checks. Compilers can optimize many loops only if they assume signed integer overflow never happens. So, because compilers should be able to assume this, it is UB if it happens in your code. The same logic that deduces "this loop cannot wrap around" deduces "this if condition [an overflow check] cannot ever be true".

But it's easier for a programmer who got their UB-reliant checks optimized out to attribute malice to the compiler than to understand optimization fundamentals, and thus we get people complaining to high heaven and back.
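The canonical example of the removed-check case described above: the same no-overflow assumption that enables loop optimization also folds away a well-intentioned check (a sketch; whether the branch is actually deleted depends on compiler and flags):

```c
/* Classic UB-reliant overflow check: since signed overflow is UB, the
 * compiler may reason "x + 1 < x is impossible for an int" and delete
 * the branch entirely. With -fwrapv, or a rewrite that compares x
 * against INT_MAX before adding, the check survives. */
int next_or_minus1(int x) {
    if (x + 1 < x)      /* intended as an overflow check */
        return -1;      /* ...but may be optimized out */
    return x + 1;
}
```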


>But at no point did the compiler say "oh, you wrote to this variable without synchronization! Now I'll show you, hehehe!".

That's because the compiler doesn't detect the situation. If it knew, it could, say, remove all code leading to that point - it's legal and the resulting code would be faster.

>Compilers can optimize many loops only if they assume signed integer overflow never happens.

I don't think this happens. Most loops iterate over arrays, and at least in the C tradition unsigned integers are used as array indices, with no signed integers in sight.


You are strawmanning the parent; he did not claim that compilers will go "oh, you wrote to this variable without synchronization! Now I'll show you, hehehe!". Your description of the compiler's behavior matches his: the compilers detect a case where the program might exhibit behavior left undefined by the ISO C spec (signed integer overflow), see it as an opportunity for optimization because they are allowed to do anything (certain passes can be made if overflow is assumed not to happen), then do that optimization.


The parent said

> UB problems happen only when compilers detect it, understand it is an opportunity for optimization, because they are allowed to do anything, then do that optimization.

Perhaps I'm misunderstanding the comment, but I took "detect it, understand it is an opportunity for optimization, because they are allowed to do anything, ..." to mean "detect that undefined behavior occurs and then ..."

Instead, it should be "detect that an optimization can be applied and that the optimized code will do the right thing in all cases that undefined behavior does not occur, and apply the optimization."

I've met programmers who seem to believe that compilers intentionally apply weird transformations when they detect undefined behavior. But really, the compilers are applying sensible transformations for well-defined code and ignoring whether those transformations are sensible for code that isn't well-defined.


Here is an actual example of taking advantage of UB behavior in a compiler.

The compiler sees a variable x that's allocated on the stack. It looks at all the uses of the address of x, and sees that they are all loads and stores. It therefore decides to promote x into a register variable.

Where's the UB, you ask. Well, since the address of x was never leaked, it is UB to compute another pointer to that address (say, via an integer-to-pointer conversion). The compiler never checked for that possibility to affirm that it was UB; it knew that any code that did that would be UB and simply ignored the possibility.

This makes arithmetic overflow a very poor vehicle for UB because it's unusual in that you can't really take advantage of the UB without pointing to the specific operation that caused the UB to occur. This is why I believe that arithmetic overflow UB is gratuitous, and people objecting to UB because of what happens specifically with arithmetic overflow go on to make suggestions that are completely untenable because they don't have familiarity with how most UB works.


Undefined signed overflow isn’t great (and it doesn’t help that C’s implicit integer promotion rules are bad), but it’s important because it allows loop optimizations, which really are important cross-platform, eg Intel is fine with loops counting up but PPC would like them to count down.

The problem causing this is that C loops are overspecified. If there was a “for…in” loop that didn’t make you declare and increment a variable, then manual increments could trap on overflow without causing so many problems.


I don't understand.

If you can turn the loop from upward to downward then you do have a bound for it, and then you can tell if it overflows or not.

Also why is signed overflow a problem and not unsigned?

Surely you want unsigned loop optimized also?


Unsigned overflow is not UB (it wraps), so it has to be preserved more often, which means you have a loop bound less often, which means those loops can't be optimized.

Typically not a problem for i=0...n loops but eg `for (i=0;i<n;i+=2)` could overflow.
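A sketch of why that stepping loop is the problem case: with defined wraparound, `i += 2` can hop over n and wrap back to small values, so `i < n` may hold indefinitely, and the compiler must preserve that behavior rather than compute a trip count (the function name is made up for illustration):

```c
#include <limits.h>

/* Defined unsigned wraparound in action: near the top of the range,
 * stepping by 2 wraps back to 0 instead of terminating a `i < n` loop,
 * which is exactly the behavior that blocks trip-count analysis. */
unsigned wrap_demo(void) {
    unsigned i = UINT_MAX - 1;
    i += 2;             /* defined behavior: wraps around to 0 */
    return i;
}
```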


You can cast literally any optimization into this shape. For starters, virtually any optimization pass requires sane assumptions about (absence of) data races and/or out of bounds writes. The point you are arguing (or claiming OP argued) boils down to "compilers behave as I want when I turn all optimization passes off".

No, the OP wrote specifically that "the compiler detects [UB]" and then does optimizations exploiting that. Not "detects potential UB". The former is a common misconception, the latter is basically a truism.


> You can cast literally any optimization into this shape

Tell me how these use UB: integer constant folding, unused load removal, optimal use of GP registers to minimize stack spilling

>sane assumptions

"sane" here is doing a lot of work. Assuming overflow won't happen is not a sane assumption, assuming some other thread won't randomly write into your stack memory is.


> unused load removal

Oh come on, this is like the most UB-centric thing in the entire compiler. The compiler uses UB to know that it has enumerated all uses of a memory location, as any pointer that didn't get its value from the data dependency tree that compiler used had to use UB to compute the address. (Be it from buffer overflow, use after end-of-lifetime, random integer converted to pointer, etc.)


I might be using the wrong terminology, what I'm talking about is removing e.g. `int x = *ptr` where x is then not used in scope (more realistically, `(*struct_inst).member` shouldn't bother to load the other members of the struct). What you're doing sounds like removing a global it detects is never used, which I agree relies too much on UB and should not be done.


Aside from the possible trap ("segfault") mentioned in the sibling comment, the first example relies on the absence of UB, because with UB you could enumerate the contents of the stack to find where `x` is living, and discover it isn't there, contra the spec. Even allocating variables directly into registers and never onto the stack relies on UB.


Why is it UB to use registers to store variables? This seems like an implementation detail. The ISO C spec doesn't even have the word "stack" as far as ctrl-f can show me.


> Why is it UB to use registers to store variables?

It isn't; that sentence is even a type error. UB is not a property of compiler output. It's a property of compiler input.

The use of registers to store variables between uses (when those variables never have their address taken) relies on the lack of any defined behavior which would allow the program to determine this is being done. The fact you can't find this in the spec is precisely because it is not defined.


If any object can potentially be reached by forging pointers or enumerating all memory locations, you can't even keep local variables in registers around potential stores to unknown pointers (or ever, in a multithreaded program).


Removing a load is visible behaviour if the load would have trapped for example.

Everybody agrees that compilers should only do sane optimizations. Nobody agrees on which subset is sane.


That's fair; I'll concede that I made some implicit assumptions. But the sheer number of people who accidentally hit UB and run into problems should be a strong indication that a lot of what's done now is not close to sane.


Lobbying the standard body to reduce the amount of UB in the C standard is a perfectly reasonable position. Requiring compilers to DWIM is not.


> integer constant folding

    int a = 10;
    int var1;
    int b = 20;

    foo(&var1);

    int c = a + b;
You'd like to constant fold c? Better assume no UB:

    void foo(int* x) { *(x-1) = 0; }
> unused load removal

Same idea as above.

> Assuming overflow won't happen is not a sane assumption

If the alternative is loops being much slower when you request optimizations, then maybe it is. Consult your compiler manuals if this default irks you.


Okay, yes, you are technically right. What I mean is, making an assumption _other than_ that the code won't alter stack/heap/instruction memory that the implementation is using (e.g. aliasing, overflow, use of NULL)


Sorry, I'm tired of playing whack-a-mole with what are obviously sane assumptions to you. I literally mentioned out-of-bounds writes as a "universal UB" example, you asked me to demonstrate how UB is relevant in a few optimizations, I did, and now you shift goalposts.

My original point was that compilers don't specifically abuse UB to spite you. You're trying hard to draw a line in the sand between different degrees of UB so that you can label one side as "not sane", with (what appears to be) the sole intent of concluding malice on the compiler authors' side. Please accept that the related tradeoffs have been discussed, and that a default was chosen (if you are curious about a recent instance of this, see [0]).

If you are ok with leaving performance on the table by disabling overflow being UB, you are literally free to do that. You're also free to lobby the standard bodies to alter their carefully considered choices. But you're not going to get anywhere by labeling any of this "insane".

[0]: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p090...


Isn't TBAA exactly the assumption that the program won't alter the stack/heap/instruction memory in an illegal way?


That's not true at all. Compilers can detect some instances of UB, and they will happily warn when they do.

Most instances of UB are ones that the compiler could not detect. Your compiler isn't going to detect that your integer overflows, it assumes that it won't. Shit blows up if that assumption was wrong.

If you want to detect UB, you should run ubsan and the like. The compiler is not running it.


In some sense, the problem isn't that the compiler "can't" detect that an integer addition will overflow. The problem is that, with some exceptions (a loop that is clearly indexing along an array, for example), the compiler would have to flag every integer addition as potential UB. UB notifications aren't that useful if you're getting a dozen for every five-line function in your entire codebase.

Programmers do not, in general, want to do the amount of work it takes to safely add two numbers together. You need something "weird". Like an error path for every addition. Or a type system for ranges, where you actually have to specify ranges (because adding two bytes together that you casually left as 0-255 will create a new ranged value 0-510; multiplication is even more fun). Or some other exotic idea. So we just let them overflow and the chips fall where they may.
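The byte example above in concrete form (a sketch; the function name is my own, and this widening is what a range-typed language would force on you automatically):

```c
#include <stdint.h>

/* Range-widening arithmetic: the sum of two 0-255 values ranges over
 * 0-510, so an overflow-free result needs a wider type than either
 * operand. Multiplying would need 16 bits of headroom (0-65025). */
uint16_t add_bytes(uint8_t a, uint8_t b) {
    return (uint16_t)((uint16_t)a + (uint16_t)b);
}
```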


> In some sense, the problem isn't that the compiler "can't" detect that an integer addition will overflow.

It absolutely can't if it doesn't know what the values are going to be.

> the compiler will have to flag every integer addition as UB

No, I disagree. If it has no knowledge of my runtime values, it can't flag addition as UB because it isn't. It's UB only if I'm using values that would overflow, and the compiler in general can't know that. If I've done my program right, it will never overflow. There is no UB, any flag would be just absolutely wrong. The compiler couldn't detect UB. There's nothing to flag.

If it knows the values, then of course it can detect it and flag it:

    x.c:3:18: warning: integer overflow in expression of type ‘int’ results in ‘-2’ [-Woverflow]
        3 |  int a=2147483647+2147483647;
> Programmers do not, in general, want to do the amount of work it takes to safely add two numbers together.

I agree that overflow-checking is needlessly sucky in C but that's not "the problem." I usually make sure my arithmetic is safe but the compiler won't know it.
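One way the checking becomes less sucky in practice is the GCC/Clang checked-arithmetic builtins (non-standard, but both compilers have shipped them for years; the wrapper name here is my own):

```c
#include <limits.h>
#include <stdbool.h>

/* Checked addition via __builtin_add_overflow: overflow becomes an
 * explicit, testable branch instead of UB. The builtin stores the
 * wrapped result in *out and returns true if overflow occurred. */
bool safe_add(int a, int b, int *out) {
    return !__builtin_add_overflow(a, b, out);  /* true on success */
}
```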


It blows up, but not as violently as when the compiler adds fuel to it.


UB is closer to "we assume this can never happen and make decisions accordingly" than "Ha ha, I caught you making a mistake, let's actively try to mess up your life now."


They know they mess up; they just have a defense for it.


The problem is that this doesn't mean anything, at least in terms of the C programming language. You say "When integers overflow, they do that and nothing else," but there is no definition of signed integer overflow, so there is no "that" for them to do. That is what it means for something to be undefined behavior. You have some definition in your mind, which might happen to match what some versions of some compilers do at some optimization levels — but that isn't actually the definition, because there isn't one.


Wrapping arithmetic is supported by gcc and clang.
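For example, both compilers accept -fwrapv (which makes signed overflow wrap) and provide checked-arithmetic builtins; these are compiler extensions, not ISO C:

```c
#include <limits.h>
#include <stdbool.h>

/* __builtin_add_overflow (GCC/Clang extension): stores the wrapped
 * two's-complement result and returns true if the mathematical sum
 * didn't fit -- no undefined behavior involved. */
bool add_overflows(int a, int b, int *out) {
    return __builtin_add_overflow(a, b, out);
}
```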


Reading uninitialized memory is not UB... but anyway. Here's an example of how compilers treat null dereferencing as UB:

    void func(struct x *x, struct x *y) {
        x->a = 5;
        y->a = 10;
        x->a = 10;
    }
This is a contrived example but you can imagine writing code that makes a sequence of assignments like this. An optimizing compiler can remove the "dead store" which assigns 5 to x->a. However, this optimization is valid because writing to a null pointer is undefined... if writing to a null pointer trapped (SIGSEGV), which is what it actually does at runtime, then the optimization would be incorrect... because you could observe the intermediate value of x->a = 5.

The compiler is not really "detecting that your program has undefined behavior". Instead, it is assuming that undefined behavior doesn't happen, and optimizes your program using that assumption. It's unclear what "detecting undefined behavior" would mean here... what kind of diagnostic the compiler should emit, or how to suppress the diagnostic. It's clear that this simple code is pervasive, and the preferred approach for dealing with it is somewhat outside the scope of a normal C compiler... something like static analysis, runtime instrumentation, or some kind of formal methods.


> However, this optimization is valid because writing to a null pointer is undefined... if writing to a null pointer trapped (SIGSEGV), which is what it actually does at runtime, then the optimization would be incorrect... because you could observe the intermediate value of x->a = 5.

I'm not sure this is a good example, because there's no way for you to read them from the same thread (and reading from another thread would be a race). Reading them from a signal handler would yield unspecified values (unless you're using lock-free atomic types, in which case we're probably not worried about optimizing these assignments).

> When the processing of the abstract machine is interrupted by receipt of a signal, the values of objects that are neither lock-free atomic objects nor of type volatile sig_atomic_t are unspecified, as is the state of the floating-point environment.


That's a good point, however I think it's not hard to come up with a different example that shows how compilers assume that pointers aren't NULL. Like,

    void function(struct x *ptr) {
        ptr->x = 1;
        if (ptr == NULL) {
            some_big_chunk_of_code();
        }
    }
The idea is that this could be the result of inlining, where the inlined code does a null check, but in the context it's being inlined into, we already know ptr is not null.


The compiler should help the programmer optimize instead.

"Warning: dead store detected (file):line clobbered by (file):line. Use keyword volatile to always store or remove (file):line."

This way the optimization is promoted from the resulting binary to the source code, and bugs / typos / etc can be corrected.


Optimizations often deal with inlined code. The code in the inlined function might be fine in the general sense, but once taken in a particular context, there's dead or redundant code because the compiler can deduce values from context. It might even all fold into a constant.

Telling the developers to optimize these would be equivalent to telling them to start inlining their code (or creating variants of every function that might be inlined, to optimize it for the specific context where it will be inlined).


That's a completely unworkable solution.

The above code isn't necessarily what the programmer wrote, but what appears after macro expansion and various optimization passes. If you were writing C++, the code may be templated.


You would get literally millions of such warnings on your average C codebase.


So basically you are saying that in the average C codebase there are millions of landmines waiting to silently fail on you. This is not a ringing endorsement of the language; it's more like a ringing indictment.


I certainly do not think that each dead store which has been eliminated is a landmine; this is such an extreme position that it isn't even worth considering. In the very rare cases where I need a store to actually happen, I use inline asm or some form of volatile.
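For instance, marking the object volatile makes every store part of the observable behavior, so the compiler must keep both writes (device_reg is a made-up name standing in for, say, a memory-mapped register):

```c
/* volatile forces each store to actually happen -- the compiler may
 * not eliminate the first write as dead, because volatile accesses
 * are observable behavior. device_reg is hypothetical. */
volatile int device_reg;

void pulse(void) {
    device_reg = 5;   /* kept: not a dead store under volatile */
    device_reg = 10;
}
```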


Which coming back to the main subject isn't ISO C, rather a compiler extension.


Compilers should have a -do-what-i-mean-not-what-i-say flag. The required mind reading is implementation defined.


This is a pretty dismissive response to something that's a real problem.

Sure, the "Why are you deleting my checks for if *this is null" is a little silly - but there are definitely sharp edges where UB conflicts with actually useful things you might want to do. Did you know seqlocks are undefined (benign race conditions)? Ever run into padding concerns playing poorly with trying to do atomic CAS?

It's not unreasonable for the standard to say 'padding is undefined', 'data-races are undefined' - but having no way to say "hey, trust me, please un-poison this thing you don't like" is pretty unfortunate.


> Did you know seqlocks are undefined (benign race conditions)?

It wouldn't be undefined behavior if you used atomic variables--data races involving atomics aren't undefined behavior.


That is true in theory, but in practice it's not feasible. Even assuming you restrict to trivially copyable types, you either have to have a purely-atomic duplicate of anything that you want to store, or play games with trying to get a fast atomic memcpy.

Unless things have changed a lot in the past two years, neither LLVM nor GCC does much optimisation around atomics, so this comes with disastrous performance implications on top of the overhead of battling the standard.


Yes, a fast and safe seqlock is still elusive. We will hopefully get the misleadingly named std::atomic_memcpy at some point in the future.


I know this is not relevant to your point... but padding in C is not "undefined". The amount of padding is implementation-defined, but it can only occur in specific places. The value of padding bits is unspecified. "Unspecified" in the standard means that it can be various different things, but it has to actually be one of those things. As opposed to undefined behavior, which is more of a "poison value" that trims paths from your CFG.


Ah yes, that's correct - I believe that reading and comparing the padding bytes results in undefined behavior - the bytes might not actually be initialized depending on the object class. I also don't know if you can actually use the result of comparing an unspecified/implementation defined/indeterminate value - that's where poison is generated.

In practice (UB aside), this is basically fine in the context of a read, compute, CAS loop. Those bytes do have some value in the machine and if that memory isn't written they won't mysteriously change. It's playing games with the optimisers and UB however. You might be able to get around this by first initialising the bytes to zero, then in-place copy constructing whatever you want? I wouldn't bet anything serious on that being defined though.


The poison is only generated with indeterminate values in objects with automatic storage duration (local variables on the stack). Padding bytes in other circumstances will have specific, valid values and can be compared. (An "indeterminate value" is an unspecified value or a trap representation, unsigned char does not permit trap representations, and unspecified values are "valid values".) Use of other unspecified values results in "unspecified behavior" which does not result in trimming the CFG.

I think the spec is written this way to codify how compilers are allowed to do liveness analysis... a local variable is marked as live when it is initialized, and if a variable is not live, its storage (registers or stack) may be used for other variables or temporary values. You then get UB because, for example, two comparisons may result in contradictory outcomes, because the value has been overwritten with garbage between the two comparisons. Or in a more modern compiler, you read from an uninitialized local variable, and the compiler trims the CFG.

Here's an example of what I'm thinking of:

    int x, y;
    // point A
    if (condition) {
        x = 5;
    } else {
        y = 10;
    }
    // point B
A compiler can realize that x and y can't both be live at the same time, and assign them to the same register or stack location. However, this means that if you read the value of x at point A and point B, but x is uninitialized, it will naturally have a different value each time you use it... because the value of y is overwriting it!


UB is only UB when it comes to ISO C++ guarantees. The implementations can always make their own guarantees above and beyond that; they don't have to restrict themselves to those parts that are explicitly "unspecified" or "implementation-defined" in the Standard.


Nothing in this article convinces me the problem is contained to optimized C. The strict type aliasing is a general problem, right?


In practice GCC won't enable any TBAA optimizations at -O0. There is always -fno-strict-aliasing anyway.


It seems to me a flaw in a language and in a compiler if the average programmer has to avoid higher levels of optimization because what they do cannot be predicted.


What if the only languages/compilers without that "flaw" had similar performance to lower optimization levels?

(I don't know if that's the case, but it's what I thought the GP was implying.)


There is a saying that you can make any program run fast if you don't care about correctness.

We have reached the point where C programmers cannot understand the language/compiler anymore.

Given that this has been going on for a long time, my hope is that Rust will be the next systems programming language.


Rust code in unsafe blocks has lots of similar issues with fuzziness around aliasing rules:

https://github.com/rust-lang/unsafe-code-guidelines/blob/mas...


Do you think people understand Rust?


The safe subset of Rust does not permit any UB. This greatly simplifies understanding of UB in the majority of the codebase — because there isn't any. Only where there's an `unsafe` block in Rust do you put your "C" hat on.

(it was and continues to be a challenge to uphold this guarantee, because UB in LLVM is so ubiquitous, e.g. Rust had to patch up float to int conversions and eliminate no-op loops)


Rust can be hard to write, but by and large is not hard to read.

Rust does have more abstraction mechanisms than C. So it is possible to go overboard in that area. But that is more a coding style issue. C programs that nest multiple levels of functions pointers can also be hard to understand.


It's easier to understand production Rust than production C


It doesn't seem like anyone is taking the position here that C is flawless. Rather, the question is, given both its flaws and its ubiquity, is it acceptable to trade off developer head-wall interactions for the possibility of high performance.

The only winning move is not to play. (with C)


I don't see it that way. Isn't that precisely the point of higher levels? Don't go there if you don't know what you're doing. Since we can't magically install knowledge into average programmer's heads, the only other choice would be to have those higher levels not exist because the average programmer will never understand them, and also the compiler can't mind-read the average programmer to understand that what they want isn't exactly what they said they want.


I think the general problem is that if so much is going to hinge on this, it should be much easier to tell whether or not you know what you're doing. Undefined behavior really wouldn't be a problem if you could spot it reliably (possibly using tools, like the compiler toolchain itself).

This loose mapping between four optimization levels and a variety of obscure language situations is invisible and situated only in a handful of top engineers' memories. That's not a great way to run a language if you actually care about the correctness of programs written in that language.


> Isn't that precisely the point of higher levels?

I used to think that higher levels are for giving the compiler more time to think so it can come up with a better result (and for somewhat specifying the desired space-performance tradeoff). In my mind the "don't do it if you don't know what you are doing"-things were separate flags, like -ffast-math.

Obviously I eventually learned that bad things can happen, but "just build prod with -O2" still seems to be what almost everyone does.


-O3 mostly is “try harder” but in practice it is more about using more experimental/dangerous techniques than thinking harder. That’s because, basically, compiler optimizations aren’t as important as people think and there aren’t amazing insights you can come to by thinking harder.

If there were, it’d be better for the compiler to tell the programmer about them rather than rediscover them every compilation.


I always thought the main point of the levels was simply compilation speed. That and debuggability.


For the most part yes. O3 also enables (or used to) some optimizations that are not always profitable.


Nope.

That optimisations must not change the visible behavior (except for running time) of a program was the cardinal rule of optimisation.

Then it got flushed down the toilet. (With the trick of retroactively redefining the behavior to be undefined in the first place and thus all bets being off).


Optimizations aren't supposed to change the meaning of your code. And specifically for C, unsafe optimizations are supposed to only apply with -O3. Level 2 is supposed to be safe.


This is definitely untrue. The difference between -O1, -O2, -O3, and -Os is as follows:

- O1 includes optimizations that only require a small amount of additional compilation time. (Fast compilation, code is much faster than -O0.)

- O2 includes O1, and additional optimizations that use a medium amount of compilation time. (Medium compilation, reasonably fast code.)

- O3 includes O2, and additional optimizations that take more compilation time. (Slower compilation, faster, larger code. Unrolled loops and the like.) It often makes sense to compile hot inner loops with O3 because the compiler can do some fairly aggressive vectorization, if you know what you are doing... and the code remains correct in either case, if it was correct in the first place. It's basically "free" vectorization, when it works.

- Os includes O2, except flags that increase code size. (Medium compilation, code is slightly slower and smaller than O2.)

The only optimization level that actually changes the meaning of correct code is -Ofast, which is rarely used. It's described in the manual with the phrase "Disregard strict standards compliance".

ALL levels of optimization will change the meaning of your code if your code relies on undefined behavior. For example,

https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html

> -faggressive-loop-optimizations

> This option tells the loop optimizer to use language constraints to derive bounds for the number of iterations of a loop. This assumes that loop code does not invoke undefined behavior by for example causing signed integer overflows or out-of-bound array accesses. The bounds for the number of iterations of a loop are used to guide loop unrolling and peeling and loop exit test optimizations. This option is enabled by default.

It's always enabled.
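As a sketch of why that assumption matters: in a canonical counting loop, UB on signed overflow lets the compiler conclude the trip count is exactly n, which is what guides the unrolling and peeling the manual mentions:

```c
/* Because signed overflow is UB, the compiler may assume i never wraps
 * past INT_MAX, so for n >= 0 this loop runs exactly n iterations --
 * a bound usable for unrolling, peeling, and vectorization. With an
 * unsigned counter, i could legally wrap, and the bound derivation
 * would be harder. */
void scale(int *a, int n) {
    for (int i = 0; i < n; i++)
        a[i] *= 2;
}
```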


> And specifically for C, unsafe optimizations are supposed to only apply with -O3. Level 2 is supposed to be safe.

Citation needed? AIUI optimization levels are entirely up to the compiler. GCC's man page says nothing about the safety of these levels.

> Optimizations aren't supposed to change the meaning of your code.

And they generally won't. The trouble is when the meaning of your code wasn't well defined to begin with (or you thought it meant something it didn't).


> GCC's man page says nothing about the safety of these levels.

Well, it used to.


Imagine you and the compiler are parties to a contract. Your obligation is to not let UB happen. The compiler's obligation is to not change the meaning of your code. Insisting that the compiler should uphold its end of the deal when you don't uphold yours is like saying if you sign a contract to buy widgets, you should have to pay even if the vendor fails to deliver them.


This is the best expression of the idea I have heard!

"Do what I mean!" "Well, if your code does UB then it doesn't mean anything"

In fact, the behaviour of code is only limited to defined behaviours... if you stray... then it's undefined.


> Your obligation is to not let UB happen.

Sorry, no can do. If that's the contract then C is a completely useless language.


> Sorry, no can do. If that's the contract then C is a completely useless language.

That is the contract. Your inference may be correct, but it does no good to insist that C should behave differently than it does. There are better options.


Nope, that's not the contract.


I have always been told that, if a program contains UB, then the computer is justified in doing anything at all. This would naturally imply I am required to avoid any UB.

Do you disagree with this?


> I have always been told that, if a program contains UB, then the computer is justified in doing anything at all

You have been told wrong.

That's the point.

1. The C standard actually contains language that says what actions are permitted. These do not include "anything at all".

Permissible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner characteristic of the environment (with or without the issuance of a diagnostic message), to terminating a translation or execution (with the issuance of a diagnostic message).

This was initially a binding part of the standard. Later versions of the standard still include this language, but made it non-binding. Which certain people interpreted as "the standard now says the exact opposite of what is literally in the text of the standard".

2. The C standard is and always was a compromise document, not a complete definition, in order to make it possible for the standard to be applicable to a wide variety of compilers on a very wide variety of different kinds of machines. It is thus by necessity and intention incomplete. Hence IDB and UB. That is, being compliant with the C standard is not sufficient to be useful implementation of C, something that is (or at least was) stated explicitly in the standard. This language might have been purged from newer versions of the standard, but it was there originally.


Why do you think "unpredictable results" doesn't mean "anything at all", especially when the sentence before that one, that you conveniently didn't quote, says UB is behavior "for which the Standard imposes no requirement"? And that's what it's said all the way since C89.


> Why do you think "unpredictable results" doesn't mean "anything at all"

Why do you think it does??

If I roll dice, the exact number that will come up is unpredictable.

However, the result will not be pink elephants parachuting through the roof.

Once again, the standard then goes on to clarify what it means by "unpredictable":

Permissible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner characteristic of the environment (with or without the issuance of a diagnostic message), to terminating a translation or execution (with the issuance of a diagnostic message).

If that isn't clear enough, there is also the rationale:

http://port70.net/~nsz/c/c89/rationale/a.html#1-5

"The terms unspecified behavior, undefined behavior, and implementation-defined behavior are used to categorize the result of writing programs whose properties the Standard does not, or cannot, completely describe."

Hmm...how can that be?

Let's dig further:

"Undefined behavior gives the implementor license not to catch certain program errors that are difficult to diagnose. It also identifies areas of possible conforming language extension: the implementor may augment the language by providing a definition of the officially undefined behavior. "

Why doesn't this say "UB gives the implementor license to do anything at all"? Maybe because that wasn't the intention behind or purpose of UB?

Let's dig further to "Compliance"

"The three-fold definition of compliance is used to broaden the population of conforming programs and distinguish between conforming programs using a single implementation and portable conforming programs. "

Hmm...according to the "standard purists", there is only one level of conformance: full conformance to the standard, everything else means the compiler can do anything it wants. Strangely, the standard itself disagrees.

Let's look some more:

"Existing code is important, existing implementations are not. A large body of C code exists of considerable commercial value. Every attempt has been made to ensure that the bulk of this code will be acceptable to any implementation conforming to the Standard. The Committee did not want to force most programmers to modify their C programs just to have them accepted by a conforming translator. "

Again, the exact opposite of what the "standard extremists" claim.

There are those who poo-pooh the idea that C is a "portable assembler". Here is what the rationale document says:

"C code can be non-portable. Although it strove to give programmers the opportunity to write truly portable programs, the Committee did not want to force programmers into writing portably, to preclude the use of C as a ``high-level assembler'': the ability to write machine-specific code is one of the strengths of C. It is this principle which largely motivates drawing the distinction between strictly conforming program and conforming program"

So the C standards committee explicitly sought C to be usable as a "high-level assembler". Who would have thunk? Apart from everybody who actually knows about the spirit of C. What is nonsense is this "spirit"? Well the rational says this:

"The Committee kept as a major goal to preserve the traditional spirit of C. There are many facets of the spirit of C, but the essence is a community sentiment of the underlying principles upon which the C language is based. Some of the facets of the spirit of C can be summarized in phrases like

- Trust the programmer.

- Don't prevent the programmer from doing what needs to be done.

- Keep the language small and simple.

- Provide only one way to do an operation.

- Make it fast, even if it is not guaranteed to be portable.

The last proverb needs a little explanation. The potential for efficient code generation is one of the most important strengths of C. To help ensure that no code explosion occurs for what appears to be a very simple operation, many operations are defined to be how the target machine's hardware does it rather than by a general abstract rule. "

Hmm...


Again, UB is defined as behavior "for which the Standard imposes no requirement". No requirement. If your interpretation of the other parts of the standard were correct and there were requirements for what UB could be, then the standard would be self-contradictory.


There's no contract. Nobody ever shared a piece of paper saying "your software must not use UB".

The closest thing to a "contract" a language has is training material and reference books. And guess what, nearly all of C material mostly ignore UB, because enumerating it all is completely impractical even for reference books.

Most of the C software in use on the wild was created before the hyper-optimization trend of C compilers, when your program did mostly what you said, and if what you said is undefined, it would still behave in some way that is close to a literal reading of the code. For most of the C software that exists, the modern treatment of UB is a huge post-facto unilateral contract change.


The C specification is the contract between you and the compiler.


Yep. The creators of the standard spoke to this:

"The three-fold definition of compliance is used to broaden the population of conforming programs and distinguish between conforming programs using a single implementation and portable conforming programs.

A strictly conforming program is another term for a maximally portable program. The goal is to give the programmer a fighting chance to make powerful C programs that are also highly portable, without demeaning perfectly useful C programs that happen not to be portable. Thus the adverb strictly.

By defining conforming implementations in terms of the programs they accept, the Standard leaves open the door for a broad class of extensions as part of a conforming implementation. By defining both conforming hosted and conforming freestanding implementations, the Standard recognizes the use of C to write such programs as operating systems and ROM-based applications, as well as more conventional hosted applications. Beyond this two-level scheme, no additional subsetting is defined for C, since the Committee felt strongly that too many levels dilutes the effectiveness of a standard.

Conforming program is thus the most tolerant of all categories, since only one conforming implementation need accept a program to rule it conforming. The primary limitation on this license is §2.1.1.3. "

It turns out it's not an all-or-nothing thing. Not being "strictly conforming" is not a "breach of the contract" by the developer, and thus license for the compiler to do anything they want.

See also:

"Existing code is important, existing implementations are not. A large body of C code exists of considerable commercial value. Every attempt has been made to ensure that the bulk of this code will be acceptable to any implementation conforming to the Standard. The Committee did not want to force most programmers to modify their C programs just to have them accepted by a conforming translator. "

http://port70.net/~nsz/c/c89/rationale/a.html


It's not a contract if you can't expect one of the parties to have read it.

Compiler developers do not need a contract with the users. No compiler team works with that impression. No compiler user expects one either.

But if you want to use it to justify some impractical behavior, well, both your justification is a complete fabrication, and justifying it is completely meaningless. It doesn't become practical because you have some sociological theory.


You seem to be using the legal definition of 'contract'.

We're using the computer science definition, which is simply "assuming it's correctly implemented, if you feed this program Foo, it'll do Bar".


You haven’t read the C standard? It’s not that long and fairly understandable.

Plus you can go around telling people true things like “malloc() can’t be implemented in standards-compliant C” and they’ll get mad at you.


Which compiler team doesn't take the C standard seriously?

And the C standard is by definition a contract between the implementation and its users.


If you actually read the standard, you will find out that this turns out not to be the case.

The C standard is (or maybe was, quite possible that the compiler mafia removed that bit as well) very clear that standards compliance is not sufficient for a compiler to be "fit for purpose".


So the contract includes a "this contract does not guarantee that the product is fit for purpose" clause. So what? Many contracts do that; i.e. it's a fairly typical trait of contracts.


But they really meant it.

See also the different conformance levels and the rationale given for them.

http://port70.net/~nsz/c/c89/rationale/a.html


I have read the standard, and I don't believe your comment to be true. Can you point out the exact section that backs up what you're saying? (And if you say the "compiler mafia" did remove it, then can you point to an older version that says it?)


Alas, I can no longer find it. :-(

However, the rationale is quite clear on most of this.

http://port70.net/~nsz/c/c89/rationale/a.html


I didn't say it's the contract; I said it's a contract.


That's absolutely the contract compiler implementors are following. If you don't want to be a part of that contract you need to implement your own compiler from the ground up with optimisations based on your interpretation of UB.


I didn't sign a contract though, the compilers just do dumb things and I am at their mercy. If I were negotiating a contract then I wouldn't allow all the cases of UB.


Writing a C compiler that does a trivial translation to ASM is not very hard, in fact there are already quite a few. I don't understand why people don't use that if that's what they want.


Ironically, it's because the code they generate is too slow.


"I didn't sign a contract though, Linux just does dumb things and I am at its mercy."

The license is still the closest thing you've got to a contract with the makers of Linux.

Same with a language specification.


I recall reading recently that gcc moved to making overflow undefined at -O0

