
As usual: if a compiler knows about undefined behavior, I would much rather it throw an error than optimize something the programmer didn't intend, based on the compiler out-smarting a human's ability to be specific.



I've heard this proposal thrown around a lot on HN, but how would that even work? Can you describe how the compiler would do this, what patterns it would look for, and what error messages it would produce? For example, consider this:

    int factorial(int n) {
        int r = 1;
        for (int i = 1; i <= n; i++) {
            r *= i;
        }
        return r;
    }
Since there are two instances of possible UB in the above function, what error messages would you like the compiler to produce?

First of all, the compiler can assume that the loop terminates. This is a reasonable assumption from a human perspective—the loop is almost certainly intended to run n times. However, if n = INT_MAX, then there’s UB… and there’s no real way to know that n ≠ INT_MAX.
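To make those concrete, here is the same function with the two potential UB sites annotated (assuming a 32-bit int; the thresholds shift for other widths):

    int factorial(int n) {
        int r = 1;
        for (int i = 1; i <= n; i++) { // if n == INT_MAX, i++ eventually overflows: UB
            r *= i;                    // for any n >= 13, r overflows INT_MAX: UB
        }
        return r;
    }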

You could argue that you want -fsanitize=undefined, where the compiler inserts checks at runtime.
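Conceptually, that instrumentation looks something like the sketch below. The real thing calls into the UBSan runtime; report_overflow here is a hypothetical stand-in, and __builtin_mul_overflow is a GCC/Clang builtin rather than standard C:

    #include <stdio.h>
    #include <stdlib.h>

    static void report_overflow(void) {
        fprintf(stderr, "runtime error: signed integer overflow\n");
        abort();
    }

    int factorial_checked(int n) {
        int r = 1;
        for (int i = 1; i <= n; i++) {
            // roughly what -fsanitize=undefined inserts around "r *= i"
            if (__builtin_mul_overflow(r, i, &r))
                report_overflow();
        }
        return r;
    }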

I don’t think this proposal is well formulated enough to be viable.

Or consider something like this:

    void puts_safe(const char *s) {
      puts(s == NULL ? "(null)" : s);
    }

    void puts_with_len(const char *s) {
      printf("len = %zu\n", strlen(s));
      puts_safe(s);
    }
Do you want something like,

    Warning: puts_safe replaced with puts, because s != NULL, because strlen(NULL) is undefined
I suspect there would be a mountain of false positives, and I would rather just use a static analyzer.


What I want is for the compiler to have a mode (which ideally would be enabled by default, at least with modern compile targets) that never optimizes based on undefined behavior.

It would do this by treating any UB as an error: the compiler would report where it has proven undefined behavior exists, and the programmer would then rewrite the program so that it lacks UB. This would improve both the code and the programmer's skills, rather than each trying to out-think the other.


The big gap is that it's not that compilers optimize on "having UB" but rather on "having a potential for UB".

It's not about places where "undefined behavior has been proven by the compiler", it's about all the many places in completely correct code where (as far as the compiler can reason) the code might have UB but should not if it is correct code.

The question is about what to do when the programmer has written a completely correct program that lacks UB. The current approach is to assume that it actually does lack UB and that the code path that would contain the UB is unreachable (though in a manner that the compiler can't prove). The other approach is to insert run-time checks in every such place, which is a heavy performance hit as there are many such places (e.g. every integer addition).

Requiring programmers to eliminate every place in a C program that has a potential for UB is not feasible. Cleanly proving, for every integer increment and every pointer dereference, that the input data can't cause UB is impossible in the general case; and assuming that all of these can cause UB and pushing warnings means that every second line of perfectly correct code will carry a warning that you can't fix, because the code is correct as-is.


This is untenable in the general case. For example, consider this function:

  int dereference(int *p) {
      return *p;
  }
What should the compiler do in this case? If I pass (int *)0x1283412 into this function, that's undefined behavior…should it just always warn?


The compiler isn't __changing any programmer dictated behavior__. There are no UB-sourced 'optimizations' being implicitly disabled and none have been expressly enabled. As long as valid code compiles, it's on the humans that wrote the code (or that triggered its writing in some higher-level synthesis tool).

Expanding on this: I don't want compilers _making_ optimizations based on UB. I want them educating the programmers so that the UB can be eliminated at a human level and correct outcomes are the result, optimized if it makes sense.


But this is a misunderstanding. The “compiler changing behavior” isn’t actually happening in those cases. The program’s behavior never changed. We just have a wrong understanding of the program’s behavior as humans reading the source. If we were to codify and write down all of the expectations we have about what behavior a source file represents, so that the compiler could match them exactly… that’d be the standard, and we’d be back where we started.


So, you want something like,

    WARNING: p may be NULL (undefined behavior)


That's one way, but I don't want the compiler attempting to out-think whoever wrote the code. If the code has undefined behavior, or a non-obvious transformation that would make it faster (but e.g. changes the ordering, referenced variables, etc.), that should be presented for integration into the source code, NOT silently performed for that single compilation. Never _make_ the optimization for the programmer. It's fine to treat it as an error and suggest the optimal variant instead __for human review__.

"I don't want compilers _making_ optimizations based on UB. I want them educating the programmers so that the UB can be eliminated at a human level and correct outcomes are the result, optimized if it makes sense."


What you are describing is formal methods. You can take a subset of C, use a formally-verified toolchain, and write your own formally-verified C code using proof systems.

Or perhaps what you want is a different language.

It’s just that what you’re describing—having the compiler detect UB and warn you about it, while still letting you write C code without the burden of also writing correctness proofs in some formalism—is simply not a viable option.

From a usability perspective, if you turn on aggressive static analysis / warning settings like this, what you usually end up with is programmers who just start ignoring warnings or turning them off, except on greenfield projects. C’s semantics don’t make this kind of thing easy. And if you have a greenfield project and you are fighting the language this hard, that’s precisely the time when using a different language is most viable.


That's impossible in the general case, since it's trivially equivalent to the halting problem (take the UB-detecting compiler, have it perform UB iff the program has no UB, run it on its own source code).

What we can do is a combination of formal methods, heuristics and sanitizers. Plus software engineering techniques, such as running tests to high coverage with sanitizers, fuzzing, and so on.


One challenge is that there are lots of cases where a program might have UB for a certain input, but a full-program analysis reveals that this UB is not actually possible to hit with any execution.

Should it still report UB? What if the full program analysis is beyond what the compiler is able to reason about?


So, do you mean that in the above loop example, it would require you to somehow prove that the loop always finishes, and if the programmer fails to do that, fail with a "possibly infinite loop" error? In what language/system would you require that proof?


Detecting UB is far harder than the compiler simply assuming there is no UB. That said, there are already tools to do it for some cases (but far from all). They're not often used in C/C++, perhaps because they're slow.


For your first example, I think most people want integer overflow to be unspecified behavior instead of undefined behavior: this is how most other languages treat it, it is how all C compilers behaved for a long time, and it is unreasonably difficult to actually write C code that makes sure not to cause integer overflow.

Your example is in fact a perfect illustration of why that should be the case. Consider the following code:

  int n = 0;
  scanf("%d", &n);
  factorial(n); //using your definition of factorial(int)
A standard-compliant C compiler is apparently free to optimize this program to format your hard-drive.

For your second example, I would say that, rather than omitting the NULL check, a helpful compiler could do one of two things:

1. Don't reason from strlen(s) to s != NULL, so the NULL check must be left in (unless it has even more context and can see a non-NULL assignment to s).

2. Wherever it would optimize away a comparison to NULL based on UB reasoning, insert a new comparison to NULL at the place where that reasoning was applied. For this example, optimize the program as if it looked like this:

  //first, the inlined version
  void puts_with_len(const char *s) {
    size_t s_len = strlen(s);
    printf("len = %zu\n", s_len);
    const char *puts_arg = s == NULL ? "(null)" : s;
    puts(puts_arg);
  }
  //after optimization
  void puts_with_len(const char *s) {
    size_t s_len = strlen(s);
    const char *puts_arg;
    if (s != NULL) {
      puts_arg = s;
    } else {
      puts_arg = "(null)"; //could also explicitly signal a segfault or assert here instead
    }
    printf("len = %zu\n", s_len);
    puts(puts_arg);
  }
In this case we didn't gain anything, but if puts_with_len were itself inlined the check would be moved further back, potentially replacing many NULL checks with a single one.

I would note that there is a third option here that goes in a different direction: now that compilers are extremely aggressive with NULL check removal optimizations, a lot of unsafe C functions could be made safe by manually adding the missing NULL checks to the stdlib and other major libraries. This wouldn't affect the semantics, and it wouldn't hurt performance assuming the optimizer really is doing its business.

For example, strlen() could itself raise an assertion/exit on strlen(NULL). If called from a context where it is known that s != NULL, the null check can be optimized away by the aggressive optimizer; if not, better safe than sorry.
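As a sketch of that idea (the function name here is made up; in practice the check would live inside the libc implementation itself):

  #include <assert.h>
  #include <stddef.h>
  #include <string.h>

  size_t strlen_checked(const char *s) {
    assert(s != NULL && "strlen(NULL) is undefined");
    return strlen(s);
  }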


If signed integer overflow is implementation-defined rather than undefined, then it isn’t an error, and we cannot build compiler features that warn or reject when we can prove it will occur. In your case we’ve managed to get the worst of both worlds (a buggy program and no capacity for the compiler to stop you).


For a long time in C's history, on most platforms, int overflow was actually treated as well-defined behavior, with many examples suggesting tests like x + y < x (assuming positive integers) to detect overflow.
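For reference, that classic idiom looked something like this (a sketch, with abort() standing in for whatever error handling was wanted). It was valid under wrapping semantics, but in standard C today the addition itself is UB, so the compiler may assume the check can never fire and delete it:

  #include <stdlib.h>

  int add_or_die(int x, int y) { /* x, y assumed positive */
    int sum = x + y;  /* UB on overflow */
    if (sum < x)      /* may be treated as always false */
      abort();
    return sum;
  }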

In modern C there is simply no portable way to easily check for integer overflow for 64-bit values, even though the vast majority of programs are running on a processor that defines exactly what happens with integer overflow, and even sets a flag that can be tested for in a single jump.
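The standards-compliant workaround is a pre-condition check on every operand pair, which is verbose and compiles to considerably more than a single flag test. A sketch for 64-bit addition:

  #include <stdint.h>

  int add64_would_overflow(int64_t a, int64_t b) {
    return (b > 0 && a > INT64_MAX - b) ||
           (b < 0 && a < INT64_MIN - b);
  }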

People often cite for loops over arrays as an example of places where treating integer overflow as UB helps with optimizations. This despite the fact that the recommended, standards compliant portable way to iterate over the range of indices in an array is to use a size_t index variable, which is an unsigned type.


> even though the vast majority of programs are running on a processor that defines exactly what happens with integer overflow, and even sets a flag that can be tested for in a single jump

Widths matter. Platforms that do this don't handle overflow of both 32- and 64-bit signed integers. So if you want to define signed integer overflow for all signed integer widths, then for one (or both) of these widths you need to stick in runtime checks.


It would sometimes work, but I think this (very common) comment misses the general point: there's absolutely fine, non-buggy, non-warning-worthy code that can be optimized away if the compiler relies on UB. A very simple example:

    static void do_substuff(int *some_ptr);

    void do_stuff(int *some_ptr) {
        do_substuff(some_ptr);

        *some_ptr += 2;
    }

    static void do_substuff(int *some_ptr) {
        if (some_ptr != NULL) {
            *some_ptr = 10;
        }
    }
do_stuff calls a subroutine that does a NULL pointer check, then unconditionally dereferences the same pointer.

From this the compiler, if it decides to inline that code, can decide to optimize the NULL check away since if the pointer is NULL it's guaranteed to trigger UB. The logic being "clearly the programmer assumes that the pointer can't be NULL here, so thanks to UB rules I can too".
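Concretely, after inlining, the compiler may reduce do_stuff to something like this sketch of the optimized output:

    void do_stuff(int *some_ptr) {
        /* NULL check deleted; the store of 10 and the += 2
           folded into one unconditional store */
        *some_ptr = 12;
    }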

There's nothing wrong with this code, there's no reason to emit a warning.

Distinguishing between "of course that's a reasonable optimization, that's probably what the developer intended" and "wow this compiler is so dumb, obviously I never meant for that to happen" is a very tough nut to crack.

At this point you can push the blame onto the C language not being expressive enough, in this case not giving us a way to express nullable vs. non-nullable pointers within the language, which forces the compiler to lean on these UB heuristics to optimize.


Even in your heavily contrived example, I can think of cases where the optimization isn't what the programmer wants. For instance, I might have a special handler via userfaultfd(2) that detects if I'm doing an increment of the null pointer and handles it in some special way, but can't handle just setting it to 10. For a more realistic example, I might have acquired a lock around do_substuff, and I might be okay with the thread segfaulting only if the lock wasn't held.


I don't find my example contrived at all: having functions that assume non-NULL pointers call subroutines that defend against NULL is definitely routine in my experience. Maybe "do_substuff" is called in contexts where the pointer could be NULL, or maybe its developer was paranoid.

I don't think it's reasonable to expect compilers to issue less optimized code to defend against coders doing smart things that are well beyond the scope of the standard. If you want to play fast and loose with segfault handlers then be my guest, but if you want to play with fire you better know what you're doing.

Note that the behaviors behind many of C's UBs are also effectively unportable: different systems react differently to things like NULL pointer dereferences (segfault, access to some mapped memory, access to nothing at all, or a special handler like the one you described) and signed overflow (2's-complement wraparound, saturation, trap, etc...).

I think blaming UBs is frankly the wrong target. The problem with C is that it doesn't let you express those constraints to let the compiler enforce them and let you know when you're doing something wrong. I can't tell the compiler "this pointer is not nullable" and have it keep track of it for me, warning me if I mess up. In contrast in a language like Rust I could use an Option<> type to encode this, and I get a compilation error if I have a mismatch.
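(GCC and Clang do offer a non-standard gesture in this direction, the nonnull function attribute, but its checking is shallow compared to an Option<>-style type. A sketch:)

    /* passing a literal NULL triggers -Wnonnull, but NULLs that
       flow in at runtime are not tracked */
    __attribute__((nonnull(1)))
    void do_stuff(int *some_ptr) {
        *some_ptr += 2;
    }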

That's what I want to see in a more modern C, not a babysitting compiler that emits suboptimal code because it's trying to make my coding mistakes somewhat less mistakeful.


If the programmer really needs reads and writes through particular pointers to happen in a particular sequence, because the target memory follows different rules than the language ordinarily allows the compiler to assume, then it’s the programmer’s responsibility to use the annotation provided by the language for exactly that purpose: volatile. If the compiler had to assume that every pointer needs to be treated as volatile, just about all generated code would be slowed down dramatically.
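For instance (a sketch; the register address is made up):

    #include <stdint.h>

    /* memory-mapped status register; volatile forces every read to
       actually happen, so the loop can't be collapsed to one load */
    #define STATUS_REG ((volatile uint32_t *)0x40021000u)

    void wait_until_ready(void) {
        while ((*STATUS_REG & 0x1u) == 0) {
            /* spin: each iteration re-reads the device register */
        }
    }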

As for locks, the language already has rules about which memory accesses are allowed to be reordered across lock acquire and release calls. Otherwise locks wouldn’t work correctly at all.


To be extra pedantic, that's not what volatile does. Volatile ensures that access through the variable must behave strictly according to the rules of the C abstract machine, but the definition of access is implementation defined. A compiler author could define "access" to be "reads from the variable" or "writes to the variable", or neither, and make the entire keyword useless. As long as they document that somewhere, it's compliant with the standard. You think it means "reads and writes" but it doesn't have to.

It's tempting to write a "malicious compliance C compiler" that complies with the standard but makes all the most perverse possible choices.


Your comment is technically correct, but the original description of “volatile” in the parent comment is much clearer. It may be fun to imagine what would happen if the compiler writer were malicious, but it doesn’t really help us understand C.

Just like C compilers assume that the program is correct, the writers of the C standard assume that the people reading it are not insane or malicious.


> A compiler author could define "access" to be "reads from the variable" or "writes to the variable", or neither, and make the entire keyword useless. As long as they document that somewhere, it's compliant with the standard.

Nobody is going to accept that as a compliant implementation.


The compiler very rarely knows that UB exists in the program.

To give a concrete example: the compiler sees this:

    while (<expr>) {
        if (p == 0) { do_thing(); }
        *p = 42;
    }
In this case, the compiler wants to move `do_thing()` outside the loop because it's expensive. It also knows that if `p == 0` were true on some iteration, the `*p = 42` on that same iteration would be UB.

Therefore the compiler can assume that if the loop runs at least once, then `p != 0`. At no point does the compiler have any information about whether the program in fact has UB.
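The result looks something like this sketch:

    /* after optimization: the branch is gone, because any iteration
       that reached it would also reach *p = 42 */
    while (<expr>) {
        *p = 42;
    }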



