
Ralf Jung has a blog post looking at some of the claims in this paper [0]. Some hopefully representative quotes:

> The paper makes many good points, but I think the author is throwing out the baby with the bathwater by concluding that we should entirely get rid of this kind of Undefined Behavior. The point of this blog post is to argue that we do need UB by showing that even some of the most basic optimizations that all compilers perform require this far-reaching notion of Undefined Behavior.

<snip>

> I honestly think trying to write a highly optimizing compiler based on a different interpretation of UB would be a worthwhile experiment. We sorely lack data on how big the performance gain of exploiting UB actually is. However, I strongly doubt that the result would even come close to the most widely used compilers today—and programmers that can accept such a big performance hit would probably not use C to begin with. Certainly, any proposal for requiring compilers to curtail their exploitation of UB must come with evidence that this would even be possible while keeping C a viable language for performance-sensitive code.

> To conclude, I fully agree with Yodaiken that C has a problem, and that reliably writing C has become incredibly hard since undefined behavior is so difficult to avoid. It is certainly worth reducing the amount of things that can cause UB in C, and developing practical tools to detect more advanced kinds of UB such as strict aliasing violations.

<snip>

> However, I do not think this problem can be solved with a platform-specific interpretation of UB. That would declare all but the most basic C compilers as non-compliant. We need to find some middle ground that actually permits compilers to meaningfully optimize the code, while also enabling programmers to actually write standards-compliant programs.

[0]: https://www.ralfj.de/blog/2021/11/24/ub-necessary.html





I had the same basic thought while reading this: the only alternative here is to interpret this sort of undefined behavior in a "different" way (such as the way K&R originally interpreted it) and leave it up to the programmer to intuit what other assumptions the compiler is making on their behalf. Like... I can assume that the compiler will interpret:

    if (i++ < i)
as

    mov ax, i
    inc ax
    cmp ax, i
    jge else
... but that's still me making a (reasonable?) assumption. The C spec essentially just says "don't program that way".


Odd intuition, since you never store i back after incrementing. Also, I believe i++ might increment after the comparison rather than before, vs. ++i incrementing first


Oh wow, I totally missed the i++ < i. Don't mind me


++ is itself a store.


You know what I mean.


I actually don't, but maybe I just lack enough background knowledge on the discussion to understand your intent


The article talks about a similar code snippet being optimized out by the compiler because i++ can never be less than i originally was (unless you take into account the actual behavior of computers).
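For anyone who hasn't seen it happen, here's a minimal sketch of that kind of check (using i + 1 instead of i++ to sidestep the separate sequencing problem; the function name is made up):

  /* the programmer intends a wrap-around check, but signed overflow
     is UB, so the compiler may assume i + 1 > i always holds and
     fold the whole expression to 0 */
  int will_wrap(int i) {
      return i + 1 < i;
  }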


I think they're saying that the assembly translation doesn't store back `i`, whereas the C version does, so it's not straightforward to assume that the assembly compiled from the C won't do that.


I really wish we got

cc: warning: UB line: 123 file: foo.c


How many warnings do you want for this small function?

  void oh_the_humanity(int *ptr, int val) {
    *ptr = val + 1;
  }
Off the top of my head:

* UB: ptr may be pointing to a float variable. (It's not illegal to convert a float* to an int*; it's only UB when you actually dereference it with the wrong type.)

* UB: val + 1 may overflow.

* UB: potential data race on writing *ptr.

* UB: ptr may be a one-past-the-end-of-the-array pointer, which can be validly constructed, but may not be dereferenced.

* UB: ptr may be pointing to an object whose lifetime has expired.

* UB: ptr may be uninitialized.

* UB: val may be uninitialized.

As you can see, UB is intensely specific to the actual data values; it's not really possible to catch even a large fraction of UB statically without severe false positive rates.
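To make the first item on that list concrete, here's a hedged sketch of a call site that trips it (variable names made up) -- and note that nothing in the body of oh_the_humanity() reveals the problem:

  float f = 1.0f;
  int *p = (int *)&f;      /* the conversion itself is allowed */
  oh_the_humanity(p, 41);  /* UB: stores an int through memory whose
                              effective type is float */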


Yeah, I know, I get it; this is me being wishful. But more seriously, I wish compilers could at least emit a warning when they optimize something after UB:

  % cat foo.c
  #include <stdlib.h>
   
  int
  foo(int *bar)
  {
   int baz = *bar;
  
   if (bar == NULL) exit(2);
  
   return (baz);
  }
  % cc -O3 -Wall -Wextra -c foo.c
  % objdump -dr foo.o            
  
  foo.o: file format Mach-O 64-bit x86-64
  
  
  Disassembly of section __TEXT,__text:
  
  0000000000000000 _foo:
         0: 55                            pushq %rbp
         1: 48 89 e5                      movq %rsp, %rbp
         4: 8b 07                         movl (%rdi), %eax
         6: 5d                            popq %rbp
         7: c3                            retq
  % cc -O0 -Wall -Wextra -c foo.c
  % objdump -dr foo.o            
  
  foo.o: file format Mach-O 64-bit x86-64
  
  
  Disassembly of section __TEXT,__text:
  
  0000000000000000 _foo:
         0: 55                            pushq %rbp
         1: 48 89 e5                      movq %rsp, %rbp
         4: 48 83 ec 10                   subq $16, %rsp
         8: 48 89 7d f8                   movq %rdi, -8(%rbp)
         c: 48 8b 45 f8                   movq -8(%rbp), %rax
        10: 8b 08                         movl (%rax), %ecx
        12: 89 4d f4                      movl %ecx, -12(%rbp)
        15: 48 83 7d f8 00                cmpq $0, -8(%rbp)
        1a: 0f 85 0a 00 00 00             jne 10 <_foo+0x2a>
        20: bf 02 00 00 00                movl $2, %edi
        25: e8 00 00 00 00                callq 0 <_foo+0x2a>
    0000000000000026:  X86_64_RELOC_BRANCH _exit
        2a: 8b 45 f4                      movl -12(%rbp), %eax
        2d: 48 83 c4 10                   addq $16, %rsp
        31: 5d                            popq %rbp
        32: c3                            retq
  % 
That's very similar to something that bit me in embedded, except it was with a pointer to a structure. The compiler realizes I've dereferenced NULL, and since that's UB anyway there's no need to do the NULL check later, and it merrily scribbles over the exception vectors or whatever.


That's a nice example. It'd definitely be nice to have a warning for this one.

Fwiw GCC does have a related warning flag (-Wnull-dereference) but I'm not sure it's made exactly for this. I believe it works based on functions being annotated for possibly returning NULL, e.g. malloc. It's also not enabled by -Wall or -W because apparently there were too many false positives: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96554

I imagine patches would be welcome. I'm guessing there are more people who like to wish for more compiler features than there are people who like to develop compilers :)


edit: Thanks, I just checked and the warning doesn't work

https://godbolt.org/z/4TP1hfx4j

But your hint found -fno-delete-null-pointer-checks which does the trick

https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html

And -fno-delete-null-pointer-checks made it into LLVM too. It's good to know, but a little late for when I needed it, cheers :)

  % cc -O3 -fno-delete-null-pointer-checks -Wall -Wextra -c foo.c
  % objdump -dr foo.o                                            
  
  foo.o: file format Mach-O 64-bit x86-64
  
  
  Disassembly of section __TEXT,__text:
  
  0000000000000000 _foo:
         0: 55                            pushq %rbp
         1: 48 89 e5                      movq %rsp, %rbp
         4: 48 85 ff                      testq %rdi, %rdi
         7: 74 04                         je 4 <_foo+0xd>
         9: 8b 07                         movl (%rdi), %eax
         b: 5d                            popq %rbp
         c: c3                            retq
         d: bf 02 00 00 00                movl $2, %edi
        12: e8 00 00 00 00                callq 0 <_foo+0x17>
    0000000000000013:  X86_64_RELOC_BRANCH _exit


If you want a general approach you can turn on ubsan in trap-only mode and see what traps have ended up in your output.
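If I'm remembering the flags right (worth double-checking against your compiler's docs; clang spelling shown, gcc calls the trap flag -fsanitize-undefined-trap-on-error), that's roughly:

  % cc -O2 -fsanitize=undefined -fsanitize-trap=undefined -c foo.c
  % objdump -dr foo.o    # the UB checks show up as trap instructions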


But such code might be generated by macros (or some code generator), in which case silent elimination of unnecessary code is expected and wanted behavior.


In this particular case the code is wrong and can be fixed easily by swapping the assignment and the comparison.

Likewise, code generators shouldn’t be generating this faulty code.

Raising a compiler error here is the only right thing to do.

There are of course more ambiguous examples, though obvious examples like the one above are sadly way too common.


Why can't we say that the original code is wrong then? The whole point of having something be UB rather than implementation defined is because the language committee believes that it represents a bug in your program.


(Or templates, in the C++ case. Think of all the inlined iterator functions...)


Well if we're not super worried about implementation difficulty right now:

* 1 and 4-7: Don't worry about where the pointer goes, as long as you treat it like an int in this function. If there are going to be warnings about improper pointers, they should be at the call sites.

* 2 If overflow will either trap, wrap, or behave like a bignum, then it's not the dangerous kind of UB, so no warning by default. Consider extending C so the coder can more easily control integer overflow.

* 3 Anything could race. Out of scope, don't worry about it.


You can't detect most UB at compile time. LLVM has a system to detect it at runtime. There is a significant performance penalty, but it can be useful during testing.

See https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html


It’s not actually that significant compared to something like asan. You can ship software with ubsan enabled if you want to.


Compilers will routinely warn if they can detect UB at compile time.

The problem is that except in a few trivial cases it is impossible to detect UB at compile time. Even whole program static analysis can only catch a small subset.


Yeah, it'd be nice if we could solve the halting problem.


>> We sorely lack data on how big the performance gain of exploiting UB actually is. However, I strongly doubt that the result would even come close to the most widely used compilers today—and programmers that can accept such a big performance hit would probably not use C to begin with. Certainly, any proposal for requiring compilers to curtail their exploitation of UB must come with evidence that this would even be possible while keeping C a viable language for performance-sensitive code.

There actually is data [1], and it goes against intuition: it shows that UB makes code slower.

It turns out that when programmers are faced with the possibility of UB, they (quite rightly) try to work around it. (In my case, I implemented signed two's-complement arithmetic entirely with unsigned types. I also implemented my own array types, including bounds checks when indexing the arrays.) These workarounds make their code slower, in general.
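For instance, the signed-arithmetic workaround looks roughly like this (a minimal sketch, not my actual code):

  #include <stdint.h>

  /* wrapping signed addition done in unsigned arithmetic: unsigned
     overflow is defined to wrap, and converting the result back to
     signed is implementation-defined rather than undefined */
  int32_t add_wrap(int32_t a, int32_t b) {
      return (int32_t)((uint32_t)a + (uint32_t)b);
  }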

When you think about it that way, it makes sense because programmers almost never want UB in their software, so on average, the software that people care about will work around UB and become slower.

UB in C was defensible when platforms were not homogeneous and when compiler writers did not abuse the spirit of it. But nowadays, most of it is indefensible because platforms are mostly homogeneous, and compiler writers have been abusing UB for "performance."

We saw the same thing happen with speculative execution in chips. For years, chip manufacturers got away with lots of tricks to increase performance. Then Spectre and Meltdown were discovered. As a result, a lot of software had to be changed, which resulted in the software slowing down.

(Yes, there are now mitigations in chips, but I think those mitigations will be successfully attacked too. I think that it will continue that way until most or all speculative execution is removed.)

Likewise, with compilers exploiting UB against the original spirit of UB, which was just to make C easy to port to any architecture, we are probably going to have a reckoning about it in the same way we did Spectre and Meltdown. In a way, we already do, but it's spread out like a thousand paper cuts. Maybe that means that it will stay invisible enough that programmers never wake up; I hope not.

tl;dr: compilers exploiting UB actually slows down good code in much the same way that Spectre and Meltdown do.

[1]: https://www.complang.tuwien.ac.at/kps2015/proceedings/KPS_20...


That data is bullshit. It measures -fno-strict-overflow -fno-strict-aliasing. I in fact agree flipping defaults on these UBs is reasonable, but that is very far from "portable assembler" or UB-free C.

THE primary UB of C is out-of-bounds write. Any proposal for UB-free C should address how to compile out-of-bounds write.

If you insist that out-of-bounds writes should compile in the "portable assembler" way and reliably corrupt memory, instead of allowing compilers to assume they don't happen, then compilers can't allocate local variables in registers. If a local is on the stack, an out-of-bounds write may reliably corrupt the stack, so how can it be in a register?
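Here's a minimal sketch of what I mean (log_index is a hypothetical function in another translation unit):

  void log_index(int i);   /* hypothetical, defined elsewhere */

  int sum(const int *a, int n) {
      int total = 0;                /* normally lives in a register */
      for (int i = 0; i < n; i++) {
          log_index(i);             /* if an out-of-bounds write inside this
                                       call had to reliably hit total's stack
                                       slot, total would have to be spilled
                                       and reloaded around every call */
          total += a[i];
      }
      return total;
  }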

"Portable assembler" people invariably say "that's not what I meant!" when compiler people raise this issue, but the question is "then what do you mean?" And I am yet to see any good answer.


Could you explain your position here more fully? I'm a C programmer mostly in the "portable assembler" camp, and I don't see why insisting on out-of-bounds writes being performed prevents compilers from keeping local variables in registers. I think the difference might be your use of the word "reliably".

I don't think anyone in my camp is insisting that the results be reliable in the sense you seem to take for granted. What we want is for the compiler to attempt to do what it's instructed to do, without eliding code based on counterfactual reasoning that assumes the absence of undefined behavior. In the absence of an explicit read that is being ignored, we're generally fine with cached values of local variables being out of date.

Maybe you could give an example that makes your point more clearly, and we can see if we actually disagree?



Unfortunately, these don't help much.

I thought about responding to jcranmer's post instead of yours, but wasn't sure how to approach it. It's a good comment, and I appreciated his attempt, but I feel like he's thoroughly missing the point of the complaints about UB. The complaint isn't that too much UB exists in C, but that compiler writers seem to use the presence of UB as an excuse for being allowed to break code that has worked historically. And the complaint isn't just that the code is broken, but that no apparent care is taken to avoid doing so despite the small amount of gain. It's a worldview mismatch, and I don't know how to bridge it.

Your comment seems about the same as the one I responded to. You seem to assume that people who complain about UB in C would have a problem with keeping local variables in registers, but I've never seen anyone actually make this complaint. Take for example the Arxiv paper we are discussing: he doesn't bring this up. This makes me suspect that your mental model of the people who are complaining about UB in C is probably flawed. I understand the technical issue you are gesturing toward, I just don't see it as being something that refutes any of the issues brought up in the Arxiv paper.

My hope was that a concrete example might help to clarify, but I do realize this might not be the right forum for that.


There's not a bright line between optimization passes being aware of UB in reasonable vs "exploitative" ways. The principled way to think about optimization is that an optimization pass can assume certain invariants about the code but must preserve other invariants through the transformation as long as the assumptions hold. I think basically any invariant you assume in a memory-unsafe language like C could be invalidated by real code.

A lot of the things people complain about are cases where the compiler got better at reasoning through valid transformations to the code, not cases where the compiler started assuming new invariants. E.g. I'm sure that even very old compilers have had optimization passes that remove redundant NULL checks that logically depend on dereferencing NULL pointers being undefined. But these probably were more ad-hoc and could only reason about relatively simple cases and constrained scopes. Another example is integer overflow. I'd bet a lot of older compilers have loop optimizations that somehow depend on the loop counter not overflowing, or pointer addresses not wrapping around.

I think it's perfectly reasonable to say that C's definition of undefined behaviour is too expansive and too difficult to avoid in real code.

But I think that a lot of the complaints end up being equivalent to "don't depend on UB in ways that I notice in my code" or "don't depend on UB in ways that compilers from the 90s didn't". That's not something you can practically apply or verify when building a compiler because there isn't a bright line between assuming an invariant in reasonable vs unreasonable ways, unless you can define a narrow invariant that distinguishes them.


No, I don't assume people are against register allocation, but any concrete proposal I have seen kind of implies such a conclusion. I am trying to understand what people actually want, since it seems clearly different from what people say they want.

Okay let's discuss a concrete example.

    *x = 12345678;
    f();
    return *x; // can you copy propagate 12345678 to here?
f() does this:

    for (int *p = 0; p < (int *)MEMSIZE; p++)
        if (*p == 12345678)
            *p = 12345679;
That is, f scans the memory for 12345678 and replaces all instances with 12345679. There is no doubt this actually works that way in assembly. Things like cheat engines do this! C compilers assume this doesn't happen, because it is UB.

Hence, a portable-assembly C compiler can't omit any load. Now I understand there is a minority of people who will answer "that's what I want!", but like register allocation, I think people generally want this to optimize. But that necessarily implies memory search-and-replace can't compile in a portable-assembly manner.


I can't really speak to the "portable assembler" point of view here, but if I was trying to make UB less dangerous I would say that the code had better return either 12345678 or 12345679, as long as no other memory addresses have 12345678 stored in them. Or it could trap.


> I can't really speak to the "portable assembler" point of view here, but if I was trying to make UB less dangerous I would say that the code had better return either 12345678 or 12345679

If 12345678 is acceptable to you then the language specification is already doing what you want. The alternative is to require arbitrary havocs on every memory address upon any function call to a different translation unit. Nightmare.

> Or it could trap.

UBSan exists and works very well. But introducing runtime checks for all of this stuff is not acceptable in the C or C++ communities, outside of very small niches regarding cryptography, because of the incredible overhead required. Imagine if your hardware doesn't support signed integer overflow detection. Now suddenly every single arithmetic operation is followed by a branch to check for overflow in software. Slow.
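For instance, a software check per addition looks something like this (a sketch using the GCC/clang __builtin_add_overflow builtin; everything else is made up):

  #include <stdlib.h>

  int checked_add(int a, int b) {
      int r;
      /* returns true (nonzero) if the mathematically correct result
         does not fit in r */
      if (__builtin_add_overflow(a, b, &r))
          abort();   /* or trap, as ubsan's trap-only mode does */
      return r;
  }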


> If 12345678 is acceptable to you then the language specification is already doing what you want.

No it's not.

The compiler is allowed to look at this code and make it print 5. Or make it delete my files. This code is undefined and the compiler could do anything without violating the C standard.


> The compiler is allowed to look at this code and make it print 5. Or make it delete my files.

It is allowed to do this but it won't. "The compiler will maximally abuse me if I put any undefined behavior in my program" is not a concern that is actually based in any reality. In the above program the compiler cannot meaningfully prove that undefined behavior exists and if it could it would yell at you and raise an error rather than filling your hard drives with pictures of cats.

This meme has done so much damage to the discussion around UB. The gcc and clang maintainers aren't licking their lips just waiting to delete hard drives whenever people dereference null.

Go compile that program. You can stick it in compiler explorer. It is going to print 12345678.


> It is allowed to do this but it won't.

It is very possible for a non-malicious compiler to end up eliminating this code as dead.

That's the biggest risk. I only mentioned "delete my files" to demonstrate how big the gap in the spec is, because you were saying the spec is already "doing what I want", which happens long before we get to what compilers will or won't do.


A programmer wrote things in a specific order for a specific reason.

Let's instead assume that the variable assignments above are to some global configuration variables and then f() also references those, and the behavior of f() changes based on the previously written code.

The objections from the 'C as portable assembler' camp are:

* Re-ordering operations across context switch bounds (curly braces and function calls). -- re-ordering non-volatile stores/loads within a context is fine, and shouldn't generate warnings.

* Eliminating written instructions (not calling f()) based on optimizations. -- Modification to computed work should always generate a warning so the optimization can be applied to the source code, or bugs corrected.


> A programmer wrote things in a specific order for a specific reason.

Is it not possible that the programmer introduced a bug?

Consider the bug that caused the 911 glitch in Android phones recently. An unstable comparator was defined in a type, violating the contract that Comparable has with Java's sorting algorithms. When Java detects that this implementation violates the assumptions its sorting algorithms make, it throws an exception. Should it not do this and instead say that the programmer wrote that specific comparator on purpose and it should loop forever or produce an incorrect sort? I think most people would say "no". So why is the contract around pointer dereferencing meaningfully different?


> Modification to computed work should always generate a warning so the optimization can be applied to the source code, or bugs corrected.

This only works out in very, very limited cases. What if this opportunity only presents itself after inlining? What if it's the result of a macro? Or a C++ template?

Just because the compiler can optimize something out in one case doesn't mean you can just delete it in the code...


Globals and locals are different. All compilers will give a global a specific memory location and load and store from it. Locals by contrast can be escape analyzed.


The example didn't show where X was defined; it could be anything.


No, for the copy propagation to be valid it could not have been just anything. It had to be a local, except under some very special conditions.


How about circumstances such as opting in to a semantically new version of C?

  #ifndef __CC_VOLATILE_FUNCTIONS
  /* No volatile function support, remove code */
  #define VOLFUNC
  #else
  /* This compiler supports volatile functions. Only volatile
     functions may cause external side effects without likely bugs. */
  #define VOLFUNC volatile
  #endif
https://www.postgresql.org/docs/current/xfunc-volatility.htm...

Similarly stable functions could have results cached. You might also note that PostgreSQL assumes any undeclared function is volatile.


> any concrete proposal I have seen kind of implies such conclusion.

No it does not. In your example, I personally would prefer it did not propagate the 12345678. Good grief, I wrote the deref there.

> C compilers assume this doesn't happen, because it is UB.

Incorrectly. IMHO.

> but like register allocation,

You are silently equating int x; with a memory deref. There is no need to do this.

Anyway, here is part of the rationale for C:

"Although it strove to give programmers the opportunity to write truly portable programs, the Committee did not want to force programmers into writing portably, to preclude the use of C as a ``high-level assembler'': the ability to write machine-specific code is one of the strengths of C. It is this principle which largely motivates drawing the distinction between strictly conforming program and conforming program"

http://port70.net/~nsz/c/c89/rationale/a.html#1-5

So the idea of C as a portable assembler is not some strange idea proposed by ignorant people who don't understand C, but an idea that was fundamental to C and fundamental to the people who created the ANSI/ISO C standard.

But hey, what do they know?


Thanks for making an example!

I'm against compiling this particular example to return a constant. I presumably wrote the awkward and unnatural construction return *x because I want to force a load from x at return. If I wanted to return a constant, I'd have written it differently! I'm odd though in that I occasionally do optimizations to the level where I intentionally need to force a reload to get the assembly that I want.

Philosophically, I think our difference might be that you want to conclude that one answer to this question directly implies that the "compiler can't omit any load", while I'd probably argue that it's actually OK for the compiler to treat cases differently based on the apparent intent of the programmer. Or maybe it's OK to treat things differently if f() can be analyzed by the compiler than if it's in a shared library.

It would be interesting to see whether your prediction holds: do a majority actually want to return a constant here? My instinct is that C programmers who complain about the compiler treatment of UB behavior will agree with me, but that C++ programmers dependent on optimizations of third party templates might be more likely to agree with you.


Oh, so you are in the "that's what I want!" camp. But I am pretty sure you are in the minority, or at the very least the economic minority. The slowdown implied by this semantics is large, and easily costs millions of dollars.

> while I'd probably argue that it's actually OK for the compiler to treat cases differently based on the apparent intent of the programmer.

This is actually what I am looking for, i.e. an answer to "then what do you mean?". The standard should define how to divine the apparent intent of the programmer, so that compilers can do the divination consistently. So far, proposals have been lacking in detailed instructions for how to do this divination.


> and easily costs millions of dollars

Looks like UB bugs can cost more. It's a new age of UB sanitizers as a reaction to a clear problem with UB.


Bugs based on optimizations that compilers make based on assumptions enabled by undefined behavior (like the null check issue from 2009 in the Linux kernel) actually don't cost very much. They get a disproportionate amount of scrutiny relative to their importance.


My experience with a lot of the UB-is-bad crowd is that they don't have much of an appreciation for semantics in general. That is to say, they tend to react to particular compiler behavior, and they don't have any suggestions for how to rectify the situation in a way that preserves consistent semantics. And when you try to pin down the semantics, you usually end up with a compiler that has to "do what I mean, not what I said."

A salient example that people often try on me, that I don't find persuasive, is the null pointer check:

  void foo(int *x) {
    int val = *x;
    if (!x) return;
    /* do real stuff */
  }
"The compiler is stupid for eliminating that null check!" "So you want the code to crash because of a null pointer dereference then." "No, no, no! Do the check first, and dereference only when it's not null!" "That's not what you said..."


> "No, no, no! Do the check first, and dereference only when it's not null!"

I don't think I've heard anyone express this preference. If they did, I'd agree with you that they are being unreasonable. My impression is that practically everyone on the "portable assembler" team would think it's perfectly reasonable to attempt the read, take the SIGSEGV if x is NULL, and crash if it's not handled. Most would also be happy skipping the read if the value can be shown to be unused. None would be happy with skipping the conditional return.

Where it gets thornier is when there is "time travel" involved. What if the unprotected read occurs later, and instead of just returning on NULL, we want to log the error and fall through:

  void foo(int *x) {
    if (!x) /* log error */;
    int val = *x;
    return;
  }
Is it reasonable to have the later unconditional read of x cause the compiler to skip the earlier check of whether x is NULL? In the absence of an exit() or return in the error handler, I think it would be legal for the compiler to skip both the error logging and the unused load and silently return, but I don't want the compiler to do this. I want it to log the error I asked it to (and then optionally crash) so I can identify the problem. Alternatively, I'd probably be happy with some compile time warning that tells me what I did wrong.


How do you feel about:

  void foo1(int *p) {
    *p = 7;
    printf("%d\n", *p);
    free(p);
  }
May the compiler replace that with puts("7") and free()? Recall that free() is a no-op when the pointer is NULL.

Are you arguing that each function may reduce pointer accesses down to one but not down to zero? What about

  int foo2() {
    int *p = malloc(400);
    *p = 0;
    /* ... code here was inlined then optimized away ... */
    int ret = 7;
    free(p);
    return ret;
  }
? May we remove '*p = 0;', whether we remove the malloc+free or not?

Generally speaking the compiler tries not to reason about the whole function but just to look at the smallest possible collection of instructions, like how add(x, x) can be pattern matched to mul(x, 2) and so on. Having to reduce to one but not zero memory accesses per function is not easily compatible with that model. We would have to model branches that make the access conditional, the length of accesses may differ (what if 'p' is cast to char* and loaded), read vs. write, multiple accesses with different alignment, and maybe other things I haven't considered.

Both gcc and clang provide -fsanitize=null which checks all pointer accesses for null-ness before performing them. These can be surprising: your code (libraries you didn't write, headers you included) may be dereferencing NULL and relying on the optimizer to remove the invalid access. IMO there should be a "pointer is null-checked after use" warning; it's a clear example where the programmer wrote something other than what they intended.


Quick responses before I go to bed:

I think compilers should emit code that makes writes to memory when the program asks them to, even if the pointer is then freed. Not doing so too easily leads to security issues. If the pointer turns out to be NULL at runtime, then let it crash. Using 'volatile' often works as a workaround, though.

I have no strong opinion on substituting puts() for printf(). I think incorporating the standard library into the spec was unhelpful, but isn't a major issue either way. It occasionally makes tracing slightly more difficult, but presumably helps with performance enough to offset this, and it's never bitten me as badly as the UB optimizations have.


The existence of volatile is a strong argument that it was never the intention that C should map every pointer dereference at the source level to loads and stores at the asm level.


Not at all.

It was a workaround (unreliable at that, IIRC) for compilers getting more aggressive in ignoring the intentions of C, including the early standards.

"The Committee kept as a major goal to preserve the traditional spirit of C. There are many facets of the spirit of C, but the essence is a community sentiment of the underlying principles upon which the C language is based. Some of the facets of the spirit of C can be summarized in phrases like

- Trust the programmer.

- Don't prevent the programmer from doing what needs to be done. ..."

TRUST THE PROGRAMMER.

http://port70.net/~nsz/c/c89/rationale/a.html#1-5


> It was a workaround (unreliable at that, IIRC) for compilers getting more aggressive in ignoring the intentions of C, including the early standards.

But volatile was in C89? So how can it be a response to compilers "ignoring the intentions of C" in the early standards?

If anything, an example in the C89 standard makes it very explicit that an implementation may do exactly what gpderetta said (emphasis added):

> An implementation might define a one-to-one correspondence between abstract and actual semantics: at every sequence point, the values of the actual objects would agree with those specified by the abstract semantics. The keyword volatile would then be redundant.

> Alternatively, an implementation might perform various optimizations within each translation unit, such that the actual semantics would agree with the abstract semantics only when making function calls across translation unit boundaries. In such an implementation, at the time of each function entry and function return where the calling function and the called function are in different translation units, the values of all externally linked objects and of all objects accessible via pointers therein would agree with the abstract semantics. Furthermore, at the time of each such function entry the values of the parameters of the called function and of all objects accessible via pointers therein would agree with the abstract semantics. In this type of implementation, objects referred to by interrupt service routines activated by the signal function would require explicit specification of volatile storage, as well as other implementation-defined restrictions.

[0]: https://port70.net/~nsz/c/c89/c89-draft.html


Going from trust the programmer to requiring volatile semantics for each access is quite a stretch. I would say you are part of an extremely small minority.


Don't compilers already have ways to mark variables and dereferences in a way that says 'I really want access to this value to happen'?

They are free to optimize away access to any result that is not used based on code elimination, and these annotations limit what can be eliminated.

However, basing code elimination on UB, as pointed out earlier, would eliminate essentially all code if you were rigorous enough, because you basically cannot eliminate UB from C code, which is not in any way useful.


> Don't compilers already have ways to mark variables and dereferences in a way to say 'I really want access to this value happen'?

The standard defines it, it's `volatile` I believe. But it does not help with the examples above as far as I understand (removing the log, removing the early return, time-travel...).


I believe it completely solves the question

? May we remove '*p = 0;', whether we remove the malloc+free or not?

Sure, it does not solve the question of when arbitrarily removing NULL pointer checks is OK.

It is true that when the compiler is inlining code or expanding a macro, it may find a NULL check that is spurious in environments that do not map page 0, based on the observation that the pointer was dereferenced previously.

And this assumption is incorrect in environments that do map page 0, causing wrong code generation.


On a lot of systems a load from address zero doesn't segfault; I'm fine with the cpu loading it. I'm disappointed that the new compiler removed the check someone wrote a dozen years ago to prevent the cpu from overwriting something important, though.


The null pointer need not map to address zero, exactly to cater to this sort of corner case.


In the case where loads from address zero are legal, the dereference-means-non-null-pointer assumption is disabled.



That case wasn't one of the cases where null pointers are valid memory, judging from the commit message. (There was a case where gcc had a bug where it didn't disable that check properly, but this doesn't appear to be that case.)


Actually, yes, I do want that code to crash if x is NULL.


Then you should logically be OK with deleting the null pointer check, as that check is unreachable.


Are you familiar with this classic example? http://blog.llvm.org/2011/05/what-every-c-programmer-should-...

The problem (if I understand it right) is that "Redundant Null Check Elimination" might be run first, and will get rid of the safety return. But then the "Dead Code Elimination" can be run, which gets rid of the unused read, and thus removes the crash. Which means that rather than being logically equivalent, you can end up with a situation where /* do real stuff */ (aka /* launch missiles */) can be run despite the programmer's clear intentions.
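Roughly, as I understand it (my own annotations and made-up function names, not actual compiler output):

  void foo(int *x) {
    int val = *x;       /* pass 1: if x were NULL this would be UB,
                           so assume x != NULL ... */
    if (!x) return;     /* ... which makes this check dead; remove it */
    /* do real stuff */
  }

  /* what's left after redundant-null-check elimination: */
  void foo_after_pass1(int *x) {
    int val = *x;       /* val is never used ... */
    /* do real stuff */
  }

  /* ... so dead-code elimination removes the load too, and with it
     the only thing that would have crashed on NULL: */
  void foo_after_pass2(int *x) {
    /* do real stuff */
  }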


Right, each optimization is fine on its own but the combination is dangerous.


Not really. My understanding of the opinion (which I don't share, FWIW, and you probably know this) is that the null dereference should not be deleted, and it should be marked as an undeleteable trap similar to what an out-of-bounds access might be in Rust. That is, not unreachable in the sense of "code can never reach here", but unreachable in the sense that if it is hit then no other code executes.


The compiler can't know if page 0 is mapped.


Nope.


I would expect the code to crash when passed a null pointer, absolutely! And the if statement is there to let me recover after using "continue" in the debugger. And if I run on an ISA without protected memory, that load will not actually crash, and I'm OK with that. That's a level of UB differing-behavior (more like ID, really) that's reasonable.

I know of no experienced C programmer who would expect the compiler to re-order the statements. That sounds very much like a strawman argument to me!

I'd also be OK with a compiler that emitted a warning: "This comparison looks dumb because the rules say it should do nothing." Emitting that warning is helpful to me, the programmer, to understand where I may have made a mistake. Silently hiding the mistake and cutting out parts of code I wrote, is not generally helpful, even if there exist some benchmark where some inner loop will gain 0.01% of performance if you do so.

After all: the end goal is to produce and operate programs that run reliably and robustly with good performance, at minimum programmer cost. Saying that the possible performance improvements to code that nobody will run because it's buggy absolutely trump every other concern in software development would be a statement that I don't think reflects the needs of the software development profession.


> ...compiler writers seem to use the presence of UB as an excuse for being allowed to break code that has worked historically.

I may be dense, but I just don't understand this. Programmer compiles old C with a new compiler with certain optimizations set to on, and then is disappointed when the new compiler optimizes UB into some unsafe UB goofiness. I think expecting different behavior re: UB is folly. The language is explicit that this is the greyest of grey areas.

An aside -- what is so funny to me, not re: you, but re: some C programmers and Rust, is I've heard, just this week, 1) "Why do you need Rust? Unsafe memory behavior -- that's actually the programmer's fault", 2) "Rust doesn't have a spec, a language must have a spec, a spec is the only thing between us and anarchy" (yet C has a spec and this is the suck), and 3) "Rust is too fast moving to use for anything because I can't compile something I wrote 3 months ago on a new compiler and have it work" (which seems, mostly, apocryphal.)

> It's a worldview mismatch, and I don't know how to bridge it.

I made the aside above only because -- maybe C and its values are the problem.


The concept of "friendly C" is not new, and this is one of the straw men that it knocked down, because it is actually possible to define "what I mean."

It's totally OK to say that an out-of-bounds write may or may not be detectable by reading any particular other local or global variable or heap value. That doesn't in turn translate to compilers being allowed to turn a bug in a SPECint kernel into an empty infinite loop (a case that happened!)


So what is the definition? That's the entire issue. If the definition is -fno-strict-overflow -fno-strict-aliasing, we are in violent agreement. We should consider making it standard. If the definition is for C to be free of UB, we are in violent disagreement, that would completely ruin C's performance.


Not surprising that is the only way it managed to win the performance game against safer systems programming languages.

"Oh, it was quite a while ago. I kind of stopped when C came out. That was a big blow. We were making so much good progress on optimizations and transformations. We were getting rid of just one nice problem after another. When C came out, at one of the SIGPLAN compiler conferences, there was a debate between Steve Johnson from Bell Labs, who was supporting C, and one of our people, Bill Harrison, who was working on a project that I had at that time supporting automatic optimization...The nubbin of the debate was Steve's defense of not having to build optimizers anymore because the programmer would take care of it. That it was really a programmer's issue.... Seibel: Do you think C is a reasonable language if they had restricted its use to operating-system kernels? Allen: Oh, yeah. That would have been fine. And, in fact, you need to have something like that, something where experts can really fine-tune without big bottlenecks because those are key problems to solve. By 1960, we had a long list of amazing languages: Lisp, APL, Fortran, COBOL, Algol 60. These are higher-level than C. We have seriously regressed, since C developed. C has destroyed our ability to advance the state of the art in automatic optimization, automatic parallelization, automatic mapping of a high-level language to the machine. This is one of the reasons compilers are ... basically not taught much anymore in the colleges and universities."

-- Fran Allen interview, Excerpted from: Peter Seibel. Coders at Work: Reflections on the Craft of Programming

So it was time to embrace UB as a means to recover ground, and here we are, back at hardware memory tagging and sanitizers as the ultimate mitigations for a patient that will never recover from its bad habits.


> and reliably corrupt memory

Who said memory corruption had to occur "reliably"??

"Permissible undefined behavior ranges from ignoring the situation completely with unpredictable results"

> compilers can't allocate local variables in register.

Why not?

> If it's on stack, out-of-bounds write may reliably corrupt stack, so how can it be on register?

"Permissible undefined behavior ranges from ignoring the situation completely with unpredictable results"

> And I am yet to see any good answer.

What would qualify an answer as "good" IYHO?


>> and reliably corrupt memory

> Who said memory corruption had to occur "reliably"??

Usually when I write code that corrupts memory, it reliably corrupts memory in an unreliable way.


What is that word salad supposed to mean?


Consistent inconsistency or inconsistent consistency. Similar, but slightly different.


I want a safe C. If that means I have to deal with fewer available optimizations, I'm all for that! That's why I wrote two's-complement arithmetic with unsigned types and why I actually have bounds checks on my array accesses in my code.

That means I would gladly take a compiler that could not allocate local variables in registers.

Or even better, just do a bounds check. The standard does not preclude storing a raw pointer and a length together. Why couldn't the compiler do something like that and do a bounds check?

(Yes, the compiler would have to use its own version of libc to make that happen, but it could.)
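Something like this minimal sketch is what I have in mind (type and function names made up):

  #include <stddef.h>
  #include <stdlib.h>

  typedef struct {
      int    *ptr;
      size_t  len;
  } int_span;           /* a "fat" pointer: address plus length */

  static inline int span_get(int_span s, size_t i) {
      if (i >= s.len)
          abort();      /* bounds check instead of UB */
      return s.ptr[i];
  }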

In other words, I want correct code first, performant code second. I'll take the performance hit for correct code.

People claim that exploiting UB gives 1-2% speedup and that that's important. What they conveniently leave out is that you only get that speedup when your code has UB, which means that you only get that speedup when your code has a bug in it.

Fast wrong code is bad. Very bad. Correct code, even if slower, is an absolute must.


> People claim that exploiting UB gives 1-2% speedup and that that's important. What they conveniently leave out is that you only get that speedup when your code has UB, which means that you only get that speedup when your code has a bug in it.

Assuming there is no UB and optimizing under that assumption gives a speedup. You get the speedup whether that assumption is wrong or not. If that assumption is wrong, your program might also blow up in hilarious ways. But in no way does making the assumption that your code has no undefined behavior rely on your code having undefined behavior.


> Assuming there is no UB and optimizing under that assumption gives a speedup.

Sure, but to make sure you do not have any UB, at least for things like signed arithmetic, you basically have to either give up signed arithmetic, which is what I've done by using only unsigned, or ensure your arithmetic is always within range, which requires formal methods. Otherwise, you have UB, and that 1-2% speedup is because of that UB. If you do what I do, you get no speedup. If you use formal methods to ensure ranges, you do, but at an enormous cost.

> But in no way does making the assumption that your code has no undefined behavior rely on your code having undefined behavior.

But it does. Because the optimizations that assume UB only kick in when UB could be present. And if UB could be present, it most likely is.

Once again, those optimizations will never kick in on my code that uses purely unsigned arithmetic. So yes, those optimizations do require UB.


Is it formal methods to conclude that the vast majority of arithmetic, which for the software I typically write consists of iterators and buffer offsets dealing with fixed size buffers, can never overflow an int? Is it formal methods when I conclude that the 12-bit output from an ADC cannot overflow a 32-bit int when squared? Is it formal methods to conclude that a system with 256 kilobytes of RAM is not going to have more than 2^31 objects in memory? I guess I do formal methods then, at an enormous cost. Sometimes I have runtime checks with early bail outs that allow code later down the stream to be optimized.


There are always exceptions, of course.

But then again, there's also the Ariane 5.


> I would gladly take a compiler that could not allocate local variables in registers

That sounds rather extreme. I imagine it would be ruinous to performance. To my knowledge the various safe alternatives to C have no trouble with register allocation. Safe Rust and Ada SPARK, for instance, or even Java.

> The standard does not preclude storing a raw pointer and a length together. Why couldn't the compiler do something like that and do a bounds check?

That technique is called fat pointers. It's been well researched. Walter Bright, who just posted a comment elsewhere in this thread, has a blog post on it. [0] I imagine ABI incompatibility is one of the main reasons it hasn't caught on.

C++ compilers do something similar, storing array lengths in an address like myArray[-1]. They need to store array lengths at runtime because of the delete[] operator (unless they can optimise it away of course). C++ still allows all sorts of pointer arithmetic though, so it wouldn't be easy for a compiler to offer Java-like guarantees against out-of-bounds access. Doing so takes a sophisticated tool like Valgrind rather than just another flag to gcc.

[0] https://digitalmars.com/articles/C-biggest-mistake.html


Why not compile at -O0? That gives you everything that you want today, including no register allocation.


-O0 doesn't change what the compiler code generation is allowed to do around UB. Optimizing tends to expose UB by producing undesired operation, but the optimizer is only allowed to make transformations that result in the code still complying with the standard. So code generation without the optimizer could still produce undesired operation, and turning the optimizer off is no guarantee that it'll do what you want.


If you want code to be run line by line as written in the source file, I think you want -Og. O0 means optimize compilation speed.

Wouldn’t recommend relying on either of them for making your build correct. It breeds the false accusation that it’s the optimizations that break your code, when the code is in fact already broken and might fail for other reasons.


At least for GCC/Clang this isn't what O0 means. Excerpt from the GCC manual:

> Most optimizations are completely disabled at -O0 or if an -O level is not set on the command line, even if individual optimization flags are specified.

And

> Optimize debugging experience. -Og should be the optimization level of choice for the standard edit-compile-debug cycle, offering a reasonable level of optimization while maintaining fast compilation and a good debugging experience. It is a better choice than -O0 for producing debuggable code because some compiler passes that collect debug information are disabled at -O0.

(Og isn't really implemented in clang yet, but the mindset is the same)


Yes, this is a consistent position, but bounds checking all indexing and making pointers larger by including a length will cause a slowdown much larger than 1-2%. My estimate is at least 10% slowdown. So citing 1-2% slowdown is misleading.


I agree 1-2% might be misleading, but Dr. John Regehr puts overflow checking overhead at 5%. [1]

I'll take 5% or even 10% for correctness because, again, fast and wrong is very bad.

[1]: https://blog.regehr.org/archives/1154



> the original spirit of UB, which was just to make C easy to port to any architecture

I thought the original spirit of undefined behavior was 'we have competing compiler implementations and don't have the political will to pick one'.


I don't think compiler implementations are responsible for the standard's refusal to endorse two's complement (which is the root cause of signed overflow originally being UB, if I understand correctly).


I think those are one and the same; I was just more polite about it, I guess. I think yours is closer to the truth than mine.


I pretty strongly disagree with these quotes you have here. You don't need UB to do optimization, in fact because of UB in C many forms of optimization are impossible in C because of aliasing uncertainty. If things are defined to not alias then even though things are fully defined there is a lot more optimization that can be performed by not reloading variables from memory and instead using local cached-in-register copies of variables. This is why naive Rust can in some cases be faster than naive C (without use of the restrict keyword) and this will only improve with time as compilers better support this.
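The per-parameter escape hatch C does have is restrict; here's a minimal sketch (made-up function) of the kind of loop where the aliasing uncertainty bites:

  /* without restrict on dst, the compiler must assume that writing
     dst[i] might modify *k and reload *k on every iteration; with
     restrict it can keep *k in a register for the whole loop */
  void scale(float *restrict dst, const float *src,
             const float *k, int n) {
      for (int i = 0; i < n; i++)
          dst[i] = src[i] * *k;
  }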


> in fact because of UB in C many forms of optimization are impossible in C because of aliasing uncertainty.

This doesn't seem right? Specific types of aliasing make optimization more difficult/impossible because they are specifically not UB. Rust's aliasing optimizations can (theoretically?) occur specifically because same-type aliasing is UB in Rust. I'm not sure how "UB prevents optimization" falls out of that?


> If things are defined to not alias

... What do you call it then when you put aliasing pointers into pointer variables "defined not to alias"? Can you explain how this reduces UB?


I think they're trying to say that if your language prevents aliasing (i.e. pointers by definition do not alias and it is impossible to make them alias), then the compiler doesn't need to worry about it. C doesn't prevent aliasing, therefore the compiler has to worry about every case of aliasing that isn't undefined.

But it's not the UB here that prevents optimization, it's precisely the fact that aliasing is (sometimes) defined. If it were always UB, the compiler wouldn't ever have to worry about it... but that'd require some interesting changes to the language to keep it usable. Definitely not backwards compatible changes.


> pointers by definition do not alias and it is impossible to make them alias

This is equivalent to saying that pointer arithmetic is disallowed. Pointers are by their nature in the C virtual machine offsets into a linear memory space, so for any two pointers, x and y, there exists a c such that (ptr_t)x+c == (ptr_t)y, and thus there can always be aliasing.


> Pointers are by their nature in the C virtual machine offsets into a linear memory space

Historically, many platforms - such as Multics or Windows 3.x - didn’t have a linear memory space, they had some kind of memory segmentation. The industry has largely moved away from that towards the flat address space model. Go back to the 1980s, it was still a much bigger thing, and people used C on those platforms, and the standard was written to support them. The actual detailed control of memory segmentation is inherently non-portable so cannot be addressed by the standard, but the standard defines pointer arithmetic in such a way to support those platforms - pointer arithmetic on unrelated pointers is undefined, because if the pointers belong to different memory segments the results can be meaningless and useless.


Even in cases of segmented memory architectures there is still a requirement that a void pointer be able to be cast (reversibly) into an integral type (7.18.1.4 in C99, 3.3.4 in C90, the first ISO standard).


This is not true in either of those C versions. C99§7.18.1.4 describes (u)intptr_t as optional, which (per §7.18) means that <stdint.h> need not provide those typedefs if the (compiler) implementation doesn't provide an integer type that allows reversible casting from a void pointer. Similarly, it's not clear in C90§3.3.4 that the implementation has to implement these casts in a reversible manner, although that is the obvious way of implementing it.

That being said, I can't think of an implementation that didn't support this, even if they did have to pack segment and offset information into a larger integer.


It wouldn’t work on capability-based architectures where pointer validity is enforced by tag bits. Cast the pointer to an integer, lose the tag; cast it back, tag is missing and pointer is invalid (except for privileged system software which has the authority to enable the tag bit on an arbitrary pointer.)

Do such architectures still exist? Do/did they support C? Well, 128-bit MI pointers on IBM i (fka AS/400) are like this - a hardware tag bit protects them against forgery - and ILE C lets you manipulate such pointers (it calls them “system pointers”, _SYSPTR), so that would be a real world example of a pointer in C which can be cast to an integer but cannot be cast back. (IBM i also has 64-bit pointers which aren’t capabilities and hence aren’t tag-protected and can be cast to/from integers - but they don’t point into the main operating system address space, which is a single-level store single address space shared by all non-Unix processes, they only point into per-process private address spaces, so-called “teraspaces”.)

I think some UB in C is motivated by allowing C to be used on these kinds of architectures, even if they are now exceptionally rare. When C was initially being standardised in the 1980s, many people thought these kinds of architectures were the future, I think they were surprised by the fact they’ve never gone mainstream


ARM Morello (on the front page earlier this week: https://news.ycombinator.com/item?id=30007474) is a capability-based architecture, with 129-bit pointers. Compilers for it provide a uintptr_t that is appropriate, but it is far stricter about the kinds of operations that can be done in the reverse direction.


Does that matter? If you change the integer then you are not allowed to cast the altered value to a pointer.

And a compiler could enforce this if it really wanted to.

When you only have to worry about pointer arithmetic on actual pointers, it's straightforward to make sure they never alias.


Actually no, the C abstract machine is not contiguous. If x and y point inside distinct objects (recursively) not part of the same object, there is no valid c that you can add to x to reach y.

edit: allowing that would prevent even basic optimizations like register allocation.

edit: s/virtual/abstract/


The C abstract machine requires reversible transformations from pointer to integral types (7.18.1.4 in C99, 3.3.4 in C90, the first ISO standard).

Practically speaking modern devices are in fact mostly flat, so you can of course do this, although you do brush up against undefined behavior to get there.


Reversible transformations don't imply a flat address space. All it means is that there's an integral type with enough bits to record any address in them.


Pointer provenance is more complex and subtle.


What is the C virtual machine? I thought there wasn't one.


They mean the abstract machine, in terms of which the semantics are defined.


See also decades of Fortran performance dominance, only overcome by throwing a gazillion C programmers at the problem.


In theory only: those gazillion C programmers happen to misuse restrict, so in the end it mostly ends up in flames.


But doesn't Fortran get away with it exactly because argument aliasing is UB? You can use restrict in C to get Fortran semantics.
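
Something like this, for instance - a sketch with made-up names, not anyone's real code:

    /* restrict promises the compiler that dst and src never overlap,
       which is roughly the guarantee Fortran gets for free on dummy
       arguments; it can then cache loads in registers or vectorize.
       Breaking the promise is UB. */
    void scale(double *restrict dst, const double *restrict src, int n) {
        for (int i = 0; i < n; i++)
            dst[i] = 2.0 * src[i];
    }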


Actually yeah, you are right. I somehow misread it as a defense of UB because of the last sentence. Total brain-fart on my part, sorry.


I'm afraid the author, as usual, confuses the cases where UB merely produces strange results with the cases where the compiler actively and deliberately exacerbates the results. He also claims that gcc's interpretation of UB provides a lot of performance. Ironically, gcc and clang already experiment with different interpretations of UB, such as signed integer overflow and implicit initialization.


Can you elaborate on the signed integer overflow and the implicit initialization differences you're referring to?



But these are options; it's not a big deal to me that the compiler offers special options for special use cases. It's not clear to me whether you are saying that the *default* for clang and GCC differs - aren't they both using `-fno-wrapv` by default?


These options provide a different interpretation of UB for things like signed integer overflow and implicit initialization. It's in reference to Ralf's blog post:

>I honestly think trying to write a highly optimizing compiler based on a different interpretation of UB would be a worthwhile experiment. We sorely lack data on how big the performance gain of exploiting UB actually is. However, I strongly doubt that the result would even come close to the most widely used compilers today—and programmers that can accept such a big performance hit would probably not use C to begin with. Certainly, any proposal for requiring compilers to curtail their exploitation of UB must come with evidence that this would even be possible while keeping C a viable language for performance-sensitive code.

He doesn't know such interpretations are implemented by widely used C compilers.
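
For example (a sketch - the flag spellings are the ones gcc and clang currently accept):

    /* By default the compiler may fold this to 1, assuming signed
       overflow never happens; with -fwrapv overflow is defined to wrap,
       so the function must honestly return 0 for INT_MAX. Likewise
       -ftrivial-auto-var-init=zero gives automatic variables a defined
       initial value instead of leaving reads of them undefined. */
    int will_not_overflow(int x) {
        return x + 1 > x;
    }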


He doesn't really address any of the claims made, and makes some startling claims himself. For example:

> This example demonstrates that even ICC with -O1 already requires unrestricted UB.

The example demonstrates nothing of the sort. It demonstrates that ICC uses unrestricted undefined behaviour, not that this is required in any way, shape, or form. (The only way the word "requires" is reasonable here is that the behavior seen "requires" use of this kind of UB to be present. But that's something very different, and doesn't match the rest of his usage.)

> writing C has become incredibly hard since undefined behavior is so difficult to avoid

No, it has become difficult because compilers exploit UB in insane ways. The platform-specific UB that he claims is "not an option" is, incidentally, exactly how UB is defined in the standard:

> Permissible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner characteristic of the environment (with or without the issuance of a diagnostic message), to terminating a translation or execution (with the issuance of a diagnostic message).

This was made non-binding in later versions of the standard, so optimiser engineers act as if these words don't exist.

And of course there is no reason given for this interpretation being "not an option". Except for "but I wanna". Yes, it's not an option if you really want to exploit UB in the insane ways that compilers these days want to exploit it.

But that's not necessary in any way shape or form.

> That would declare all but the most basic C compilers as non-compliant.

Yes, because by any sane interpretation of the standard they are non-compliant.

On a more general note, I find this idea that things are necessary because I want to do them really bizarre.


> This was made non-binding in later versions of the standard, so optimiser engineers act as if these words don't exist.

This is reminiscent of the argument in "How One Word Broke C" [0, HN discussion at 1]. I'm not particularly convinced this argument is correct. In my opinion, it's "ignoring the situation completely" that's the phrase of interest; after all, what is assuming that UB cannot occur but "ignoring the situation completely"?

I'm a nobody, though, so take that with an appropriate grain of salt.

[0]: https://news.quelsolaar.com/2020/03/16/how-one-word-broke-c/ (currently broken; archive at https://web.archive.org/web/20210307213745/https://news.quel...

[1]: https://news.ycombinator.com/item?id=22589657


Apologies, but "ignoring the situation completely" is the exact opposite of "assuming the situation cannot occur and acting on that assumption".


I'm not sure how "acting on an assumption that the situation cannot occur" is distinguishable from "choosing to ignore situations completely whenever they come up". The former is a blanket description of how you treat individual instances.

For example:

    int f(int* i) {
        int val = *i;                /* dereference happens first: UB if i is NULL */
        if (i == NULL) { return 0; } /* the NULL check comes after the fact */
        return val;
    }
I submit that there are two situations here:

1. i is NULL. Program flow will be caught by the NULL check and the return value is 0.

2. i is not NULL. The NULL check cannot be hit, and the return value is val.

As allowed by the standard, I'll just ignore the situation with UB, leaving

> i is not NULL. The NULL check cannot be hit, and the return value is val.

Alternatively, I can assume that UB cannot occur, which eliminates option 1, leaving

> i is not NULL. The NULL check cannot be hit, and the return value is val.

You get the same result either way.
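
Concretely, either reading licenses roughly this transformed function (a sketch, not any particular compiler's actual output):

    int f(int* i) {
        /* The dereference already happened on every path, so the
           compiler may take i != NULL for granted and drop the check. */
        return *i;
    }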

And that gets to the root of the problem: What exactly does "ignoring the situation completely" mean? In particular, what is the scope of a "situation"?


Not sure what such an absurd code example is supposed to show.

However, ignoring the situation completely in this case is emitting the code as written. This is not hard at all, despite all the mental gymnastics expended to pretend that it is hard.

That is the compiler's job: emit the code that the programmer wrote. Even if that code is stupid, as in this case. It is not the compiler's job to second guess the programmer and generate the code that it believes the programmer meant to write.

In this case:

   1. Dereference i.
   2. Check i.
   3. If i is null, return 0.
   4. Return the value from step 1.
If you want, I can also write it down as assembly code.

Again, this isn't hard.

"Ignoring the situation completely" means ignoring the fact that this is UB and just carrying on.


> Not sure what such an absurd code example is supposed to show.

I had a few things in mind:

1. Highlight that it's not the wording change from "permissible" to "possible" that enables UB-based optimization - it's the interpretation of one of the behaviors the standard lists

2. It's the vagueness of "the situation" that's at the heart of the issue.

3. "Ignoring the situation completely" can produce the same result as "assuming UB never happens", contrary to your assertion (albeit subject to an interpretation that you disagree with)

(And absurd or not, similar code does show up in real life [0]. It's part of why -fno-delete-null-pointer-checks exists, after all).

> However, ignoring the situation completely in this case is emitting the code as written.

> "Ignoring the situation completely" means ignoring the fact that this is UB and just carrying on.

Wouldn't "emitting the code as written" or "ignoring the fact that this is UB and just carrying on" fall under the "to behaving during translation or program execution in a documented manner characteristic of the environment (with or without the issuance of a diagnostic message)"? If it does, why have the "ignoring the situation completely with unpredictable results" wording in the first place?

I'm not really convinced your definition of "ignoring the situation completely" is the only valid definition, either. Why wouldn't the standard's wording allow the interpretation in the example?

> That is the compiler's job: emit the code that the programmer wrote.

Why bother with optimizations, then?

[0]: https://lwn.net/Articles/342330/


>> That is the compiler's job: emit the code that the programmer wrote.

> Why bother with optimizations, then?

Why not?

If you can optimise without compromising the intended semantics of the code, go ahead. If you cannot, do not.

Note: you will have to apply judgement, because the C standard in particular happens to be incomplete.


> Why not?

Because "That is the compiler's job: emit the code that the programmer wrote", and optimizing means the compiler is allowed to emit code the programmer did not write.

> If you can optimise without compromising the intended semantics of the code, go ahead.

And how does the compiler know what the "intended semantics" of the code are, if it isn't precisely what the programmer wrote?

> you will have to apply judgement

How is that compatible with "It is not the compiler's job to second guess the programmer and generate the code that it believes the programmer meant to write"? Applying judgement to guess what the programmer intended sounds exactly like "generating the code that [the compiler] believes the programmer meant to write".


> It is not the compiler's job to second guess the programmer and generate the code that it believes the programmer meant to write.

Not if the compiler is run at some (hopefully available) "absolute vanilla", translate-this-code-literally-into-machine-language-and-do-absolutely-nothing-else settings, no. But if you add some optimise-this or optimise-that or optimise-everything switch? Then second-guessing the programmer and generating the code it believes the programmer meant to write (if only they knew all these nifty optimisation techniques) is exactly the compiler's job.

And isn't that the subject under discussion here; Undefined Behaviour and compiler optimisation?

> "Ignoring the situation completely" means ignoring the fact that this is UB and just carrying on.

Q: How?

A: In an undefined way. (Right, since this is "Undefined Behaviour"?)

So... Whatever the heck happens, happens. It's, like, not defined. If you want it defined (as "Do exactly what I wrote anyway!"), then don't be so hung up on the Undefined Behaviour bit; just switch off all the optimisation bits instead.


> Not if the compiler is run at some (hopefully available) "absolute vanilla"

Not this straw man again, please!

You can optimise, as long as you keep the semantics of the code.

When what the programmer wrote conflicts with an optimisation the compiler would like to perform:

"- Trust the programmer."

http://port70.net/~nsz/c/c89/rationale/a.html

So if the compiler wanted to put a 4 element array in registers as an optimisation, but the programmer wrote code that accesses element 6, guess what? You can't do that optimisation!

[You can also emit a warning/error if you want]

Also, if your fantastic optimisation breaks tons of existing code, it's no good.

"Existing code is important, existing implementations are not. A large body of C code exists of considerable commercial value. "

http://port70.net/~nsz/c/c89/rationale/a.html

>> "Ignoring the situation completely" means ignoring the fact that this is UB and just carrying on. >Q: How? >A: In an undefined way. (Right, since this is "Undefined Behaviour"?)

Nope. There are bounds. Again, this isn't hard, it says so right there.


> You can optimise, as long as you keep the semantics of the code.

In order to do this, we need to define the semantics. Actually doing this is a much harder exercise than you might think, and largely involves creating a PDP-11 emulator since modern machines simply do not behave like the systems that C exposes. We don't even have a flat buffer of memory to address.

Things get way way worse when you start considering nontraditional architectures and suddenly the only way to match the proposed semantics is to introduce a huge amount of software overhead. For example, we might decide to define signed integer overflow as the normal thing that exists on x86 machines. Now when compiling to a weird architecture that does not have hardware overflow detection you need to introduce software checks at every arithmetic operation so you can match the desired semantics appropriately.
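
To make the cost concrete, a compiler promising x86-style wrap-around would in effect have to guarantee something like this for every signed add - a sketch, with a made-up helper name; free on hardware that already wraps, extra work everywhere else:

    #include <stdint.h>

    /* Wrap-around signed addition spelled out in portable C: do the math
       in unsigned, where wrap-around is defined, then convert back (that
       last conversion is itself implementation-defined in C99/C11). */
    int32_t add_wrap32(int32_t a, int32_t b) {
        uint32_t r = (uint32_t)a + (uint32_t)b;
        return (int32_t)r;
    }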


Sorry, I don't understand; it seems to me that current optimizing compilers are fully compliant with the option of "ignoring the situation completely with unpredictable results". That's how UB-exploiting optimizations work: the compiler ignores the possibility that the erroneous case can ever happen and takes into account only the valid states.


Apologies, but "ignoring the situation completely with undpredictable results" is the exact opposite of "assuming the situation cannot occur and acting on that assumption".

If my program attempts to write outside the bounds of a declared array, ignoring the situation (that this is UB) is letting that write happen, and letting the chips fall where they might.

How is assuming it cannot/must not happen, and then optimising it away because it did happen "ignoring the situation"??


So a compiler that is allowed to ignore a situation would still be required to generate code for that situation? I don't even know how that would be possible.


It ignores the fact that this is UB.

It is not just "possible", but exceedingly simple to comply with what is written in the standard and generate code in that situation. For example:

Writing beyond the bounds of an array whose length is known is apparently UB.

   int a[4];
   a[2]=2;
   a[6]=10;
To generate code for the last line, use the same algorithm you used to generate code for the second line, just parameterised with 6 instead of 2.

What was impossible about that?


If each of a[0...3] is stored only inside registers, which register should the compiler pick for a[6]?


Who is forcing the compiler to store fracking arrays in registers?

Once again, “incompatible with weird optimizations I want to do” is not close to the same thing as “not possible”.

If the two are incompatible, it is obviously the weird optimization that has to go.

How do you pass the address of that array to another function?


> Who is forcing the compiler to store fracking arrays in registers?

The people who want their program to be fast. "Just do everything in main memory" means your program will be extremely slow.


1. Arrays ≠ Everything. Please stop with the straw men.

2. "Compiler Advances Double Computing Power Every 50 Years, And The Interval Keeps Growing". (i.e. even Probsting was wildly optimistic when it comes to the benefits of compiler research). The benefits of these shenanigans are much less than even the critics thought, never mind what the advocates claim. They are actually small and shrinking.

https://zeux.io/2022/01/08/on-proebstings-law/


Sorry, we are going in circles. I thought we were discussing what it means for the compiler to ignore UB.

Also calling scalar replacement of aggregates a weird optimization is very strange.
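
For reference, this is the kind of code scalar replacement of aggregates targets (a sketch, not tied to any particular compiler):

    /* The array's address never escapes, so the compiler may replace it
       with four independent scalars held in registers; no stack slot for
       "a" needs to exist at all. An out-of-bounds write like a[6] has no
       meaningful home under that transformation. */
    int sum_of_squares(void) {
        int a[4];
        for (int i = 0; i < 4; i++)
            a[i] = i * i;
        return a[0] + a[1] + a[2] + a[3];
    }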


It means ignoring the situation and emitting the code that the programmer wrote. The programmer did not write "put this array in registers". The programmer did write "access element 6 of this array".

This isn't hard.

"Keep the spirit of C. The Committee kept as a major goal to preserve the traditional spirit of C. There are many facets of the spirit of C, but the essence is a community sentiment of the underlying principles upon which the C language is based. Some of the facets of the spirit of C can be summarized in phrases like

- Trust the programmer.

- Don't prevent the programmer from doing what needs to be done.

- Keep the language small and simple.

- Provide only one way to do an operation.

- Make it fast, even if it is not guaranteed to be portable.

The last proverb needs a little explanation. The potential for efficient code generation is one of the most important strengths of C. To help ensure that no code explosion occurs for what appears to be a very simple operation, many operations are defined to be how the target machine's hardware does it rather than by a general abstract rule."

http://port70.net/~nsz/c/c89/rationale/a.html

> calling scalar replacement of aggregates a weird optimization is very strange.

No it's not, certainly for arrays. It's a little less weird for structs than it is for arrays. But the fact that you consider calling it weird "very strange" is telling, and to me the heart of this disconnect.

The Optimiser-über-alles community feels that the needs of the optimiser far outweigh the explicit instructions of the programmer. Many programmers beg to differ.


At this point it is obvious that you wouldn't accept anything except a trivial translation to ASM. That's a perfectly reasonable thing to want, but you'll also understand it's not what 99.99% of C and C++ programmers want; even assuming (but not conceding) that this might have been the original intent of the language, 50 years later user expectations have changed and there is no reason compiler authors should be bound by some questionable dogma.


> Apologies, but "ignoring the situation completely with undpredictable results" is the exact opposite of "assuming the situation cannot occur and acting on that assumption".

Seems to me it's not the opposite but the exact same thing.


Not sure how you can say that.

   int a[4];
   a[2]=2;
   a[6]=3;
"Ignore the situation": emit the code the code for a[6]=3; in the same way you emitted the code for a[2]=2. You've ignored the fact that this is UB.

"Assume the situation cannot occur": don't know, but according to the UB extremists the compiler can now do anything it wants to do, including not emitting any code for this fragment at all (which appears to happen) or formatting your hard disk (which doesn't, last I checked).

Assuming that a[6]=3 does not occur, despite the fact that it is written right there, would also allow putting a into registers, which "ignoring the situation" would not.


To me, the "situation" is that `a[6]` is being assigned to. So I'd make the compiler ignore the situation, and omit line 3.

Then the compiler could optimize the remaining program, possibly by putting `a` into registers.

I think you have a different view on what the "situation" is, so that your view of ignoring it is different from mine (and CRConrad's, etc).



