
Unfortunately, these don't help much.

I thought about responding to jcranmer's post instead of yours, but wasn't sure how to approach it. It's a good comment, and I appreciated his attempt, but I feel like he's thoroughly missing the point of the complaints about UB. The complaint isn't that too much UB exists in C, but that compiler writers seem to use the presence of UB as an excuse for being allowed to break code that has worked historically. And the complaint isn't just that the code is broken, but that no apparent care is taken to avoid doing so despite the small amount of gain. It's a worldview mismatch, and I don't know how to bridge it.

Your comment seems about the same as the one I responded to. You seem to assume that people who complain about UB in C would have a problem with keeping local variables in registers, but I've never seen anyone actually make this complaint. Take for example the Arxiv paper we are discussing: he doesn't bring this up. This makes me suspect that your mental model of the people who are complaining about UB in C is probably flawed. I understand the technical issue you are gesturing toward, I just don't see it as being something that refutes any of the issues brought up in the Arxiv paper.

My hope was that a concrete example might help to clarify, but I do realize this might not be the right forum for that.




There's not a bright line between optimization passes being aware of UB in reasonable vs "exploitative" ways. The principled way to think about optimization is that an optimization pass can assume certain invariants about the code but must preserve other invariants through the transformation as long as the assumptions hold. I think basically any invariant you assume in a memory-unsafe language like C could be invalidated by real code.

A lot of the things people complain about are cases where the compiler got better at reasoning through valid transformations to the code, not cases where the compiler started exploiting UB in some fundamentally new way. E.g. I'm sure that even very old compilers have had optimization passes that remove redundant NULL checks that logically depend on dereferencing NULL pointers being undefined. But these probably were more ad hoc and could only reason about relatively simple cases and constrained scopes. Another example is integer overflow: I'd bet a lot of older compilers have loop optimizations that somehow depend on the loop counter not overflowing, or on pointer addresses not wrapping around.
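For a concrete flavour of the loop case, here is a sketch (my own hypothetical example, not taken from any particular compiler) of an optimization that quietly relies on signed overflow being undefined:

    /* Because signed overflow is UB, the compiler may assume i * 4 never
       wraps, and can strength-reduce the index into a pointer that simply
       advances by 16 bytes each iteration. */
    void zero_every_fourth(int *a, int n) {
        for (int i = 0; i < n; i++)
            a[i * 4] = 0;
    }

If int overflow were defined to wrap, that strength reduction would compute the wrong addresses for large n.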

I think it's perfectly reasonable to say that C's definition of undefined behaviour is too expansive and too difficult to avoid in real code.

But I think that a lot of the complaints end up being equivalent to "don't depend on UB in ways that I notice in my code" or "don't depend on UB in ways that compilers from the 90s didn't". That's not something you can practically apply or verify when building a compiler because there isn't a bright line between assuming an invariant in reasonable vs unreasonable ways, unless you can define a narrow invariant that distinguishes them.


No, I don't assume people are against register allocation, but any concrete proposal I have seen kind of implies such a conclusion. I am trying to understand what people actually want, since it seems clearly different from what people say they want.

Okay let's discuss a concrete example.

    *x = 12345678;
    f();
    return *x; // can you copy propagate 12345678 to here?
f() does this:

    for (int *p = 0; (uintptr_t)p < MEMSIZE; p++)
        if (*p == 12345678)
            *p = 12345679;
That is, f scans memory for 12345678 and replaces all instances with 12345679. There is no doubt this actually works that way in assembly. Things like cheat engines do this! C compilers assume this doesn't happen, because it is UB.

Hence, a "portable assembly" C compiler can't omit any load. Now I understand there is a minority of people who will answer "that's what I want!", but as with register allocation, I think people generally want this to be optimized. But that necessarily implies memory search-and-replace can't be compiled in a portable-assembly manner.
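Concretely, the contested transformation would turn the snippet into something like this (a sketch of the copy propagation, assuming the compiler may ignore f()'s writes to *x):

    *x = 12345678;
    f();
    return 12345678; /* the load of *x is folded into the stored constant */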


I can't really speak to the "portable assembler" point of view here, but if I was trying to make UB less dangerous I would say that the code had better return either 12345678 or 12345679, as long as no other memory addresses have 12345678 stored in them. Or it could trap.


> I can't really speak to the "portable assembler" point of view here, but if I was trying to make UB less dangerous I would say that the code had better return either 12345678 or 12345679

If 12345678 is acceptable to you then the language specification is already doing what you want. The alternative is to require arbitrary havocs on every memory address upon any function call to a different translation unit. Nightmare.

> Or it could trap.

UBSan exists and works very well. But introducing runtime checks for all of this stuff is not acceptable in the C or C++ communities, outside of very small niches regarding cryptography, because of the incredible overhead required. Imagine if your hardware doesn't support signed integer overflow detection. Now suddenly every single arithmetic operation is followed by a branch to check for overflow in software. Slow.
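To make the cost concrete, here is a sketch of what software overflow checking amounts to; __builtin_add_overflow is a GCC/Clang builtin, and abort() here is just a stand-in for whatever the check would actually do on failure:

    #include <stdlib.h>

    int checked_add(int a, int b) {
        int r;
        if (__builtin_add_overflow(a, b, &r)) /* detects signed overflow */
            abort();                          /* stand-in for the trap    */
        return r;                             /* every add gains a branch */
    }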


> If 12345678 is acceptable to you then the language specification is already doing what you want.

No it's not.

The compiler is allowed to look at this code and make it print 5. Or make it delete my files. This code is undefined and the compiler could do anything without violating the C standard.


> The compiler is allowed to look at this code and make it print 5. Or make it delete my files.

It is allowed to do this, but it won't. "The compiler will maximally abuse me if I put any undefined behavior in my program" is not a concern that is actually based in reality. In the above program the compiler cannot meaningfully prove that undefined behavior exists, and if it could, it would yell at you and raise an error rather than filling your hard drive with pictures of cats.

This meme has done so much damage to the discussion around UB. The gcc and clang maintainers aren't licking their lips just waiting to delete hard drives whenever people dereference null.

Go compile that program. You can stick it in compiler explorer. It is going to print 12345678.


> It is allowed to do this but it won't.

It is very possible for a non-malicious compiler to end up eliminating this code as dead.

That's the biggest risk. I only mentioned "delete my files" to demonstrate how big the gap in the spec is, because you were saying the spec is already "doing what I want", which happens long before we get to what compilers will or won't do.


A programmer wrote things in a specific order for a specific reason.

Let's instead assume that the variable assignments above are to some global configuration variables, that f() also references those, and that the behavior of f() changes based on the previously written code.

The objections from the 'C as portable assembler' camp are:

* Re-ordering operations across context boundaries (curly braces and function calls). -- Re-ordering non-volatile stores/loads within a context is fine, and shouldn't generate warnings.

* Eliminating written instructions (not calling f()) based on optimizations. -- Modifications to computed work should always generate a warning, so that the optimization can be applied to the source code, or the bug corrected.


> A programmer wrote things in a specific order for a specific reason.

Is it not possible that the programmer introduced a bug?

Consider the bug that caused the 911 glitch in Android phones recently. An unstable comparator was defined in a type, violating the contract that Comparable has with Java's sorting algorithms. When Java detects that this implementation violates the assumptions its sorting algorithms make, it throws an exception. Should it not do this and instead say that the programmer wrote that specific comparator on purpose and it should loop forever or produce an incorrect sort? I think most people would say "no". So why is the contract around pointer dereferencing meaningfully different?
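C has the same kind of contract in its own standard library: qsort requires a consistent comparison function, and violating that is likewise undefined. A hypothetical sketch (my own example, not related to the Android bug):

    #include <stdlib.h>

    /* Violates qsort's contract: the ordering is not consistent, since
       repeated calls with the same arguments can disagree. */
    static int bad_compare(const void *a, const void *b) {
        (void)a; (void)b;
        return rand() % 3 - 1;
    }

    /* qsort(values, n, sizeof(int), bad_compare);
       -- undefined behavior: garbage output, out-of-bounds reads,
          or an infinite loop are all possible. */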


> Modification to computed work should always generate a warning so the optimization can be applied to the source code, or bugs corrected.

This only works out in very, very limited cases. What if this opportunity only presents itself after inlining? What if it's the result of a macro? Or a C++ template?

Just because the compiler can optimize something out in one case doesn't mean you can just delete it in the code...


Globals and locals are different. All compilers will give a global a specific memory location and load from and store to it. Locals, by contrast, can be escape-analyzed.
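A sketch of the difference (assuming the local's address never escapes):

    int g;                      /* global: has a fixed address, so reads and
                                   writes go through memory                  */

    int sum(int n) {
        int acc = 0;            /* local whose address never escapes: it can
                                   live entirely in a register               */
        for (int i = 0; i < n; i++)
            acc += g;           /* g is read from its memory location
                                   (possibly hoisted); acc need not be       */
        return acc;
    }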


The example didn't show where x was defined; it could be anything.


No, it could not have been, for the copy propagation to be valid. It had to be a local, except under some very special conditions.


How about circumstances such as opting in to a semantically new version of C?

  #ifndef __CC_VOLATILE_FUNCTIONS
  /* No volatile function support, remove code */
  #define VOLFUNC
  #else
  /* This compiler supports volatile functions. Only volatile functions may
     cause external side effects without likely bugs. */
  #define VOLFUNC volatile
  #endif
https://www.postgresql.org/docs/current/xfunc-volatility.htm...

Similarly stable functions could have results cached. You might also note that PostgreSQL assumes any undeclared function is volatile.
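For what it's worth, here's how I'd imagine the proposal being used (hypothetical syntax; 'volatile' on a function is not meaningful in standard C today):

  /* Under the proposed extension: */
  int VOLFUNC read_sensor(void); /* may have external side effects; calls
                                    must not be reordered or elided       */
  int square(int x);             /* ordinary function: results could be
                                    cached or the call optimized away     */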


> any concrete proposal I have seen kind of implies such conclusion.

No it does not. In your example, I personally would prefer it did not propagate the 12345678. Good grief, I wrote the deref there.

> C compilers assume this doesn't happen, because it is UB.

Incorrectly. IMHO.

> but like register allocation,

You are silently equating int x; with a memory deref. There is no need to do this.

Anyway, here is part of the rationale for C:

"Although it strove to give programmers the opportunity to write truly portable programs, the Committee did not want to force programmers into writing portably, to preclude the use of C as a ``high-level assembler'': the ability to write machine-specific code is one of the strengths of C. It is this principle which largely motivates drawing the distinction between strictly conforming program and conforming program"

http://port70.net/~nsz/c/c89/rationale/a.html#1-5

So the idea of C as a portable assembler is not some strange idea proposed by ignorant people who don't understand C, but an idea that was fundamental to C and fundamental to the people who created the ANSI/ISO C standard.

But hey, what do they know?


Thanks for making an example!

I'm against compiling this particular example to return a constant. I presumably wrote the awkward and unnatural construction return *x because I want to force a load from x at return. If I wanted to return a constant, I'd have written it differently! I'm odd, though, in that I occasionally do optimizations to the level where I intentionally need to force a reload to get the assembly that I want.

Philosophically, I think our difference might be that you want to conclude that one answer to this question directly implies that the "compiler can't omit any load", while I'd probably argue that it's actually OK for the compiler to treat cases differently based on the apparent intent of the programmer. Or maybe it's OK to treat things differently if f() can be analyzed by the compiler than if it's in a shared library.

It would be interesting to see whether your prediction holds: do a majority actually want to return a constant here? My instinct is that C programmers who complain about the compilers' treatment of UB will agree with me, but that C++ programmers dependent on optimizations of third-party templates might be more likely to agree with you.


Oh, so you are in the "that's what I want!" camp. But I am pretty sure you are in the minority, or at the very least the economic minority. The slowdown implied by these semantics is large, and easily costs millions of dollars.

> while I'd probably argue that it's actually OK for the compiler to treat cases differently based on the apparent intent of the programmer.

This is actually what I am looking for, i.e. an answer to "then what do you mean?". The standard should define how to divine the apparent intent of the programmer, so that compilers can do the divination consistently. So far, proposals have been lacking detailed instructions on how to do this divination.


> and easily costs millions of dollars

Looks like UB bugs can cost more. The new age of UB sanitizers is a reaction to a clear problem with UB.


Bugs caused by optimizations that compilers make based on assumptions enabled by undefined behavior (like the null-check issue from 2009 in the Linux kernel) actually don't cost very much. They get a disproportionate amount of scrutiny relative to their importance.


My experience with a lot of the UB-is-bad crowd is that they don't have much of an appreciation for semantics in general. That is to say, they tend to react to particular compiler behavior, and they don't have any suggestions for how to rectify the situation in a way that preserves consistent semantics. And when you try to pin down the semantics, you usually end up with a compiler that has to "do what I mean, not what I said."

A salient example that people often try on me, that I don't find persuasive, is the null pointer check:

  void foo(int *x) {
    int val = *x;
    if (!x) return;
    /* do real stuff */
  }
"The compiler is stupid for eliminating that null check!" "So you want the code to crash because of a null pointer dereference then." "No, no, no! Do the check first, and dereference only when it's not null!" "That's not what you said..."


> "No, no, no! Do the check first, and dereference only when it's not null!"

I don't think I've heard anyone express this preference. If they did, I'd agree with you that they are being unreasonable. My impression is that practically everyone on the "portable assembler" team would think it's perfectly reasonable to attempt the read, take the SIGSEGV if x is NULL, and crash if it's not handled. Most would also be happy skipping the read if the value can be shown to be unused. None would be happy with skipping the conditional return.

Where it gets thornier is when there is "time travel" involved. What if the unprotected read occurs later, and instead of just returning on NULL, we want to log the error and fall through:

  void foo(int *x) {
    if (!x) /* log error */;
    int val = *x;
    return;
  }
Is it reasonable to have the later unconditional read of x cause the compiler to skip the earlier check of whether x is NULL? In the absence of an exit() or return in the error handler, I think it would be legal for the compiler to skip both the error logging and the unused load and silently return, but I don't want the compiler to do this. I want it to log the error I asked it to (and then optionally crash) so I can identify the problem. Alternatively, I'd probably be happy with some compile time warning that tells me what I did wrong.
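Concretely, the outcome I'm worried about would look something like this (a sketch of one legal compilation, not a claim about what any particular compiler does today):

  void foo(int *x) {
    /* if (!x) log error;   removed: the later *x "proves" x != NULL   */
    /* int val = *x;        removed: val is never used                 */
    return;                 /* silently returns; the error never logged */
  }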


How do you feel about:

  void foo1(int *p) {
    *p = 7;
    printf("%d\n", *p);
    free(p);
  }
May the compiler replace that with puts("7") and free()? Recall that free() is a no-op when the pointer is NULL.

Are you arguing that each function may reduce pointer accesses down to one but not down to zero? What about

  int foo2() {
    int *p = malloc(400);
    *p = 0;
    /* ... code here was inlined then optimized away ... */
    int ret = 7;
    free(p);
    return ret;
  }
? May we remove '*p = 0;', whether we remove the malloc+free or not?

Generally speaking the compiler tries not to reason about the whole function but just to look at the smallest possible collection of instructions, like how add(x, x) can be pattern matched to mul(x, 2) and so on. Having to reduce to one but not zero memory accesses per function is not easily compatible with that model. We would have to model branches that make the access conditional, the length of accesses may differ (what if 'p' is cast to char* and loaded), read vs. write, multiple accesses with different alignment, and maybe other things I haven't considered.

Both gcc and clang provide -fsanitize=null, which checks all pointer accesses for null-ness before performing them. These can be surprising: your code (libraries you didn't write, headers you included) may be dereferencing NULL and relying on the optimizer to remove the invalid access. IMO there should be a "pointer is null-checked after use" warning; it's a clear example where the programmer wrote something other than what they intended.


Quick responses before I go to bed:

I think compilers should emit code that writes to memory when the program asks them to, even if the pointer is then freed. Not doing so too easily leads to security issues. If the pointer turns out to be NULL at runtime, then let it crash. Using 'volatile' often works as a workaround, though.
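The workaround I mean, sketched: cast through a volatile-qualified pointer so the store has to be emitted.

  #include <stdio.h>
  #include <stdlib.h>

  void foo1(int *p) {
    *(volatile int *)p = 7;   /* volatile access: the store must be emitted */
    printf("%d\n", *p);       /* this read may still be folded to 7         */
    free(p);
  }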

I have no strong opinion on substituting puts() for printf(). I think incorporating the standard library into the spec was unhelpful, but isn't a major issue either way. It occasionally makes tracing slightly more difficult, but presumably helps with performance enough to offset this, and it's never bitten me as badly as the UB optimizations have.


The existence of volatile is a strong argument that it was never the intention that C should map every pointer dereference at the source level to loads and stores at the asm level.


Not at all.

It was a workaround (unreliable at that, IIRC) for compilers getting more aggressive in ignoring the intentions of C, including the early standards.

"The Committee kept as a major goal to preserve the traditional spirit of C. There are many facets of the spirit of C, but the essence is a community sentiment of the underlying principles upon which the C language is based. Some of the facets of the spirit of C can be summarized in phrases like

- Trust the programmer.

- Don't prevent the programmer from doing what needs to be done. ..."

TRUST THE PROGRAMMER.

http://port70.net/~nsz/c/c89/rationale/a.html#1-5


> It was a workaround (unreliable at that, IIRC) for compilers getting more aggressive in ignoring the intentions of C, including the early standards.

But volatile was in C89? So how can it be a response to compilers "ignoring the intentions of C" in the early standards?

If anything, an example in the C89 standard makes it very explicit that an implementation may do exactly what gpderetta said (emphasis added):

> An implementation might define a one-to-one correspondence between abstract and actual semantics: at every sequence point, the values of the actual objects would agree with those specified by the abstract semantics. The keyword volatile would then be redundant.

> Alternatively, an implementation might perform various optimizations within each translation unit, such that the actual semantics would agree with the abstract semantics only when making function calls across translation unit boundaries. In such an implementation, at the time of each function entry and function return where the calling function and the called function are in different translation units, the values of all externally linked objects and of all objects accessible via pointers therein would agree with the abstract semantics. Furthermore, at the time of each such function entry the values of the parameters of the called function and of all objects accessible via pointers therein would agree with the abstract semantics. In this type of implementation, objects referred to by interrupt service routines activated by the signal function would require explicit specification of volatile storage, as well as other implementation-defined restrictions.

[0]: https://port70.net/~nsz/c/c89/c89-draft.html


Going from trust the programmer to requiring volatile semantics for each access is quite a stretch. I would say you are part of an extremely small minority.


Don't compilers already have ways to mark variables and dereferences to say 'I really want access to this value to happen'?

They are free to optimize away access to any result that is not used based on code elimination, and these annotations limit what can be eliminated.

However, basing code elimination on UB, as pointed out earlier, would eliminate essentially all code if you were rigorous enough, because you basically cannot avoid UB in C code, which is not in any way useful.


> Don't compilers already have ways to mark variables and dereferences in a way to say 'I really want access to this value happen'?

The standard defines it, it's `volatile` I believe. But it does not help with the examples above as far as I understand (removing the log, removing the early return, time-travel...).


I believe it completely solves the question

> May we remove '*p = 0;', whether we remove the malloc+free or not?

Sure, it does not solve the question of when arbitrarily removing NULL pointer checks is OK.

It is true that when the compiler is inlining code or expanding a macro, it may encounter a NULL check that is spurious in environments that do not map page 0, based on the observation that the pointer was dereferenced previously.

And this assumption is incorrect in environments that do map page 0, causing wrong code generation.


On a lot of systems a load from address zero doesn't segfault; I'm fine with the CPU loading it. I'm disappointed, though, that the new compiler removed the check someone wrote a dozen years ago to prevent the CPU from overwriting something important.


The null pointer need not map to address zero, exactly to cater to this sort of corner case.


In the case where loads from address zero are legal, the dereference-means-non-null-pointer assumption is disabled.



That case wasn't one of the cases where null pointers are valid memory, judging from the commit message. (There was a case where gcc had a bug where it didn't disable that check properly, but this doesn't appear to be that case.)


Actually, yes, I do want that code to crash if x is NULL.


Then you should logically be OK with deleting the null pointer check, as that check is unreachable.


Are you familiar with this classic example? http://blog.llvm.org/2011/05/what-every-c-programmer-should-...

The problem (if I understand it right) is that "Redundant Null Check Elimination" might be run first, and will get rid of the safety return. But then the "Dead Code Elimination" can be run, which gets rid of the unused read, and thus removes the crash. Which means that rather than being logically equivalent, you can end up with a situation where /* do real stuff */ (aka /* launch missiles */) can be run despite the programmer's clear intentions.
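Spelled out on that example (a sketch following the LLVM post's reasoning):

  /* After "Redundant Null Check Elimination" (the dereference implies
     x != NULL) followed by "Dead Code Elimination" (val is never used): */
  void foo(int *x) {
    /* int val = *x;    removed: the result is unused             */
    /* if (!x) return;  removed: "provably" never taken           */
    /* do real stuff */ /* now runs even when x is actually NULL  */
  }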


Right, each optimization is fine on its own but the combination is dangerous.


Not really. My understanding of the opinion (which I don't share, FWIW, and you probably know this) is that the null dereference should not be deleted, and it should be marked as an undeleteable trap similar to what an out-of-bounds access might be in Rust. That is, not unreachable in the sense of "code can never reach here", but unreachable in the sense that if it is hit then no other code executes.


The compiler can't know if page 0 is mapped.


Nope.


I would expect the code to crash when passed a null pointer, absolutely! And the if statement is there to let me recover after using "continue" in the debugger. And if I run on an ISA without protected memory, that load will not actually crash, and I'm OK with that. That's a level of UB differing-behavior (more like ID, really) that's reasonable.

I know of no experienced C programmer who would expect the compiler to re-order the statements. That sounds very much like a strawman argument to me!

I'd also be OK with a compiler that emitted a warning: "This comparison looks dumb because the rules say it should do nothing." Emitting that warning is helpful to me, the programmer, to understand where I may have made a mistake. Silently hiding the mistake and cutting out parts of code I wrote, is not generally helpful, even if there exist some benchmark where some inner loop will gain 0.01% of performance if you do so.

After all, the end goal is to produce and operate programs that run reliably and robustly with good performance, at minimum programmer cost. Saying that the possible performance improvements to code nobody will run (because it's buggy) absolutely trump every other concern in software development is a statement I don't think reflects the needs of the profession.


> ...compiler writers seem to use the presence of UB as an excuse for being allowed to break code that has worked historically.

I may be dense, but I just don't understand this. A programmer compiles old C with a new compiler with certain optimizations turned on, and then is disappointed when the new compiler optimizes UB into some unsafe goofiness. I think expecting different behavior re: UB is folly. The language is explicit that this is the greyest of grey areas.

An aside -- what is so funny to me, not re: you, but re: some C programmers and Rust, is I've heard, just this week, 1) "Why do you need Rust? Unsafe memory behavior -- that's actually the programmer's fault", 2) "Rust doesn't have a spec, a language must have a spec, a spec is the only thing between us and anarchy" (yet C has a spec and this is the suck), and 3) "Rust is too fast moving to use for anything because I can't compile something I wrote 3 months ago on a new compiler and have it work" (which seems, mostly, apocryphal).

> It's a worldview mismatch, and I don't know how to bridge it.

I made the aside above only because -- maybe C and its values are the problem.



