Hacker News comments

Optimizing compilers don’t work like that. Faced with a construct the standard leaves undefined, an implementation can either go beyond the standard and define the behavior, or treat it as UB and optimize on that assumption as usual.

To get some insight by analogy, consider this set of constraints (unrelated to C):

  x <= 7
  2x >= 5
  …(more with x, y, z but not more constraining x)…
When you feed this to a linear constraint solver, you may get anything from 2.5 to 7 as x. E.g. 3.1415926. Not because the solver wanted to draw some circles, but because it transformed your geometric problem into an abstract representation for its own algorithm, performed some fast operations over it and returned a result. Nobody can predict exactly how a specific solving method will behave with respect to the (underconstrained) x, given that the description above is all you have.

When you feed UB into an optimizer, you feed a bit of lava into a plastic pipe, figuratively. You’ll get anything from program #2500…0000 to program #6999…9999, where “…” is a few more thousands or millions of digits. Run some numbers from that range as an .exe and see if something absurd happens.

The nature of UB and optimizers is that you either relax UB into defined behavior and get worse efficiency, or you specify more UB and get worse programming safety. What happens in between can be perceived as completely random. And the better/faster the optimizer is, the more random the outcome will likely be.

The exaggeration that is spread by some people regarding UB is close to absurd

UB-in-code is absurd by definition, no exaggeration here.



