One is that the as-if rule only says that the code has to match a possible execution of the abstract machine. Say an optimization changes the address where a variable gets allocated. That's a perfectly valid optimization, even though the program can observe the change. But that would make it fail the "eval e = eval (opt e)" rule in siraben's proof. The same for picking a different order to execute the functions in "f() + g()".
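To make that concrete, here's a tiny Lean sketch (the Expr type and eval are hypothetical stand-ins, not the definitions from siraben's proof): if eval also exposes the order in which calls happen, then swapping the operands of "+" changes eval's result, so plain equality rejects a reordering that the as-if rule permits.

```lean
-- Hypothetical toy language: `call` is an opaque function call returning 0.
inductive Expr where
  | call (name : String)
  | add  (a b : Expr)

-- eval returns the value together with the observable call order.
def eval : Expr → Nat × List String
  | .call n  => (0, [n])
  | .add a b =>
      let ra := eval a
      let rb := eval b
      (ra.1 + rb.1, ra.2 ++ rb.2)

-- Swapping the operands of `+` is allowed by the as-if rule
-- (C leaves the evaluation order unspecified)...
def swapArgs : Expr → Expr
  | .add a b => .add b a
  | e        => e

-- ...but it changes eval's result, so `eval e = eval (swapArgs e)` fails.
#eval eval (Expr.add (.call "f") (.call "g"))             -- (0, ["f", "g"])
#eval eval (swapArgs (Expr.add (.call "f") (.call "g")))  -- (0, ["g", "f"])
```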
The other is optimizing around undefined behavior. The as-if rule only applies to valid inputs. Optimizing a loop on the assumption that its counter won't overflow would get rejected by that proof. So would optimizing code so that it no longer segfaults.
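The shape of the fix is to make UB explicit in the semantics and only constrain UB-free executions. A rough Lean sketch (all names here are hypothetical, none of this is from the original proof):

```lean
inductive Prog where
  | lit (n : Int)
  | add (a b : Prog)

-- `none` marks an execution with UB, here taken to be any intermediate
-- sum that leaves the range of a signed 8-bit value.
def evalUB : Prog → Option Int
  | .lit n   => some n
  | .add a b =>
      match evalUB a, evalUB b with
      | some va, some vb =>
          if va + vb < -128 ∨ 127 < va + vb then none else some (va + vb)
      | _, _ => none

-- The unconditional statement `∀ p, evalUB p = evalUB (opt p)` would
-- reject an optimizer that assumes overflow never happens. The as-if
-- style statement only constrains runs that were defined to begin with:
def AsIfCorrect (opt : Prog → Prog) : Prop :=
  ∀ p v, evalUB p = some v → evalUB (opt p) = some v
```

Segfault-removing rewrites get justified the same way: the original faulting execution has no defined result, so the implication places no constraint on what the optimized program does there.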
And depending on how exactly that eval test works, it might effectively mark every variable as volatile too.
> But that would make it fail the "eval e = eval (opt e)" rule in siraben's proof. The same for picking a different order to execute the functions in "f() + g()".
This is really a question of how the semantics are formulated. The eval function I gave doesn't model an abstract machine, so there is no notion of "variable allocation" or "final state" to check; the semantics simply doesn't account for them.
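Concretely, the pure setting looks something like this (a Lean sketch that just mirrors the shape of "eval e = eval (opt e)"; the actual definitions in the proof may differ): eval returns nothing but a value, so there is no address or final state for an optimization to disturb, and even swapping operand order is provably fine under plain equality.

```lean
inductive Arith where
  | const (n : Nat)
  | plus  (a b : Arith)

-- A pure semantics: the only observable is the resulting value.
def eval : Arith → Nat
  | .const n  => n
  | .plus a b => eval a + eval b

-- An "optimization" that swaps operand order everywhere.
def opt : Arith → Arith
  | .const n  => .const n
  | .plus a b => .plus (opt b) (opt a)

-- With nothing but values in the semantics, strict equality is exactly
-- the right correctness statement, and the reordering goes through.
theorem opt_correct (e : Arith) : eval (opt e) = eval e := by
  induction e with
  | const n => simp [opt, eval]
  | plus a b iha ihb => simp [opt, eval, iha, ihb, Nat.add_comm]
```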
To scale it to a more realistic model with nondeterminism, heaps and so on, the semantics needs to be changed to a relational one. For instance, eval would now be a relation between two states of the machine, and a proof of correctness would be something like [0], which takes into account all possible states of the heap.
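I can't reproduce [0] here, but the rough shape of such a statement is below (a hypothetical sketch, with the command language and its step relation left as parameters): correctness becomes a refinement between relations, quantified over every initial heap, rather than an equation on values.

```lean
abbrev Addr := Nat
abbrev Heap := Addr → Option Nat

-- `Step c h h'` is read as "running c can take heap h to heap h'";
-- nondeterminism shows up as several possible h' for the same h.
-- Correctness of an optimizer: every behaviour of the optimized program
-- is a behaviour the original program was already allowed to have.
def Refines (Cmd : Type) (Step : Cmd → Heap → Heap → Prop)
    (opt : Cmd → Cmd) : Prop :=
  ∀ (c : Cmd) (h h' : Heap), Step (opt c) h h' → Step c h h'
```

Stated as refinement rather than equality, the optimizer is free to commit to one allocation address or one evaluation order out of the several the abstract machine allows, which addresses the objection upthread.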
Equality would no longer be used to relate two "equivalent" programs; instead, some other equivalence relation with the properties one cares about would take its place. For instance, two programs could be heap-equivalent if they have exactly the same effect on the heap, or UB-equivalent if they can exhibit UB at the "same" places (again, "same" under another relation).
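As a sketch of what that looks like (again hypothetical, with the state and step relation abstracted away): heap-equivalence is just a relation defined in terms of the semantics, and it is that relation, not `=`, that the correctness theorem would preserve.

```lean
-- `State` stands in for whatever the machine state is (heap, registers, ...),
-- and `Step p s s'` for "p can take state s to state s'".
def HeapEquiv (Cmd State : Type) (Step : Cmd → State → State → Prop)
    (p q : Cmd) : Prop :=
  ∀ s s', Step p s s' ↔ Step q s s'
```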