> You can optimise, as long as you keep the semantics of the code.
In order to do this, we need to define the semantics. Actually doing that is a much harder exercise than you might think, and largely amounts to specifying a PDP-11 emulator, because modern machines simply do not behave like the machine model that C exposes. We don't even have a flat buffer of memory to address.
Things get way, way worse when you start considering nontraditional architectures, where suddenly the only way to match the proposed semantics is to introduce a huge amount of software overhead. For example, we might decide to define signed integer overflow as two's-complement wraparound, the behaviour you get on x86. Now, when compiling for a weird architecture whose arithmetic does not naturally wrap (say, one that saturates or traps), you need to introduce software checks at every arithmetic operation to match the desired semantics.
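To make that concrete, here is a minimal sketch of the kind of per-operation helpers a compiler might have to emit on such a target (the function names are hypothetical; this is illustrative, not any compiler's actual lowering):

```c
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

/* Overflow test using only well-defined signed arithmetic: the sort of
 * check that would have to wrap every signed add if the chosen
 * semantics required detecting overflow in software. */
static bool add_overflows_i32(int32_t a, int32_t b)
{
    return (b > 0 && a > INT32_MAX - b) ||
           (b < 0 && a < INT32_MIN - b);
}

/* Wraparound emulated in software: unsigned arithmetic is defined to
 * wrap modulo 2^32 in C, so it reproduces two's-complement behaviour
 * even if the target's native signed add saturates or traps instead. */
static int32_t wrapping_add_i32(int32_t a, int32_t b)
{
    uint32_t sum = (uint32_t)a + (uint32_t)b;
    int32_t result;
    /* memcpy sidesteps the implementation-defined unsigned-to-signed
     * conversion; compilers typically fold it to a plain move. */
    memcpy(&result, &sum, sizeof result);
    return result;
}
```

On x86 each of these is one add instruction; on a target without the matching hardware behaviour, every `a + b` in the program pays for the extra comparisons or fix-ups.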