I think you're replying to a strawman. Here's the full quote:
> The excuse for not taking responsibility is that there are "language standards" saying that these bugs should be blamed on millions of programmers writing code that bumps into "undefined behavior", rather than being blamed on the much smaller group of compiler writers subsequently changing how this code behaves. These "language standards" are written by the compiler writers.
> Evidently the compiler writers find it more important to continue developing "optimizations" than to have computer systems functioning as expected. Developing "optimizations" seems to be a very large part of what compiler writers are paid to do.
The argument is that the compiler writers themselves decide what is and isn't undefined, and that they write those standards in a way that grants themselves latitude for further optimizations. Those optimizations then break previously working code.
The compiler writers could instead choose to prioritize backwards compatibility, but they don't. Further, these optimizations don't meaningfully improve the performance of real-world code, so the trade-off of breaking working programs isn't even worth it.
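To make the breakage concrete, here's the classic kind of example people point to (mine, not from the quoted post). This overflow guard behaved predictably on two's-complement hardware for decades, but because signed overflow is undefined, gcc and clang at -O2 may assume it can't happen and delete the check:

    int will_overflow(int x)
    {
        /* Looks like a wraparound test, and it was one on older
           compilers. But signed overflow is undefined behavior, so
           the optimizer may assume x + 1 never wraps and fold this
           whole expression to 0, silently removing the guard. */
        return x + 1 < x;
    }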
Perhaps the solution is also to rein in the language standard to support stricter use cases. For example, what if there were a constant-time { ... } block, in the same way you have extern "C" { ... }? Not only would it still allow optimizations outside the block, it would also force the compiler to guarantee that the code inside is always constant-time (as a security check performed by the compiler).
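To sketch what that might protect (the constant_time keyword is hypothetical; everything else is ordinary C), take a branch-free comparison of secret data, which today relies entirely on the optimizer's goodwill:

    #include <stddef.h>

    /* Hypothetical: under the proposal, this loop would be wrapped in
       constant_time { ... } and the compiler would be obliged to keep
       it free of secret-dependent branches, instead of, say,
       rewriting it into an early-exit loop. */
    int ct_compare(const unsigned char *a, const unsigned char *b, size_t n)
    {
        unsigned char diff = 0;
        for (size_t i = 0; i < n; i++)
            diff |= a[i] ^ b[i];   /* branch-free accumulation */
        return diff != 0;          /* 0 if equal, 1 otherwise */
    }

The block would act as a contract: the compiler either emits constant-time code for it or refuses to compile.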
That's the argument you need to rebut.