There is a fundamental difference of priorities between the two worlds. For most general application code, any optimization is fine as long as the output is correct. In security-critical code, information leakage through execution time and on-chip resource usage matters, and that essentially means avoiding data-dependent memory access patterns and control flow.
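To make the distinction concrete, here is a minimal sketch (my example, not from the article) of selecting between two values with and without a secret-dependent branch:

```c
#include <stdint.h>

/* Leaky: the execution path depends on the secret, so branch timing
   and the branch predictor can reveal secret_bit. */
uint32_t select_leaky(uint32_t secret_bit, uint32_t a, uint32_t b) {
    if (secret_bit)
        return a;
    return b;
}

/* Branchless: the same instructions run regardless of the secret.
   Note that plain C does not guarantee the compiler keeps it branch-free. */
uint32_t select_ct(uint32_t secret_bit, uint32_t a, uint32_t b) {
    uint32_t mask = (uint32_t)0 - (secret_bit & 1); /* all-zeros or all-ones */
    return (a & mask) | (b & ~mask);
}
```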
Then such code needs to be written in a language that actually makes the relevant timing guarantees. That language may be C with appropriate extensions, but it certainly is not C plus whining that compilers don't apply my special requirements to all code.
That argument would make more sense if such a language were widely available, but in practice today it isn't, so we live in the universe of less ideal solutions. It also doesn't really respond to DJB's point anyway: his case here is that the downstream labor cost of compiler churn exceeds the actual return in performance gains from new features, and that a change in policy could give security-related code a more predictable target without requiring a whole new language or toolchain. For what it's worth, I think the better solution will end up being something like constant-time function annotations (not stopping new compiler features), but I don't discount his view that, human nature aside, we might be better off focusing compiler development on correctness and stability.
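For illustration, an annotation along those lines might look like the sketch below; the `constant_time` attribute is hypothetical, and nothing like it ships in GCC or Clang today:

```c
/* Hypothetical: a function-level contract asking the compiler not to
   introduce secret-dependent branches or memory accesses inside,
   while leaving every other optimization enabled. */
__attribute__((constant_time))   /* invented attribute, not real */
int ct_compare(const unsigned char *a, const unsigned char *b, size_t n);
```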
> his case here is that the downstream labor cost of compiler churn exceeds the actual return in performance gains from new features
Yes, but his examples are about churn in code that makes assumptions that neither the language nor the compiler guarantees. It's not at all surprising that if your code depends on coincidental properties of your compiler, compiler upgrades might break it. You can't build your code on assumptions and then blame others when those assumptions turn out to be false. But then again, it's perhaps not too surprising that cryptographers would do this, since their entire field depends on unproven assumptions.
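A concrete instance of the kind of unguaranteed assumption at issue: the standard masking idiom is constant-time only for as long as the compiler fails to recognize it, and the usual workaround is a compiler-specific optimization barrier. A sketch, using the GCC/Clang inline-asm idiom (not portable C):

```c
#include <stdint.h>

/* Nothing stops a compiler from rewriting this as "bit ? ~0u : 0" with a
   branch; the C standard only constrains the resulting value. */
static inline uint32_t ct_mask(uint32_t bit) {
    return (uint32_t)0 - (bit & 1);
}

/* GCC/Clang-specific: the empty asm makes x opaque to the optimizer,
   so it cannot "see through" the mask and reintroduce a branch. */
static inline uint32_t value_barrier(uint32_t x) {
    __asm__("" : "+r"(x));
    return x;
}
```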
A general policy change here makes no sense because most language users do not care about constant runtime and would rather have their programs always run as fast as possible.
I think this attitude is what is driving his complaints. Most engineering work exists in the context of towering, teetering piles of legacy decisions, organizational cultures, partially specified problems, and uncertainty about the future. Put another way, "the implementation is the spec" and "everything is a remodel" are better mental models than spec-lawyering. I agree that relying on, say, the stability of the common set of compiler optimizations circa 2015 is a terrible solution, but I'm not convinced it's the wrong one in the short term. Are we really getting enough perf out of the work to justify the complexity? I don't know. It's also completely infeasible given the incentives at play: complexity and bugs are mostly externalities that, with some delay, burden users and customers.
Personally I'm grateful the cryptographers do what they do, computers would be a lot less useful without their work.
The problem is that preventing timing attacks often means you have to implement something in constant time, and most language specifications and implementations don't guarantee that any operation happens in constant time or that it won't be optimized away.
So often the only way to ensure that things like string comparison don't have data-dependent timing is to implement them in assembly, which is not great.
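For reference, the usual portable-C attempt looks something like this minimal sketch; it touches every byte with no early exit, but nothing in the standard forbids a compiler from transforming it back into one:

```c
#include <stddef.h>

/* Accumulate differences across all n bytes instead of returning at the
   first mismatch, so the loop's duration doesn't depend on the data. */
int ct_memeq(const unsigned char *a, const unsigned char *b, size_t n) {
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];
    return diff == 0;   /* 1 if equal, 0 otherwise */
}
```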
What we really need are intrinsics that are guaranteed to have the desired timing properties, and/or a way to disable optimization, or at least certain kinds of optimization, for a region of code.
Intrinsics which do the right thing seems like so obviously the correct answer to me that I've always been confused about why the discussion is always about disabling optimizations. Even in the absence of compiler optimizations (which is not even an entirely meaningful concept), writing C code which you hope the compiler will decide to translate into the exact assembly you had in mind is just a very brittle way to write software. If you need the program to have very specific behavior which the language doesn't give you the tools to express, you should be asking for those tools to be added to the language, not complaining about how your attempts at tricking the compiler into the thing you want keep breaking.
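Something like the declarations below, say; the names are invented and no compiler implements them, but the point is that the contract would cover code generation (branch-free, data-oblivious), not just the computed result:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical intrinsics with a timing contract (made-up names). */
uint32_t __builtin_ct_select_u32(uint32_t secret_bit, uint32_t a, uint32_t b);
int      __builtin_ct_memeq(const void *a, const void *b, size_t n);
```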
The article explains why it is not as simple as that, especially in the case of timing attacks. Here it's not just the end result that matters, but how it is computed. If any code can be changed into anything else that produces the same results, this becomes quite hard.
Absolutist statements such as this may give you a glowing sense of superiority and cleverness, but they contribute nothing and are not as clever as you think.
The article describes why you can’t write code which is resistant to timing attacks in portable C, but then concludes that actually the code he wrote is correct and it’s the compiler’s fault it didn’t work. It’s inconvenient that anything which cares about timing attacks cannot be written securely in C, but that doesn’t make the code any less fundamentally incorrect and broken.