Compiler optimisations can actually improve developer productivity, because they let developers write clean but naively inefficient code that the compiler rewrites into near-optimal form. For example, Rust's iterators are a convenient, clear interface that is generally zero-cost compared to a manual loop (sometimes even more efficient). Without optimisation, though, they would be many times slower.
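As a minimal sketch of what "zero-cost" means here (function names are mine, not from any particular codebase): with optimisations on, rustc typically inlines and fuses the iterator adapters so both versions compile to essentially the same machine code, while at -O0 the iterator version pays for every un-inlined adapter call.

```rust
// Clean iterator version: filter, map and sum are separate adapter
// calls at -O0, but fuse into a single tight loop with -O enabled.
fn sum_even_squares_iter(v: &[i64]) -> i64 {
    v.iter().filter(|&&x| x % 2 == 0).map(|x| x * x).sum()
}

// Hand-written equivalent: no faster than the iterator version
// once the optimiser has done its inlining.
fn sum_even_squares_loop(v: &[i64]) -> i64 {
    let mut total = 0;
    for &x in v {
        if x % 2 == 0 {
            total += x * x;
        }
    }
    total
}

fn main() {
    let v = [1, 2, 3, 4];
    assert_eq!(sum_even_squares_iter(&v), sum_even_squares_loop(&v));
    println!("{}", sum_even_squares_iter(&v)); // 2*2 + 4*4 = 20
}
```

The iterator version is the one you'd want to write and maintain; the point is that you only get to keep it because the optimiser closes the gap.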
A lot of the time, code compiled without optimisations (-O0) is simply unusable. In video software, for instance, an unoptimised build may fail to push frames on time and just keep dropping them. There was a post a couple of days ago about this being a problem in the games industry: a game compiled without optimisations is unplayable, while higher optimisation levels are hard to inspect in a debugger, thanks to the myth of "zero-cost abstractions" in C++.

To turn it on its head a bit: when a compiler itself isn't fast enough (read: not enough work was put into the compiler's own performance, mostly at the design level rather than the micro-optimisation level), the feedback loop gets so long that developers stop testing hypotheses. They try to do as much as possible in their heads, without verifying, just to avoid the cost of recompiling the project.

Another instance: when a photo-editing application can't quickly give me a preview of the photo I'm editing, I test fewer possible edits and probably end up with a worse photo. With websites, if an action doesn't happen within a couple of seconds of my clicking, I often assume the site is broken and just close it, even though I know plenty of crappy websites out there are just this slow. Doesn't matter; the waiting usually isn't worth my time and frustration.
One might argue that cheap overseas development labour makes developer time a commodity, but I care more about being humane towards humans than towards CPUs.