No. If I say (e.g.) _mm256_set_epi32(a,b,...,c) with constant arguments (which is the preferred way to make a vector constant), I expect to see 32 aligned bytes in the constant pool and a VMOVDQA in the code, not the mess of VPINSRDs I’ll get at -O0, which makes it essentially impossible to write decent vectorized code. In the same way, I don’t expect to see a MUL in the assembly when I write sizeof(int) * CHAR_BIT in the source (and IIRC I won’t see one).
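To make the shape of the complaint concrete, here’s a minimal sketch (the function name is mine, not from any real codebase): with all eight arguments constant, what I want is a 32-byte-aligned constant in .rodata plus a single VMOVDQA load; what GCC at -O0 actually gives me is the element-by-element VPINSRD construction.

    #include <immintrin.h>

    /* All-constant arguments: semantically this is just a 256-bit literal. */
    __m256i round_constant(void)
    {
        return _mm256_set_epi32(7, 6, 5, 4, 3, 2, 1, 0);
    }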
(Brought to you by a two-week investigation of a mysterious literally 100× slowdown that was caused by the fact that QA always ran a debug build and those are always compiled at -O0.)
In this case, I’d expect constant folding to be the absolute minimum performed at all optimization levels. And it is, in fact, for scalar integers. For (integer) vectors it’s not, even though it matters much more there. That’s why advising cryptographers who program using vector intrinsics (aka “assembly except you get a register allocator”) to compile with GCC at -O0 is such bad advice. (Just checked MSVC and it’s better there.)
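For contrast, the scalar side of the same story, as a sketch (the assembly commentary is approximate, from memory of typical GCC x86-64 output, not pasted from a compiler):

    #include <limits.h>

    int int_bits(void)
    {
        /* Integer constant expression: even at -O0 this folds to an
         * immediate (something like "movl $32, %eax"); no MUL/IMUL. */
        return (int)(sizeof(int) * CHAR_BIT);
    }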
There are, however, even clearer cases, where Intel documents an intrinsic as producing a particular instruction, but GCC does not in fact produce that instruction from that intrinsic unless optimization is enabled. (I just don’t remember which ones, because constants in particular were so ridiculously bad in the specific case I hit.)
If you constant-fold and keep things in registers, then you generally can’t look at or change the pieces in a debugger. So everything gets written to the stack, where it’s easy to find.
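A tiny sketch of that tradeoff (the function is hypothetical; the assembly commentary is approximate):

    /* At -O0, GCC gives each local its own stack slot and reloads it before
     * every use, so "print y" and "set var y = 5" behave as expected in gdb;
     * with optimization on, y may live only in a register or be folded away. */
    int scaled(int x)
    {
        int y = x * 2;  /* -O0: x and y go through the stack                */
        return y + 1;   /* -O1+: typically a single LEA, no stack traffic   */
    }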