Wasn't that a gigantic issue with the x87 floating-point implementation? It would internally use 80-bit registers, and the result of any computation was completely at the mercy of how the compiler used those 80-bit registers, since every move to system memory would drop precision back to the 32 or 64 bits specified by the program.
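Concretely, a minimal sketch of the failure mode (not from the linked document; only meaningful when compiled for x87, e.g. gcc -m32 -mfpmath=387):

    #include <stdio.h>

    int main(void) {
        double x = 1e308;
        /* In an 80-bit x87 register, x * 10.0 (= 1e309) does not
           overflow, so dividing by 10.0 recovers 1e308.  If the
           compiler spills the intermediate to a 64-bit memory slot,
           it becomes +inf and stays +inf.  Which one you get depends
           on optimization level and register pressure, not on the
           source code. */
        double y = x * 10.0 / 10.0;
        printf("%g\n", y); /* 1e308 or inf, compiler's choice */
        return 0;
    }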
The footnote for point 4.3 in the linked document above explicitly calls out optimization modes and possible non-compliance. Yeah, it's the x87 80-bit registers all over again.
Except the compilers will not be dumb enough to use it implicitly when you don't ask for it, just like they don't automatically use FMA now (except maybe with -ffast-math).
Nope. C/C++ think it's OK to use fma when you didn't ask for it with default compiler settings. Also, with default settings, GCC is willing to replace single-precision math with double-precision math if it feels like it.
I'm not aware of any case where GCC does that, except via C++'s promotion rules (e.g. float + double -> double), which is a problem with C++, not the compiler. I write deterministic floating-point software in C++.
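For reference, the promotion in question is just the usual arithmetic conversions (C has the same rule), e.g.:

    #include <stdio.h>

    int main(void) {
        float  f = 0.1f; /* nearest float to 0.1  */
        double d = 0.1;  /* nearest double to 0.1 */
        /* Usual arithmetic conversions: f is converted to double, so
           the addition is required to happen in double precision. */
        printf("%.17g\n", f + d);                  /* ~0.20000000149011612 */
        /* Cast first if you actually want a single-precision add: */
        printf("%.17g\n", (double)(f + (float)d)); /* ~0.20000000298023224 */
        return 0;
    }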
Yes, it's not replacing 64-bit math with 80-bit math; x87 just doesn't have proper 64-bit floats with a correctly sized exponent field, and the flags here interpret float literals as long double, if I remember correctly. It's just the 80-bit x87 problem referred to earlier in the thread. The workarounds mentioned on GitHub aren't actually enough: compilers cannot do IEEE-compliant computation on x87 without relatively large performance penalties (I wrote a library that did it).
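To illustrate why the usual workaround is only partial (this is just a sketch, assuming glibc's <fpu_control.h> on x86, not my library's code):

    #include <fpu_control.h> /* glibc-specific, x86 only */

    /* Drop the x87 precision control to 53-bit significands. */
    static void set_x87_double_precision(void) {
        fpu_control_t cw;
        _FPU_GETCW(cw);
        cw = (fpu_control_t)((cw & ~_FPU_EXTENDED) | _FPU_DOUBLE);
        _FPU_SETCW(cw);
    }

    int main(void) {
        set_x87_double_precision();
        /* x87 now rounds significands to 53 bits, but the exponent
           range is still 15 bits, so results near overflow/underflow
           are still double-rounded.  Fixing those cases in software
           is where the performance penalty comes from. */
        return 0;
    }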
> Nope. C/C++ think it's OK to use fma when you didn't ask for it with default compiler settings.
No, it doesn't. You have to request #pragma STDC FP_CONTRACT ON explicitly, or use the -ffp-contract flag or its equivalent (implied by -ffast-math or -Ofast) on most compilers. icc is the only major compiler that actually defaults to fast-math flags.
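To make the contraction question concrete, a sketch (pragma support varies: clang honors it, while gcc has historically ignored it and keyed off -ffp-contract= instead; link with -lm):

    #include <math.h>
    #include <stdio.h>

    /* Contraction of a*b + c into fma(a, b, c) must be permitted: */
    #pragma STDC FP_CONTRACT OFF

    int main(void) {
        double a = 1.0 + 0x1.0p-27;          /* a*a needs more than 53 bits */
        double unfused = a * a - a * a;      /* two roundings: exactly 0.0  */
        double fused = fma(a, a, -(a * a));  /* one rounding: 2^-54, the
                                                low bits that a*a discards */
        printf("unfused=%g fused=%g\n", unfused, fused);
        return 0;
    }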
> Also, with default settings, GCC is willing to replace single-precision math with double-precision math if it feels like it.
I'm less familiar with gcc than I am with LLVM, but I strongly doubt that this is the case. C/C++ provide FLT_EVAL_METHOD, which indicates the internal precision used for arithmetic expressions (assignments and casts excluded). It is set to 2 on 32-bit x86, because x87 internally operates on all numbers in long double precision, only rounding to float/double when you explicitly store a value. On 64-bit x86, FLT_EVAL_METHOD is 0 (everything is evaluated at its own type), because SSE can operate on single- or double-precision numbers directly.
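A quick way to watch both behaviors (a sketch; compile with -m32 -mfpmath=387 and again with plain -m64):

    #include <float.h>
    #include <stdio.h>

    int main(void) {
        /* 0: each operation at its own type (SSE, 64-bit x86)
           1: float/double evaluated in double
           2: everything evaluated in long double (x87, 32-bit x86) */
        printf("FLT_EVAL_METHOD = %d\n", FLT_EVAL_METHOD);

        float x = 1e30f;
        /* Method 2: x * x (= 1e60) is held in long double, doesn't
           overflow, and the quotient comes back as ~1e30.
           Method 0: x * x overflows float and the result is +inf. */
        float y = x * x / 1e30f;
        printf("%g\n", y);
        return 0;
    }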