It doesn't exist yet, and it's not clear it should be replicated as-is. The fast-math flag does a bunch of related things that should probably be exposed separately so it's not a footgun in several situations. I'm also partial to exposing it per-function, so the control is actually in the hands of the people writing the code, who know the context, rather than being subject to someone fiddling with compiler flags and getting incorrect code.
For this example you'd probably want -fassociative-math and not the other options that can result in incorrect code. -ffast-math was not actually used in the clang compilation, and it's possible that the -fvectorize that was used picks a sensible mix of options.
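To see concretely what -fassociative-math licenses: floating-point addition is not associative, so reassociating a sum (which is exactly what a vectorizer wants to do) can change the result. A minimal Rust illustration, using values chosen so the two groupings differ:

```rust
fn main() {
    let a = 1e16_f64;
    let b = -1e16_f64;
    let c = 1.0_f64;

    // Left-to-right: a + b cancels exactly to 0.0, then + c gives 1.0.
    let left = (a + b) + c;

    // Reassociated: b + c rounds back to -1e16 (the spacing between
    // adjacent f64 values near 1e16 is 2.0, so adding 1.0 ties to even),
    // and then a + (-1e16) gives 0.0.
    let right = a + (b + c);

    assert_eq!(left, 1.0);
    assert_eq!(right, 0.0);
    println!("left = {left}, right = {right}");
}
```

This is why reassociation is opt-in: both answers are correctly rounded evaluations of their respective expression trees, but they are not the same number.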
Floating point code is really difficult to do correctly. LLVM doesn't actually model IEEE 754 correctly yet, although hopefully the remaining issues with the constrained intrinsics will be fixed by the end of the year (even then, sNaN support is likely to still be broken for some time).
Two things make floating point more difficult than integers. First, there is an implicit dependency on the status register that greatly inhibits optimization, yet very few users actually care about that dependency. Second, there are many more properties that matter for floating point. For an integer, you essentially only care about three modes: wrapping (with an optional carry flag), saturating, and no-overflow. Exposing these exhaustively as separate types is easy. For floating point, you have orthogonal concerns: rounding mode (including dynamic), no-NaN, no-infinity, denormals (including flush-to-zero support), contraction to form FMA, reciprocal approximation, associativity, acceptable precision loss (can I use an approximate inverse sqrt?), may-trap, and exception sticky bits. Since they're orthogonal, instead of a dozen types you'd need a few thousand to cover the combinations, although many combinations are probably not very interesting.
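The integer side of this is already visible in Rust's standard library, where the three modes show up as method families (and the `Wrapping` newtype in `std::num`):

```rust
use std::num::Wrapping;

fn main() {
    let max = i8::MAX; // 127

    // Wrapping: modular arithmetic, like a CPU add that ignores overflow.
    assert_eq!(max.wrapping_add(1), -128);
    assert_eq!(Wrapping(max) + Wrapping(1), Wrapping(-128i8));

    // Saturating: clamp to the representable range instead of wrapping.
    assert_eq!(max.saturating_add(1), 127);

    // No-overflow: overflow is an error, surfaced as None.
    assert_eq!(max.checked_add(1), None);
    assert_eq!(100i8.checked_add(1), Some(101));
}
```

Three modes, three small surfaces. The point of the comment above is that the floating-point equivalent of this table would have thousands of rows.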
Technically Rust only guarantees memory safety (and only outside of `unsafe` blocks). It has many features that aid other kinds of safety - strongly encouraging you to handle Option<> and Result<> explicitly, requiring all match cases to be covered, allowing for lots of strategic immutability, etc. But it doesn't guarantee that kind of correctness.
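A small example of the distinction: the compiler forces you to consider every case, but it can't know whether your logic is right. A sketch (the `parse_port` helper is made up for illustration):

```rust
// Explicit Result handling and exhaustive matching: the compiler insists
// every case is covered, which aids correctness but doesn't guarantee it.
fn parse_port(s: &str) -> Option<u16> {
    match s.parse::<u16>() {
        Ok(p) => Some(p),
        Err(_) => None, // deleting this arm is a compile error, not a runtime surprise
    }
}

fn main() {
    assert_eq!(parse_port("8080"), Some(8080));
    assert_eq!(parse_port("not a port"), None);
    // Nothing stops this function from returning a port that is
    // *logically* wrong for your application: safety != correctness.
}
```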
That's not correct. Safe Rust is advertised as sound, and Rust defines that as "safe Rust programs do not exhibit undefined behavior". Undefined behavior is a much broader term than just memory safety, and includes things like thread safety, const safety, unwind safety, etc.
rust doesn't guarantee anything if you opt out of the guarantees. two examples that come to mind: `unsafe` blocks and unchecked accesses like `get_unchecked` (note that ordinary bounds checks stay enabled even in release mode).
fastmath is probably different anyway, as the "breaking" happens at the floating-point logic level: results become imprecise, but not exactly "wrong" in the undefined-behaviour sense. but i don't know fastmath, so i might be wrong.
yes, this is all purely down to Rust not doing ffast-math (on purpose)
writing a wrapper is not trivial, at all.
What Rust could do, though, is add a wrapped float type that the compiler forwards to LLVM saying "you can ffast-math these". That is the approach Rust tends to take for these kinds of things, though no plans are in the works yet to make floats vectorizable this way. Maybe we should start such plans?
It's worth explaining why. So far as I understand the situation, dependencies may assume and depend on the absence of ffast-math. Enabling it globally for a compilation is generally considered to be a footgun for this reason.
Edit: After some search I found these:
https://github.com/rust-lang/rust/issues/21690
https://doc.rust-lang.org/core/intrinsics/fn.fadd_fast.html
Writing a wrapper around f64 that uses these intrinsics shouldn't be too hard. I don't program in Rust though.
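For what it's worth, a sketch of what such a wrapper could look like. The real version would call the nightly-only `fadd_fast`/`fmul_fast` intrinsics from the link above inside the operator impls; plain arithmetic is used here so the sketch compiles on stable, with the intrinsic calls shown in comments. The `FastF64` name is invented for illustration:

```rust
use std::ops::{Add, Mul};

/// Hypothetical newtype that would opt its contents into fast-math.
#[derive(Clone, Copy, Debug, PartialEq)]
struct FastF64(f64);

impl Add for FastF64 {
    type Output = FastF64;
    fn add(self, rhs: FastF64) -> FastF64 {
        // On nightly this would be roughly:
        //   FastF64(unsafe { core::intrinsics::fadd_fast(self.0, rhs.0) })
        // The intrinsic's docs allow the optimizer to assume finite values,
        // so a *safe* wrapper would also have to enforce no NaN/infinity.
        FastF64(self.0 + rhs.0)
    }
}

impl Mul for FastF64 {
    type Output = FastF64;
    fn mul(self, rhs: FastF64) -> FastF64 {
        // nightly: core::intrinsics::fmul_fast
        FastF64(self.0 * rhs.0)
    }
}

fn main() {
    // With the real intrinsics, this fold is exactly the kind of reduction
    // the vectorizer could reassociate.
    let xs = [1.5, 2.0, 4.0].map(FastF64);
    let sum = xs.iter().copied().fold(FastF64(0.0), |acc, x| acc + x);
    assert_eq!(sum, FastF64(7.5));
}
```

The operator-overloading part really is straightforward; the hard part, as noted above, is deciding the semantics (which fast-math relaxations the type grants, and how it stays safe given the intrinsics' finiteness assumptions).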