
So the difference basically boils down to -ffast-math, right? Is there an equivalent in Rust?

Edit: After some search I found these:

https://github.com/rust-lang/rust/issues/21690

https://doc.rust-lang.org/core/intrinsics/fn.fadd_fast.html

Writing a wrapper around f64 that uses these intrinsics shouldn't be too hard. I don't program in Rust though.
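Something like this, perhaps (untested; I gather core_intrinsics requires nightly Rust, and these intrinsics are undefined behavior if an operand is NaN or infinite, so treat this as a sketch, not a finished design):

    #![feature(core_intrinsics)]
    use core::intrinsics::{fadd_fast, fmul_fast};
    use std::ops::{Add, Mul};

    #[derive(Clone, Copy, Debug, PartialEq)]
    struct FastF64(f64);

    impl Add for FastF64 {
        type Output = FastF64;
        fn add(self, rhs: FastF64) -> FastF64 {
            // Safety: caller must ensure both operands are finite.
            FastF64(unsafe { fadd_fast(self.0, rhs.0) })
        }
    }

    impl Mul for FastF64 {
        type Output = FastF64;
        fn mul(self, rhs: FastF64) -> FastF64 {
            // Safety: caller must ensure both operands are finite.
            FastF64(unsafe { fmul_fast(self.0, rhs.0) })
        }
    }

    // The fast flags let LLVM reassociate and vectorize this reduction:
    fn dot(xs: &[FastF64], ys: &[FastF64]) -> FastF64 {
        xs.iter().zip(ys).fold(FastF64(0.0), |acc, (&x, &y)| acc + x * y)
    }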




It doesn't exist yet, and it's not clear it should be replicated as-is. The fast-math flag does a bunch of related things that should probably be exposed separately so that it isn't a footgun in several situations. I'm also partial to exposing it per-function, so the control is actually in the hands of the people writing the code who know the context, rather than being subject to someone fiddling with compiler flags and getting incorrect code.

For this example you'd probably want -fassociative-math and not the other flags that may result in incorrect code. -ffast-math was not actually used in the clang compilation, and it's possible that the -fvectorize that was used picks a sensible mix of options.
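To see why reassociation matters, here's a minimal example (plain stable Rust, no fast-math involved) showing that IEEE 754 addition is not associative:

    fn main() {
        let (a, b, c) = (1e16_f64, -1e16_f64, 1.0_f64);
        println!("{}", (a + b) + c); // prints 1
        println!("{}", a + (b + c)); // prints 0: -1e16 + 1.0 rounds back to -1e16
    }

-fassociative-math licenses the compiler to pick either grouping, which is what allows sum reductions to be vectorized.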

Here's a preliminary discussion:

https://internals.rust-lang.org/t/pre-rfc-whats-the-best-way...

Trying to do an RFC process for this could be useful. The Rust development process seems to be pretty good at thinking deeply about these things.


> I'm also partial to exposing it per-function so the control is actually in the hands of the people writing the code that know the context

As a C++ programmer who routinely uses fast-math "until something breaks" with DSP code, I would find that capability very attractive.


This should probably be exposed as separate floating-point types, with relatively cheap conversions (mostly just error checks).


There's some conversation about exposing fast math on this internals thread: https://internals.rust-lang.org/t/pre-pre-rfc-floating-point...

Floating point code is really difficult to do correctly. LLVM doesn't actually model IEEE 754 correctly yet, although hopefully the remaining issues with the constrained intrinsics will be fixed by the end of the year (even then, sNaN support is likely to still be broken for some time).

What makes floating point more difficult than integers comes down to two things. First, there is an implicit dependency on the status register that greatly inhibits optimization, yet very few users actually care about that dependency. Second, there are many more properties that matter for floating point. For an integer, you essentially only care about three modes: wrapping (w/ optional carry flag), saturating, and no-overflow. Exposing these as separate types exhaustively is easy. For floating point, you have orthogonal concerns of rounding mode (including dynamic), no-NaN, no-infinity, denormals (including flush-to-zero support), contraction to form FMA, reciprocal approximation, associativity, acceptable precision loss (can I use an approximate inverse sqrt?), may-trap, and exception sticky bits. Since they're orthogonal, instead of a dozen types you'd need a few thousand types to cover the combinations, although many of those combinations are probably not going to be interesting.
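For the integer side, Rust already does exactly this today; a quick illustration (std only, stable Rust):

    use std::num::Wrapping;

    fn main() {
        // Overflow behavior exposed as distinct types/operations:
        let w = Wrapping(u8::MAX) + Wrapping(1); // wraps to 0
        let s = u8::MAX.saturating_add(1);       // clamps at 255
        let c = u8::MAX.checked_add(1);          // None on overflow
        println!("{} {} {:?}", w.0, s, c);
    }

The floating-point analogue would need a type per combination of the flags above, which is why a handful of wrapper types doesn't scale there.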


That's kind of at odds with Rust's guarantee that your code never breaks.


Technically Rust only guarantees memory safety (and only outside of unsafe {}). It has many features that aid in other kinds of safety - strongly encouraging you to handle Option<> and Result<>, requiring all match cases to be covered, allowing for lots of strategic immutability, etc. But it doesn't guarantee that kind of correctness.


That's not correct. Safe Rust is advertised as sound, and Rust defines that as "safe Rust programs do not exhibit undefined behavior". Undefined behavior is a much larger term than just memory safety, and includes things like thread safety, const safety, unwind safety, etc.


rust doesn't guarantee anything if you opt out of the guarantees. two examples that come to mind: unsafe and maybe bounds checks in release mode.

fastmath is probably different anyway, as the "breaking" is on a floating point logic level, as in: results become imprecise, but not exactly "wrong" - as in undefined behaviour. but i don't know fastmath, so i might be wrong.


(bounds checks are not removed in release mode; you have to use unsafe to avoid the bounds check)
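A tiny sketch of the two forms:

    fn first_two(v: &[f64]) -> f64 {
        // Indexing is bounds-checked even in --release; out of range panics:
        let a = v[0];
        // Opting out requires unsafe, and is UB if the index is out of range:
        let b = unsafe { *v.get_unchecked(1) };
        a + b
    }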

On the /r/rust thread, folks provided examples of why fastmath would produce UB in safe Rust.


ah, i think i meant overflow checks. and thanks for the pointer, i'll have a look.


ah yes, overflow checks are not on in release mode today. They may be in the future. And overflow isn't UB; it's two's complement wrapping.
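to make that concrete (the function indirection just keeps the compile-time overflow lint from firing):

    fn bump(x: u8) -> u8 {
        // Panics in debug builds; wraps to 0 in release builds:
        x + 1
    }

    fn main() {
        println!("{}", bump(u8::MAX));
        // Explicit wrapping behaves the same in both build modes:
        println!("{}", u8::MAX.wrapping_add(1)); // 0
    }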


Clang -Ofast implies -ffast-math.


yes, this is all purely down to Rust not doing ffast-math (on purpose)

writing a wrapper is not trivial, at all.

what Rust could do, though, is add a wrapper float type that the compiler forwards to LLVM saying "you can fast-math these". That is the approach Rust tends to take for these kinds of things, though no plans are in the works yet to make floats vectorizable this way. Maybe we should start such plans?


> on purpose

It's worth explaining why. So far as I understand the situation, dependencies may assume and depend on the absence of ffast-math. Enabling it globally for a compilation is generally considered to be a footgun for this reason.


Yes, and that doesn't surprise me.

In my experience -Ofast/-ffast-math yields very impressive results for FP code.

If you can tolerate platform-specific variation in the trailing parts of floating point numbers, it's wonderful.



